

NVIDIA NIM: accelerated inference microservices that let organizations run AI models on NVIDIA GPUs anywhere, with optimized inference engines, industry-standard APIs, and runtime dependencies packaged in enterprise-grade containers.
NVIDIA NIM is a set of accelerated inference microservices that allow organizations to run AI models on NVIDIA GPUs anywhere—in the cloud, data center, workstations, and PCs.
NIM microservices come with everything AI teams need—the latest AI foundation models, optimized inference engines, industry-standard APIs, and runtime dependencies—prepackaged in enterprise-grade software containers ready to deploy and scale anywhere.
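As an illustration of that container-based packaging, the sketch below launches a NIM container with the Docker SDK for Python. The image name, port, shared-memory size, and the NGC_API_KEY environment variable are assumptions for the example, not details confirmed in this text; consult NVIDIA's documentation for the exact values.

```python
# Minimal sketch: starting a NIM container with the Docker SDK for Python.
# The image name, port, and environment variables below are illustrative assumptions.
import os
import docker

client = docker.from_env()

container = client.containers.run(
    "nvcr.io/nim/meta/llama-3.1-8b-instruct:latest",  # assumed image name
    detach=True,
    environment={"NGC_API_KEY": os.environ["NGC_API_KEY"]},  # assumed auth variable
    ports={"8000/tcp": 8000},  # expose the inference API on localhost:8000
    device_requests=[  # pass all available NVIDIA GPUs into the container
        docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
    ],
    shm_size="16g",
)
print(f"NIM container started: {container.short_id}")
```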
NIM microservices expose industry-standard APIs for simple integration into AI applications, development frameworks, and workflows, and they optimize response latency and throughput for each combination of foundation model and GPU.
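To illustrate that industry-standard API surface, here is a minimal sketch of a chat completion call against a locally deployed NIM LLM microservice, assuming it exposes an OpenAI-compatible endpoint at http://localhost:8000/v1; the model name meta/llama-3.1-8b-instruct and the endpoint URL are illustrative assumptions.

```python
# Minimal sketch: calling a NIM microservice through an OpenAI-compatible API.
# The endpoint URL and model name are assumptions for this example.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",       # assumed local NIM endpoint
    api_key="not-used-for-local-deployments",  # placeholder key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # illustrative model name
    messages=[{"role": "user", "content": "Summarize what a digital twin is in one sentence."}],
    max_tokens=128,
    temperature=0.2,
)
print(response.choices[0].message.content)
```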
In January 2026, NVIDIA announced NIM microservices for AI models that can generate OpenUSD-based language to answer user queries, generate OpenUSD Python code, apply materials to 3D objects, and understand 3D space and physics, helping accelerate digital twin development.
The announcement introduced several new USD-focused NIM microservices.
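As a concrete illustration of the kind of OpenUSD Python code these models are described as generating, the sketch below applies a simple material to a 3D object using the open-source usd-core (pxr) package; the prim paths, the red preview-surface material, and the output file name are assumptions for the example, not part of the announcement.

```python
# Minimal sketch of OpenUSD Python code of the kind such a model might generate:
# create a cube and bind a red UsdPreviewSurface material to it.
# Requires the usd-core package; prim paths and the file name are illustrative.
from pxr import Usd, UsdGeom, UsdShade, Sdf

stage = Usd.Stage.CreateNew("cube_with_material.usda")
cube = UsdGeom.Cube.Define(stage, "/World/Cube")

material = UsdShade.Material.Define(stage, "/World/Materials/Red")
shader = UsdShade.Shader.Define(stage, "/World/Materials/Red/PreviewSurface")
shader.CreateIdAttr("UsdPreviewSurface")
shader.CreateInput("diffuseColor", Sdf.ValueTypeNames.Color3f).Set((1.0, 0.0, 0.0))
material.CreateSurfaceOutput().ConnectToSource(shader.ConnectableAPI(), "surface")

# Bind the material to the cube prim.
UsdShade.MaterialBindingAPI.Apply(cube.GetPrim()).Bind(material)

stage.GetRootLayer().Save()
```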
Foxconn, a global manufacturing leader with more than 170 factories worldwide, is already benefiting from NVIDIA's computing platform, using NIM microservices and Omniverse in its operations.
NIM microservices are available under enterprise licensing; contact NVIDIA for details.