Copyright © 2025 Awesome Vector Databases. All rights reserved.

    NVIDIA NIM

    Accelerated inference microservices that allow organizations to run AI models on NVIDIA GPUs anywhere with optimized inference engines, industry-standard APIs, and runtime dependencies in enterprise-grade containers.


Information

Website: www.nvidia.com
Published: Mar 24, 2026

Categories

LLM Tools

Tags

#inference #microservices #gpu

    Overview

    NVIDIA NIM is a set of accelerated inference microservices that allow organizations to run AI models on NVIDIA GPUs anywhere—in the cloud, data center, workstations, and PCs.

    Key Features

    NIM microservices come with everything AI teams need—the latest AI foundation models, optimized inference engines, industry-standard APIs, and runtime dependencies—prepackaged in enterprise-grade software containers ready to deploy and scale anywhere.

NIM microservices expose industry-standard APIs for simple integration into AI applications, development frameworks, and workflows. They also optimize response latency and throughput for each combination of foundation model and GPU.
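The "industry-standard APIs" above follow the OpenAI chat-completions request shape. A minimal sketch of building such a request for a locally deployed NIM container follows; the endpoint URL and model identifier are assumptions for illustration, and the actual model name depends on which NIM container you deployed.

```python
import json

# Hypothetical local endpoint; a running NIM container typically serves an
# OpenAI-compatible /v1/chat/completions route (URL assumed for illustration).
NIM_URL = "http://localhost:8000/v1/chat/completions"

# Standard OpenAI-style request body; the model identifier below is
# illustrative, not a confirmed NIM model name.
payload = {
    "model": "meta/llama3-8b-instruct",
    "messages": [
        {"role": "user", "content": "Summarize what an inference microservice is."}
    ],
    "max_tokens": 64,
}

# Serialize the body as it would be sent over HTTP.
body = json.dumps(payload)
print(body)

# To actually send the request (requires a running NIM container):
# import urllib.request
# req = urllib.request.Request(
#     NIM_URL, data=body.encode(), headers={"Content-Type": "application/json"}
# )
# print(urllib.request.urlopen(req).read().decode())
```

Because the request shape is the standard OpenAI one, existing OpenAI-compatible client libraries can usually be pointed at the NIM endpoint instead of hand-rolling HTTP calls.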

    Recent 2026 Developments

In January 2026, NVIDIA announced NIM microservices for AI models that can generate OpenUSD language to answer user queries, generate OpenUSD Python code, apply materials to 3D objects, and understand 3D space and physics, helping accelerate digital twin development.

    The new USD-focused NIM microservices include:

    • USD Code NIM microservice — answers general knowledge OpenUSD questions and automatically generates OpenUSD-Python code based on text prompts
    • USD Search NIM microservice — enables developers to search through massive libraries of OpenUSD, 3D, and image data using natural language or image inputs
    • USD Layout, USD SmartMaterial, and various fVDB (physics and rendering) microservices
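Responses from these microservices follow the same OpenAI chat-completions schema as the request side. A hedged sketch of pulling generated OpenUSD code out of such a response; the response content here is invented for illustration, and only the `choices[0].message.content` access path is taken from the standard schema.

```python
# A response in the OpenAI chat-completions shape; the "content" string is a
# made-up stand-in for code a USD-focused microservice might return.
sample_response = {
    "id": "chatcmpl-123",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "from pxr import Usd, UsdGeom\n# ...generated OpenUSD code...",
            },
            "finish_reason": "stop",
        }
    ],
}


def extract_generated_code(response: dict) -> str:
    """Pull the assistant's text out of an OpenAI-style chat response."""
    return response["choices"][0]["message"]["content"]


print(extract_generated_code(sample_response))
```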

    Enterprise Adoption

Foxconn, a global manufacturing leader with more than 170 factories worldwide, is already benefiting from NVIDIA's computing platform, using NIM microservices and Omniverse in its operations.

    Pricing

Enterprise licensing; contact NVIDIA for details.