
    voyage-4

The latest Voyage AI embedding model family, featuring a shared embedding space and a Mixture-of-Experts (MoE) architecture, with flexible output dimensions and advanced quantization options for cost optimization.

    Overview

Released in January 2026, the Voyage 4 model family is the latest generation of embedding models from Voyage AI, featuring a shared embedding space across model sizes and a Mixture-of-Experts (MoE) architecture.

    Models

    • voyage-4-large: Highest accuracy for demanding applications
    • voyage-4: Balanced performance and efficiency
    • voyage-4-lite: Optimized for speed and cost

    Features

    • Shared embedding space across all model sizes
    • MoE architecture for improved efficiency
    • Flexible output dimensions: 2048, 1024, 512, and 256
    • Multiple quantization options:
      • 32-bit floating point
      • Signed and unsigned 8-bit integer
      • Binary precision
    • Matryoshka learning support
    • Minimal quality loss with quantization
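
Matryoshka support means the leading dimensions of an embedding carry most of the signal, so a full-width vector can be truncated to any of the smaller sizes listed above and re-normalized. A minimal NumPy sketch with a synthetic stand-in vector (illustrative only, not a call to the actual Voyage API):

```python
import numpy as np

def truncate_matryoshka(embedding: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components and re-normalize to unit length.

    Matryoshka-trained models concentrate the most useful information
    in the leading dimensions, so the truncated prefix stays usable.
    """
    prefix = embedding[:dim]
    return prefix / np.linalg.norm(prefix)

# Synthetic 2048-d stand-in for a full-size model output.
rng = np.random.default_rng(0)
full = rng.standard_normal(2048)
full /= np.linalg.norm(full)

for dim in (2048, 1024, 512, 256):  # the output sizes listed above
    vec = truncate_matryoshka(full, dim)
    assert abs(np.linalg.norm(vec) - 1.0) < 1e-6  # still unit length
```

Shorter prefixes trade a little retrieval quality for proportionally smaller storage and faster similarity search.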

    Performance

    • State-of-the-art accuracy across benchmarks
    • Optimized for production deployments
    • Significant cost savings through quantization
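
The cost savings come largely from storing embeddings at lower precision. A rough sketch of generic scalar (int8) and binary quantization, showing only the storage reduction (not Voyage's actual quantization scheme):

```python
import numpy as np

def quantize_int8(vecs: np.ndarray) -> tuple[np.ndarray, float]:
    """Scalar-quantize float32 vectors to signed int8 with one global scale."""
    scale = float(np.abs(vecs).max()) / 127.0
    q = np.clip(np.round(vecs / scale), -127, 127).astype(np.int8)
    return q, scale

def quantize_binary(vecs: np.ndarray) -> np.ndarray:
    """Binary precision: keep only the sign of each dimension, 8 per byte."""
    return np.packbits(vecs > 0, axis=-1)

rng = np.random.default_rng(1)
emb = rng.standard_normal((1000, 1024)).astype(np.float32)  # 1000 vectors

i8, scale = quantize_int8(emb)
packed = quantize_binary(emb)

print(emb.nbytes // i8.nbytes)      # → 4  (int8 is 4x smaller than float32)
print(emb.nbytes // packed.nbytes)  # → 32 (binary is 32x smaller)
```

At scale, a 32x reduction from binary precision dominates vector-store costs, which is why minimal-quality-loss quantization matters for production deployments.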

    Pricing

Tiered pricing based on model size and usage volume.


    Information

Website: blog.voyageai.com
Published: Mar 10, 2026

    Categories

• Machine Learning Models

    Tags

#embeddings #multilingual #quantization

    Similar Products


    voyage-3-large

    State-of-the-art general-purpose and multilingual embedding model from Voyage AI that ranks first across eight domains spanning 100 datasets, outperforming OpenAI and Cohere models by significant margins.


    Qwen3 Embedding

Multilingual embedding model supporting over 100 languages and ranking #1 on the MTEB multilingual leaderboard. Offers flexible model sizes from 0.6B to 8B parameters with user-defined instructions.


    Cohere Embed Multilingual v3

    High-performance multilingual embedding model from Cohere supporting 100+ languages with 1024 dimensions, optimized for semantic search, RAG, and cross-lingual retrieval tasks.

    Mistral Embed

    State-of-the-art embedding model from Mistral AI that generates 1024-dimensional vectors for text, supporting semantic search, clustering, and retrieval-augmented generation applications.

    BGE-M3

    A versatile multilingual text embedding model from BAAI that supports 100+ languages and can handle inputs up to 8192 tokens. BGE-M3 is unique in supporting three retrieval methods simultaneously: dense retrieval, multi-vector retrieval, and sparse retrieval.

    gte-Qwen2-1.5B-instruct

    A state-of-the-art multilingual text embedding model from Alibaba's GTE (General Text Embedding) series, built on the Qwen2-1.5B LLM. The model supports up to 8192 tokens and incorporates bidirectional attention mechanisms for enhanced contextual understanding across diverse domains.