nomic-embed-text-v2-moe is a multilingual Mixture of Experts (MoE) text embedding model that excels at multilingual retrieval, delivering state-of-the-art performance among models in the ~300M-parameter class.
Key features:

- Supports approximately 100 languages, with strong multilingual, cross-lingual, and code retrieval capabilities.
- Trained on over 1.6 billion text pairs, giving broad coverage across languages and domains.
- Supports flexible embedding dimensions through Matryoshka Embeddings, so vectors can be truncated to smaller dimensions without retraining (see the sketch after this list).
- Highly efficient for its size: state-of-the-art multilingual performance relative to other ~300M-parameter models.
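As a minimal sketch of how Matryoshka truncation is typically applied downstream: keep the leading dimensions of each vector and re-normalize so cosine similarity still behaves as expected. The `truncate_embeddings` helper and the 768/256 dimension values here are illustrative assumptions, not part of the model's API:

```python
import numpy as np

def truncate_embeddings(embeddings: np.ndarray, dim: int) -> np.ndarray:
    """Truncate Matryoshka embeddings to the first `dim` dimensions
    and re-normalize each vector to unit length."""
    truncated = embeddings[:, :dim]
    norms = np.linalg.norm(truncated, axis=1, keepdims=True)
    return truncated / np.clip(norms, 1e-12, None)

# Example: shrink hypothetical 768-dim vectors down to 256 dims.
full = np.random.randn(4, 768).astype(np.float32)  # stand-in for real embeddings
small = truncate_embeddings(full, 256)
print(small.shape)  # (4, 256)
```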
The model is free and open-source, and runs locally through Ollama.
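A hedged usage sketch against Ollama's local REST API, assuming the model has been pulled under the tag `nomic-embed-text-v2-moe` (check the library for the exact tag) and that the Ollama server is listening on its default port 11434:

```python
import requests

# Assumes `ollama pull nomic-embed-text-v2-moe` has already been run;
# the model tag name is an assumption, not confirmed by this page.
resp = requests.post(
    "http://localhost:11434/api/embed",
    json={
        "model": "nomic-embed-text-v2-moe",
        "input": ["Hello, world!", "Bonjour le monde !"],
    },
    timeout=60,
)
resp.raise_for_status()
embeddings = resp.json()["embeddings"]  # one vector per input string
print(len(embeddings), len(embeddings[0]))
```

Note that Nomic's embedding models generally expect task prefixes on inputs (e.g. `search_document:` for passages and `search_query:` for queries); consult the model card for the exact convention.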