Copyright © 2025 Awesome Vector Databases. All rights reserved.

    Cohere Rerank v3.5

State-of-the-art foundational model for ranking, with a 4096-token context length and multilingual support for 100+ languages. Offers exceptional performance on BEIR benchmarks and in specialized domains including finance, e-commerce, and enterprise search.

    Overview

Cohere Rerank 3.5 is Cohere's newest and most performant foundational model for ranking, with a context length of 4096 tokens and state-of-the-art performance on multilingual retrieval and reasoning tasks.

    Multilingual Support

The model can comprehend and analyze enterprise data and user questions across over 100 languages, including Arabic, Chinese, English, French, German, Hindi, Japanese, Korean, Portuguese, Russian, and Spanish, and offers state-of-the-art accuracy in these global business languages.

    Key Features

    • Context length of 4096 tokens
    • SOTA performance on BEIR and domains such as Finance, E-commerce, Hospitality, Project Management, and Email/Messaging Retrieval tasks
    • Single multilingual model (rerank-v3.5), unlike version 3.0 which had separate English-only and multilingual models
    • V2 API with enhanced capabilities
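The V2 API can be exercised with a plain HTTP POST. The sketch below builds a request body for the v2 rerank endpoint using only the standard library; the field names (`model`, `query`, `documents`, `top_n`) follow Cohere's documented schema, but verify them against the current API reference before relying on this.

```python
import json
import os
import urllib.request

# Request body for the v2 Rerank endpoint (https://api.cohere.com/v2/rerank).
# Field names follow Cohere's documented schema; check the current API docs.
payload = {
    "model": "rerank-v3.5",
    "query": "What is the capital of France?",
    "documents": [
        "Paris is the capital and largest city of France.",
        "Berlin is the capital of Germany.",
        "The Eiffel Tower is located in Paris.",
    ],
    "top_n": 2,  # return only the two highest-scoring documents
}

def send_rerank(payload: dict, api_key: str) -> dict:
    """POST the payload to the v2 Rerank endpoint and return the parsed JSON."""
    req = urllib.request.Request(
        "https://api.cohere.com/v2/rerank",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    key = os.environ.get("COHERE_API_KEY")
    if key:  # only call the live API when a key is configured
        result = send_rerank(payload, key)
        for item in result["results"]:
            print(item["index"], item["relevance_score"])
```

The response lists each document's original index together with a relevance score, so the caller can reorder its own candidate set without resending document text.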

    Performance

    Achieves state-of-the-art results on standard benchmarks:

    • Best performance on multilingual retrieval tasks
    • Superior reasoning capabilities for complex queries
    • Specialized optimization for enterprise domains

    API and Availability

    Along with the model, Cohere released V2 of the Rerank API. The model is available on multiple platforms:

    • Amazon Bedrock
    • Azure AI Foundry
    • Oracle Cloud
    • Heroku
    • Cohere Platform

    Use Cases

    • Enterprise search across multilingual content
    • E-commerce product ranking
    • Financial document retrieval
    • Customer support systems
    • Email and messaging search
    • RAG (Retrieval Augmented Generation) applications

    Pricing (2026)

    Rerank 3.5 costs $2.00 per 1,000 searches, where a single search counts as one query with up to 100 documents to rank.
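Under that billing rule, a rough cost estimator can be sketched as below. The assumption that a query with more than 100 candidate documents bills as multiple search units (one per 100 documents, rounded up) is ours; confirm it against Cohere's pricing documentation.

```python
import math

PRICE_PER_1000_SEARCHES = 2.00  # USD, Rerank 3.5 list price quoted above

def search_units(num_documents: int) -> int:
    """One search unit covers one query with up to 100 documents; larger
    candidate sets are assumed (unconfirmed) to bill as multiple units."""
    return max(1, math.ceil(num_documents / 100))

def estimated_cost(queries: list[int]) -> float:
    """Estimated USD cost for a batch of queries, given each query's
    candidate-document count."""
    units = sum(search_units(n) for n in queries)
    return units * PRICE_PER_1000_SEARCHES / 1000

# 10,000 queries, each reranking 80 candidates: 10,000 units -> $20.00
print(f"${estimated_cost([80] * 10_000):.2f}")
```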

    Integration

    Easy integration with popular frameworks including LangChain, LlamaIndex, and Haystack.
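All three integrations wrap the same two-stage pattern: a cheap first-stage retriever over-fetches candidates, then the reranker reorders them and keeps the best few. A framework-agnostic sketch, with a hypothetical `toy_relevance` word-overlap scorer standing in for the actual rerank API call:

```python
import re

def toy_relevance(query: str, document: str) -> float:
    """Hypothetical stand-in for the rerank API: fraction of query words
    that appear in the document."""
    q = set(re.findall(r"\w+", query.lower()))
    d = set(re.findall(r"\w+", document.lower()))
    return len(q & d) / max(1, len(q))

def rerank(query: str, documents: list[str], top_n: int) -> list[str]:
    """Second stage: reorder candidates by relevance and keep the top_n."""
    ranked = sorted(documents, key=lambda d: toy_relevance(query, d), reverse=True)
    return ranked[:top_n]

# First stage would normally return these from a vector or keyword index.
candidates = [
    "Berlin is the capital of Germany.",
    "Paris is the capital of France.",
    "The Louvre is a museum in Paris.",
]
print(rerank("capital of France", candidates, top_n=1))
# prints ['Paris is the capital of France.']
```

In a real pipeline the framework integration replaces `toy_relevance` with a call to the rerank endpoint and feeds the surviving documents to the generation step of a RAG application.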


    Information

Website: docs.cohere.com
Published: Mar 26, 2026

    Categories

Machine Learning Models

    Tags

#reranker #multilingual #enterprise

    Similar Products

6 results

    Jina Reranker v2

    Transformer-based cross-encoder model fine-tuned for text reranking with Flash Attention 2 architecture. Features multilingual support for 100+ languages, function-calling capabilities, code search, and 6x speedup over v1 with only 278M parameters.

    mxbai-rerank-base-v2

    A 0.5B parameter reranking model by Mixedbread AI that provides an excellent balance of speed and accuracy, supporting 100+ languages and processing up to 8K tokens with reinforcement learning training for enhanced search relevance.

    BGE-M3

    A versatile embedding model from BAAI that simultaneously supports dense retrieval, sparse retrieval, and multi-vector retrieval, with multilingual support for 100+ languages and multi-granularity processing from short sentences to 8192-token documents.


    Qwen3 Embedding

    Multilingual embedding model supporting over 100 languages and ranking #1 on MTEB multilingual leaderboard. Offers flexible model sizes from 0.6B to 8B parameters with user-defined instructions.


    Nomic Embed Text

First fully reproducible open-source text embedding model, with an 8,192-token context length. v2 introduces a Mixture-of-Experts architecture for multilingual embeddings and outperforms OpenAI models on benchmarks. Released as open source under the Apache 2.0 license.


    mGTE

    Generalized long-context text representation and reranking models from Alibaba supporting 75 languages and context length up to 8192. Built on transformer++ encoder with RoPE and GLU for enhanced multilingual retrieval.