Copyright © 2025 Awesome Vector Databases. All rights reserved.

    Jina Reranker v2

    Transformer-based cross-encoder model fine-tuned for text reranking with Flash Attention 2 architecture. Features multilingual support for 100+ languages, function-calling capabilities, code search, and 6x speedup over v1 with only 278M parameters.


Information

Website: jina.ai
Published: Mar 26, 2026

Categories

Machine Learning Models

Tags

#reranker #multilingual #cross-encoder

    Similar Products

6 results

    Cohere Rerank v3.5

    State-of-the-art foundational model for ranking with 4096 context length and multilingual support for 100+ languages. Offers exceptional performance on BEIR benchmarks and specialized domains including finance, e-commerce, and enterprise search.

    Featured

    mxbai-rerank-base-v2

    A 0.5B parameter reranking model by Mixedbread AI that provides an excellent balance of speed and accuracy, supporting 100+ languages and processing up to 8K tokens with reinforcement learning training for enhanced search relevance.

    MS MARCO Cross-Encoder

    Popular cross-encoder reranker models trained on MS MARCO dataset for semantic search, providing superior accuracy in re-ranking the top results from bi-encoder retrieval systems.

    BGE-M3

    A versatile embedding model from BAAI that simultaneously supports dense retrieval, sparse retrieval, and multi-vector retrieval, with multilingual support for 100+ languages and multi-granularity processing from short sentences to 8192-token documents.

    Featured

    Qwen3 Embedding

    Multilingual embedding model supporting over 100 languages and ranking #1 on MTEB multilingual leaderboard. Offers flexible model sizes from 0.6B to 8B parameters with user-defined instructions.

    Featured

    Nomic Embed Text

    First fully reproducible open-source text embedding model with 8,192 context length. v2 introduces Mixture-of-Experts architecture for multilingual embeddings. Outperforms OpenAI models on benchmarks. This is an OSS model under Apache 2.0 license.

    Featured

    Overview

The Jina Reranker v2 (jina-reranker-v2-base-multilingual) is a transformer-based model fine-tuned for the text reranking task, a crucial component of many information retrieval systems.

    Architecture

It is a cross-encoder: the model takes a query and a document as a pair and outputs a score indicating the document's relevance to the query. The architecture is enhanced with Flash Attention 2, enabling direct comparison between queries and documents for more accurate relevance assessment.
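The cross-encoder flow can be sketched as follows. This is a toy illustration of the input shape only: the query is paired with each candidate document and every pair receives a score. The trivial lexical scorer below is a stand-in for the model's forward pass, not the model itself.

```python
# Illustrative sketch only: a cross-encoder scores each (query, document)
# pair jointly in one forward pass, unlike a bi-encoder, which embeds the
# query and documents separately. toy_score is a stand-in for the model.

def toy_score(query: str, document: str) -> float:
    """Stand-in relevance score: fraction of query words present in the document."""
    q_words = {w.strip(".,") for w in query.lower().split()}
    d_words = {w.strip(".,") for w in document.lower().split()}
    return len(q_words & d_words) / len(q_words) if q_words else 0.0

def rerank(query: str, documents: list[str]) -> list[tuple[str, float]]:
    """Score every (query, document) pair and sort by descending relevance."""
    scored = [(doc, toy_score(query, doc)) for doc in documents]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

docs = [
    "Flash Attention 2 speeds up transformer inference.",
    "A recipe for sourdough bread.",
    "Cross-encoder models rerank search results.",
]
ranked = rerank("cross-encoder rerank results", docs)
```

Because the query and document are processed together, the model can attend across both texts, which is what gives cross-encoders their accuracy edge over bi-encoders at the cost of scoring every pair.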

    Key Capabilities

    Multilingual Support

Provides multilingual retrieval for over 100 languages, along with function-calling support, code search capabilities, and a 6x speedup over v1.

    Performance

The model achieves state-of-the-art accuracy with only 278M parameters, less than half the size of bge-reranker-v2-m3 (567M parameters).

    Long Text Handling

The model handles long texts with a context length of up to 1024 tokens. For texts exceeding this limit, it uses a sliding-window approach: the input is chunked into smaller, overlapping pieces and each chunk is reranked separately.
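The sliding-window chunking described above can be sketched as follows. The window and stride values are illustrative, and "tokens" here are whitespace-separated words rather than the model's real tokenizer output.

```python
def sliding_window_chunks(text: str, window: int = 1024, stride: int = 512) -> list[str]:
    """Split text into overlapping chunks of at most `window` tokens.

    Tokens here are whitespace-separated words; the real model applies its
    own tokenizer against the 1024-token limit per query-document pair.
    """
    tokens = text.split()
    if len(tokens) <= window:
        return [text]
    chunks = []
    for start in range(0, len(tokens), stride):
        chunks.append(" ".join(tokens[start:start + window]))
        if start + window >= len(tokens):
            break  # the final window already covers the end of the text
    return chunks
```

Each chunk would then be scored against the query independently; a common strategy (an assumption here, not a documented detail) is to take the maximum chunk score as the document's overall relevance.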

    Benchmark Performance

The Jina Reranker v2 model has demonstrated competitive performance across a series of benchmarks targeting text retrieval, multilingual capability, function-calling-aware and text-to-SQL-aware reranking, and code retrieval tasks.

    Integration & Availability

    The model is available on Hugging Face and can be integrated with frameworks like sentence-transformers, LangChain, and Haystack, or accessed via Jina AI's Reranker API.
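A call to the Reranker API might look like the sketch below, using only the Python standard library. The endpoint path, payload fields, and header format are assumptions based on Jina AI's commonly documented rerank endpoint; verify them against the current API reference before use.

```python
import json
import urllib.request

# Hedged sketch: endpoint and payload fields are assumed from Jina AI's
# documented rerank API conventions; confirm against the live API reference.
API_URL = "https://api.jina.ai/v1/rerank"  # assumed endpoint

payload = {
    "model": "jina-reranker-v2-base-multilingual",
    "query": "What is Flash Attention?",
    "documents": [
        "Flash Attention is an IO-aware exact attention algorithm.",
        "Sourdough requires a fermented starter.",
    ],
    "top_n": 1,  # return only the best-scoring document
}

request = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <YOUR_JINA_API_KEY>",  # placeholder key
    },
    method="POST",
)
# urllib.request.urlopen(request) would send the call; the response JSON
# is expected to contain a ranked "results" list with relevance scores.
```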

    Use Cases

    • Agentic RAG applications
    • Multilingual search systems
    • Code search and retrieval
    • Function-calling systems
    • Text-to-SQL applications
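In RAG-style use cases like those above, the reranker typically sits between first-stage retrieval and generation: a fast retriever over-fetches candidates, and the reranker narrows them to the most relevant few. A minimal sketch, with simple lexical stand-ins for both stages (neither is the real model):

```python
# Minimal two-stage sketch of a RAG retrieval path: a cheap first-stage
# retriever over-fetches candidates, then a reranker stub narrows them
# down before generation. Both scorers are stand-ins, not the model.

CORPUS = {
    "d1": "Jina Reranker v2 supports over 100 languages.",
    "d2": "Bi-encoders embed queries and documents separately.",
    "d3": "Bananas are rich in potassium.",
}

def tokenize(text: str) -> set[str]:
    return {w.strip(".,?") for w in text.lower().split()}

def first_stage_retrieve(query: str, k: int = 3) -> list[str]:
    """Stand-in retriever: rank documents by word overlap with the query."""
    q = tokenize(query)
    return sorted(CORPUS, key=lambda d: -len(q & tokenize(CORPUS[d])))[:k]

def rerank_stub(query: str, doc_ids: list[str], top_n: int = 1) -> list[str]:
    """Stand-in for the cross-encoder: overlap normalized by document length."""
    q = tokenize(query)

    def score(d: str) -> float:
        toks = tokenize(CORPUS[d])
        return len(q & toks) / len(toks)

    return sorted(doc_ids, key=score, reverse=True)[:top_n]

query = "how many languages does the reranker support"
candidates = first_stage_retrieve(query, k=3)   # over-fetch candidates
best = rerank_stub(query, candidates, top_n=1)  # keep only the top hit
```

The design point is the division of labor: the first stage trades accuracy for speed over the whole corpus, while the expensive pairwise reranker only ever sees the short candidate list.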

    Pricing

    Available through Jina AI's API with consumption-based pricing.