    This is a demo directory website built with Ever Works


    RAGAS

    Research-backed RAG evaluation framework providing metrics for context precision, recall, faithfulness, and response relevancy to objectively measure LLM application performance.

    MTEB Leaderboard

    Massive Text Embedding Benchmark leaderboard covering 58 datasets across 112 languages and 8 embedding tasks. Industry-standard benchmark for comparing text embedding models.

    Context Precision

RAG evaluation metric assessing a retriever's ability to rank relevant chunks above irrelevant ones, measuring context relevance and ranking quality.
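As a rough illustration of the idea (a simplified sketch, not the exact RAGAS implementation), context precision can be computed as a rank-weighted average of precision@k over the positions that hold relevant chunks, so relevant chunks ranked near the top score higher than the same chunks ranked near the bottom:

```python
def context_precision(relevance: list) -> float:
    """Rank-weighted precision over a ranked list of 0/1 relevance labels.

    Averages precision@k at each position k where a relevant chunk appears,
    so the same set of relevant chunks scores higher when ranked earlier.
    """
    if not any(relevance):
        return 0.0
    score, hits = 0.0, 0
    for k, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            score += hits / k  # precision@k at this relevant position
    return score / hits

# Same two relevant chunks, different rankings:
print(context_precision([1, 1, 0, 0]))  # 1.0 (relevant chunks ranked first)
print(context_precision([0, 0, 1, 1]))  # (1/3 + 2/4) / 2 ≈ 0.4167
```

In practice the 0/1 relevance labels themselves come from an LLM judge comparing each chunk against the question and reference answer; here they are given directly.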

    Context Recall

RAG evaluation metric measuring whether the retrieved context contains all the information required to produce the ideal output, assessing the completeness and sufficiency of retrieval.

    Faithfulness

    RAG evaluation metric measuring whether generated answers accurately align with retrieved context without hallucination, ensuring factual grounding of LLM responses.
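Context recall and faithfulness share one mechanism and differ only in which text supplies the claims: recall checks claims from the reference (ground-truth) answer against the context, while faithfulness checks claims from the generated answer. A toy sketch (with judge verdicts hard-coded; a real pipeline would obtain them from an LLM):

```python
def support_ratio(claims, supported):
    """Fraction of claims judged to be supported by the retrieved context.

    `supported` maps each claim to a True/False verdict; in practice these
    verdicts would come from an LLM judge, here they are supplied directly.
    """
    if not claims:
        return 0.0
    return sum(supported[c] for c in claims) / len(claims)

# Context recall: claims extracted from the reference answer.
reference_claims = ["Paris is the capital", "population ~2.1M"]
# Faithfulness: claims extracted from the generated answer.
answer_claims = ["Paris is the capital", "population 10M"]

judged = {  # toy judge verdicts: claim -> supported by retrieved context?
    "Paris is the capital": True,
    "population ~2.1M": True,
    "population 10M": False,  # hallucinated figure, not in the context
}

print(support_ratio(reference_claims, judged))  # 1.0  (context recall)
print(support_ratio(answer_claims, judged))     # 0.5  (faithfulness)
```

A recall of 1.0 with a faithfulness of 0.5 localizes the problem to generation (a hallucinated claim) rather than retrieval.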

    VectorDBBench

Open-source repository containing the implementation, configuration, and scripts of VectorDBBench, enabling users to run standardized benchmarks across multiple vector database systems locally or in CI.

    SISAP Indexing Challenge

    An annual competition focused on similarity search and indexing algorithms, including approximate nearest neighbor methods and high-dimensional vector indexing, providing benchmarks and results relevant to vector database research.

    ANN-Benchmarks

    ANN-Benchmarks is a benchmarking platform specifically for evaluating the performance of approximate nearest neighbor (ANN) search algorithms, which are foundational to vector database evaluation and comparison.
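The core quality metric in ANN benchmarking is recall@k: the fraction of the true k nearest neighbours that the approximate index returns. A minimal sketch, using a random half-scan as a stand-in for a real ANN index (HNSW, IVF, etc.):

```python
import math
import random

def recall_at_k(approx_ids, exact_ids):
    """Fraction of the true k nearest neighbours found by the ANN search."""
    return len(set(approx_ids) & set(exact_ids)) / len(exact_ids)

random.seed(0)
data = [[random.gauss(0, 1) for _ in range(16)] for _ in range(1000)]
query = [random.gauss(0, 1) for _ in range(16)]

# Ground truth: exact 10-NN by brute force over every vector.
exact = sorted(range(1000), key=lambda i: math.dist(data[i], query))[:10]

# Crude "approximate" search: probe only a random half of the dataset,
# standing in for a real ANN index that trades recall for speed.
subset = random.sample(range(1000), 500)
approx = sorted(subset, key=lambda i: math.dist(data[i], query))[:10]

print(f"recall@10 = {recall_at_k(approx, exact):.2f}")
```

ANN-Benchmarks plots exactly this recall against queries-per-second, so each algorithm traces out its speed/accuracy trade-off curve.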

    BEIR

    BEIR (Benchmarking IR) is a benchmark suite for evaluating information retrieval and vector search systems across multiple tasks and datasets. Useful for comparing vector database performance.
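BEIR's headline metric is nDCG@10, which rewards systems for ranking the most relevant documents highest. A self-contained sketch of the computation over graded relevance labels:

```python
import math

def dcg(relevances):
    """Discounted cumulative gain over a ranked list of graded relevances."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg_at_k(ranked_rels, k):
    """DCG of the system's top-k ranking, normalized by the ideal ranking."""
    ideal = sorted(ranked_rels, reverse=True)
    denom = dcg(ideal[:k])
    return dcg(ranked_rels[:k]) / denom if denom else 0.0

# A ranking that puts the highly relevant document (grade 2) first
# scores higher than one that buries it at the bottom.
print(ndcg_at_k([2, 1, 0, 0], k=4))            # 1.0 (ideal ordering)
print(round(ndcg_at_k([0, 1, 0, 2], k=4), 3))  # ≈ 0.567
```

The logarithmic discount is what makes nDCG sensitive to ranking quality rather than just retrieval hit rate.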

Intel Labs' Vector Search Datasets

    A collection of datasets curated by Intel Labs specifically for evaluating and benchmarking vector search algorithms and databases.

    MTEB: Massive Text Embedding Benchmark

    A massive text embedding benchmark for evaluating the quality of text embedding models, crucial for vector database applications.
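At the heart of MTEB-style evaluation is a simple loop: embed queries and documents, score pairs by cosine similarity, and check whether the nearest document is the gold one. A toy sketch with hand-made 3-d vectors standing in for real model embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Toy 3-d "embeddings"; a real run would embed an MTEB task's queries
# and corpus with the model under evaluation.
docs = {
    "cats":    [0.9, 0.1, 0.0],
    "finance": [0.0, 0.2, 0.95],
    "space":   [0.1, 0.9, 0.1],
}
queries = {
    "kitten care":   ([0.85, 0.15, 0.05], "cats"),
    "stock markets": ([0.05, 0.10, 0.90], "finance"),
}

hits = 0
for q, (emb, gold) in queries.items():
    best = max(docs, key=lambda d: cosine(emb, docs[d]))
    hits += best == gold
print(f"accuracy@1 = {hits / len(queries):.2f}")  # 1.00 on this toy data
```

MTEB runs this pattern (with task-appropriate metrics such as nDCG or Spearman correlation) across its full battery of datasets to rank embedding models.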

    All product names, logos, and brands are the property of their respective owners. All company, product, and service names used in this repository, related repositories, and associated websites are for identification purposes only. The use of these names, logos, and brands does not imply endorsement, affiliation, or sponsorship. This directory may include content generated by artificial intelligence.
Copyright © 2025 Awesome Vector Databases. All rights reserved.