All product names, logos, and brands are the property of their respective owners and are used for identification purposes only; their use does not imply endorsement, affiliation, or sponsorship. This directory may include content generated by artificial intelligence.
Copyright © 2025 Awesome Vector Databases. All rights reserved.

    EmbeddixDB

    High-performance vector database designed for LLM memory and RAG applications. Provides an MCP (Model Context Protocol) server for seamless integration with AI assistants like Claude, and a REST API for traditional applications. Supports HNSW and flat indexes with 256x memory compression via quantization, flexible storage backends, auto-embedding, and advanced analytics.


    Information

Website: github.com
Published: Apr 7, 2026

    Categories

Vector Database Engines

    Tags

#open-source #hnsw #rag #mcp

    Similar Products


    Trieve

    Trieve provides an all-in-one infrastructure for vector search, recommendations, retrieval-augmented generation (RAG), and analytics, accessible via API for seamless integration.

    micro-hnsw-wasm

    WASM library for brain-inspired neuromorphic HNSW vector search in 11.8KB. Optimized for edge devices with spiking neurons for energy-efficient similarity search.

    ruvector-core

    Core Rust crate for the RuVector vector database engine featuring HNSW indexing, SIMD acceleration, and adaptive compression for high-performance similarity search. Supports multi-threaded queries achieving up to 3,597 QPS with 100% recall on 50K vectors. Ideal for AI applications requiring low-latency retrieval in RAG pipelines and agent memory systems.

    RAGFlow

    An open-source RAG engine that provides end-to-end document understanding with automated chunking, embedding, retrieval and generation pipeline, supporting multiple document formats and LLM backends.

    Unstructured

    Open-source library for preprocessing unstructured documents (PDFs, Word, HTML, images) for RAG and LLM applications. Handles extraction, chunking, and cleaning of diverse document types.

    Canopy

    Open-source Retrieval Augmented Generation (RAG) framework and context engine powered by Pinecone, providing automatic chunking, embedding, chat history management, and query optimization.

    Overview

EmbeddixDB is a vector database optimized for AI applications, featuring both an MCP server for AI assistants and a REST API for traditional applications.

    Features

    • MCP Server: Direct integration with Claude and other AI assistants via Model Context Protocol
    • Vector Search: HNSW and flat indexes with 256x memory compression via quantization
    • Flexible Storage: In-memory, BoltDB, or BadgerDB persistence backends
    • Auto-Embedding: Automatic text-to-vector conversion with Ollama or ONNX models
    • Advanced Analytics: Sentiment analysis, entity extraction, and topic modeling
    • Real-time Operations: Live vector insertion, updates, and deletion
    • High Performance: ~65,000 queries/sec on an M1 MacBook Pro; on an M4 Pro, ~25,374 queries/sec for search and ~32,113 vectors/sec for inserts
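To put the 256x compression figure in perspective, here is a back-of-the-envelope sketch; the corpus size and dimensionality are made up for illustration, and the quantization mechanism itself is not detailed here:

```go
package main

import "fmt"

// indexBytes returns the raw and the 256x-compressed size, in bytes,
// of n float32 vectors of the given dimensionality.
func indexBytes(n, dims int) (raw, compressed int) {
	raw = n * dims * 4 // 4 bytes per float32 component
	return raw, raw / 256
}

func main() {
	// Hypothetical corpus: one million 768-dimensional embeddings.
	raw, compressed := indexBytes(1_000_000, 768)
	fmt.Printf("raw: %.1f GB, compressed: %.1f MB\n",
		float64(raw)/1e9, float64(compressed)/1e6)
	// prints: raw: 3.1 GB, compressed: 12.0 MB
}
```

At that ratio, an index that would otherwise need a few gigabytes of RAM fits comfortably in a process heap, which is what makes in-memory serving of large corpora practical.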

    Available MCP Tools:

    • create_collection
    • add_vectors
    • search_vectors
    • get_vector
    • delete_vector
    • list_collections
    • delete_collection

    REST API endpoints cover collections, documents, and search, with OpenAPI documentation.
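For illustration, the snippet below builds (but does not send) a create-collection request. The `/api/v1/collections` path and the payload fields are assumptions for the sketch; the server's OpenAPI docs define the real routes and schema:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// newCreateCollectionRequest builds a request against a hypothetical
// /api/v1/collections endpoint with a hypothetical JSON payload.
func newCreateCollectionRequest(baseURL string) (*http.Request, error) {
	body := bytes.NewBufferString(`{"name": "notes", "dimension": 768}`)
	req, err := http.NewRequest(http.MethodPost, baseURL+"/api/v1/collections", body)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := newCreateCollectionRequest("http://localhost:8080")
	if err != nil {
		panic(err)
	}
	// To actually send it: resp, err := http.DefaultClient.Do(req)
	fmt.Println(req.Method, req.URL.Path)
	// prints: POST /api/v1/collections
}
```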

    Architecture

    • MCP Server: Stdio-based server implementing the Model Context Protocol
    • Vector Store: HNSW and flat indexes with quantization
    • Persistence: Pluggable backends (Memory, BoltDB, BadgerDB)
    • AI Integration: ONNX Runtime and Ollama for embeddings
    • REST API: HTTP API with OpenAPI documentation
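To clarify the flat-index half of the vector store: a flat index is simply an exhaustive, exact scan over every stored vector, in contrast to HNSW's approximate graph traversal. A minimal conceptual sketch (not EmbeddixDB's implementation) of brute-force cosine search:

```go
package main

import (
	"fmt"
	"math"
)

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float32) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += float64(a[i]) * float64(b[i])
		na += float64(a[i]) * float64(a[i])
		nb += float64(b[i]) * float64(b[i])
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// flatSearch scans every stored vector and returns the index of the
// best match: O(n*d), exact, no index structure needed.
func flatSearch(store [][]float32, query []float32) int {
	best, bestScore := -1, math.Inf(-1)
	for i, v := range store {
		if s := cosine(v, query); s > bestScore {
			best, bestScore = i, s
		}
	}
	return best
}

func main() {
	store := [][]float32{
		{1, 0, 0},
		{0, 1, 0},
		{0.9, 0.1, 0},
	}
	fmt.Println(flatSearch(store, []float32{0.9, 0.15, 0}))
	// prints: 2
}
```

Flat search guarantees exact results and suits small collections; HNSW trades a little recall for sub-linear query time on large ones, which is why engines commonly offer both.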

    Use Cases

    • LLM Memory Storage: Store conversation history and user preferences
    • RAG: Index documents and retrieve relevant chunks
    • Tool Learning: Track successful tool usage patterns

    Pricing

    Free and open-source under the MIT License.