QEMU microVM launcher for low-latency RVF cognitive containers in the RuVector stack, enabling secure on-device vector processing in edge AI environments.
nano-vectordb-rs
Minimal Rust library for fast on-device cosine similarity search with Rayon parallelism and embedded persistence, ideal for low-latency prototyping on edge hardware. Supports quick inserts and queries for real-time AI, and is lighter than full databases such as Qdrant Edge.
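To make the metric concrete, here is a plain-Rust sketch of the cosine similarity at the heart of a library like this. It is not nano-vectordb-rs's actual API (the real crate parallelizes queries with Rayon; this version is sequential and the function name is illustrative):

```rust
// Cosine similarity between two equal-length vectors:
// dot(a, b) / (|a| * |b|), with 0.0 returned for zero-norm inputs.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        0.0
    } else {
        dot / (norm_a * norm_b)
    }
}

fn main() {
    let q = [1.0f32, 0.0, 0.0];
    let v = [0.6f32, 0.8, 0.0];
    // dot = 0.6, |q| = 1.0, |v| = 1.0, so similarity = 0.60
    println!("{:.2}", cosine_similarity(&q, &v)); // prints 0.60
}
```

A brute-force scan with this function is often fast enough for small on-device collections; the library layers persistence and parallelism on top of the same idea.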
RuVector
Self-optimizing on-device vector database with HNSW, graph RAG, and WASM deployment for low-latency edge AI across browsers, IoT, and mobile. Supports real-time self-learning retrieval; lighter and offline-capable compared with cloud-hosted Qdrant.
ruvector-core
Rust core for high-performance on-device HNSW vector search with SIMD and compression, achieving low-latency multi-threaded queries for edge AI RAG. Reports up to 3,597 QPS; optimized for real-time workloads versus cloud alternatives.
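To illustrate the kind of SIMD-friendly distance kernel a core like this depends on, here is a generic chunked squared-L2 sketch. Splitting the loop into independent accumulator lanes lets the compiler auto-vectorize the hot path; this is a common pattern, not ruvector-core's actual code:

```rust
// Squared Euclidean distance with four independent accumulator lanes,
// a layout that encourages auto-vectorization of the inner loop.
fn squared_l2(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len());
    let mut acc = [0.0f32; 4];
    let chunks = a.len() / 4;
    for i in 0..chunks {
        for lane in 0..4 {
            let d = a[4 * i + lane] - b[4 * i + lane];
            acc[lane] += d * d;
        }
    }
    let mut total: f32 = acc.iter().sum();
    // Scalar tail for lengths not divisible by 4.
    for i in 4 * chunks..a.len() {
        let d = a[i] - b[i];
        total += d * d;
    }
    total
}

fn main() {
    let a = vec![1.0f32; 8];
    let b = vec![0.0f32; 8];
    println!("{}", squared_l2(&a, &b)); // prints 8
}
```

Real implementations go further with explicit SIMD intrinsics and quantized/compressed vectors, but the multi-lane accumulation idea is the same.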
tinyvector
Pure Rust embedding database served as a lightweight Axum server for low-latency on-device vector search, scaling to 100M+ vectors in memory. High accuracy and speed for edge RAG; simpler than Qdrant Edge.
DuckDB
Embeddable SQL OLAP engine whose VSS extension adds low-latency HNSW vector search over local files, ideal for edge AI prototyping and analytics. A SQL-first approach to on-device vector ops, in contrast to cloud vector DBs like Qdrant.
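A hedged sketch of what the SQL-first workflow looks like with the VSS extension; the table, column names, and sample data are made up for illustration:

```sql
-- Load the vector search extension.
INSTALL vss;
LOAD vss;

-- Fixed-size FLOAT arrays hold the embeddings.
CREATE TABLE docs (id INTEGER, emb FLOAT[3]);
INSERT INTO docs VALUES (1, [0.1, 0.2, 0.3]), (2, [0.9, 0.8, 0.7]);

-- Optional HNSW index for low-latency approximate search.
CREATE INDEX docs_hnsw ON docs USING HNSW (emb);

-- k-nearest-neighbour query by Euclidean distance.
SELECT id
FROM docs
ORDER BY array_distance(emb, [0.2, 0.2, 0.2]::FLOAT[3])
LIMIT 1;
```

Because it is just SQL over local files, the same query runs in a notebook, a CLI, or an embedded process with no server to operate.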
arroy
Rust library for low-latency on-device vector similarity search using random projection trees and LMDB storage, enabling efficient ANN on edge devices. Supports concurrent multi-process access for real-time AI apps; ideal for IoT and embedded systems compared with cloud alternatives like Qdrant.
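The random-projection idea behind Annoy-style trees like arroy's can be sketched in a few lines: each tree node picks a hyperplane and routes a vector left or right by the sign of its projection. The names and the fixed "random" hyperplane below are illustrative, not arroy's API (a real build draws the hyperplane randomly from the data):

```rust
// Dot product of two equal-length vectors.
fn dot(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

/// Route a vector at one tree node: false = left child, true = right child.
fn route(hyperplane: &[f32], v: &[f32]) -> bool {
    dot(hyperplane, v) >= 0.0
}

fn main() {
    // In practice this hyperplane would be randomly drawn per node.
    let plane = [1.0f32, -1.0];
    println!("{}", route(&plane, &[2.0, 1.0])); // dot =  1.0 -> true
    println!("{}", route(&plane, &[0.5, 2.0])); // dot = -1.5 -> false
}
```

Repeating this split recursively partitions the space; at query time only a few leaves are scanned, which is what keeps lookups fast on constrained edge hardware.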
rvf-launch is free and open source and supports edge vector workloads via microVMs.