
    Faster Maximum Inner Product Search in High Dimensions

    A 2022 research paper presenting algorithms for faster MIPS (Maximum Inner Product Search) in high-dimensional spaces. MIPS is crucial for recommendation systems, neural networks, and various machine learning applications.


    About this tool

    Overview

    Published as arXiv:2212.07551, this paper addresses Maximum Inner Product Search (MIPS) in high-dimensional spaces—a problem closely related to but distinct from nearest neighbor search.

    What is MIPS?

    Maximum Inner Product Search finds vectors with the highest inner product (dot product) with a query vector:

    • Different from nearest neighbor search, which ranks by distance rather than by inner product
    • Important for recommendation systems
    • Fundamental to neural network operations
    • Common across machine learning applications
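As a concrete illustration of the definition above, here is a brute-force MIPS scan in NumPy. This is a generic sketch with made-up data, not code or an algorithm from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
database = rng.standard_normal((1000, 64))  # 1000 vectors in 64 dimensions
query = rng.standard_normal(64)

scores = database @ query          # inner product of the query with every vector
best = int(np.argmax(scores))      # index achieving the maximum inner product
```

The point of the research is to avoid this linear scan: brute force costs O(n·d) per query, which is what fast MIPS algorithms aim to beat.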

    Why MIPS Matters

    Recommendation Systems

    User-item relevance scores are often computed as inner products of user and item embedding vectors

    Neural Networks

    Attention mechanisms rely on inner product computations

    Information Retrieval

    Some relevance models use inner product for scoring

    Matrix Factorization

    Finding top components in factorized representations
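The recommendation pattern described above can be sketched as follows: score every item against one user embedding, then keep the top K. The names and data here are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
item_embeddings = rng.standard_normal((5000, 32))  # 5000 item embeddings
user_embedding = rng.standard_normal(32)           # one user embedding

scores = item_embeddings @ user_embedding  # inner-product relevance scores
k = 10
# argpartition selects the K best in O(n) without fully sorting all scores;
# then sort just those K candidates in descending order.
top_k = np.argpartition(-scores, k)[:k]
top_k = top_k[np.argsort(-scores[top_k])]
```

At millions of items this exact top-K computation becomes the bottleneck, which is exactly the workload MIPS indexes accelerate.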

    High-Dimensional Challenge

    As dimensionality increases:

    • Naive exhaustive scans become computationally expensive
    • Traditional ANN techniques don't directly apply, since the inner product is not a metric (it violates the triangle inequality, and a vector's best match need not be itself)
    • Inner product structure differs from distance-based search
    • Specialized algorithms are needed

    Key Contributions

    The paper likely presents:

    • Faster algorithms specifically designed for MIPS
    • Theoretical analysis of complexity
    • Practical heuristics for high dimensions
    • Experimental validation on real-world datasets

    Relationship to ANN

    MIPS can be reduced to nearest neighbor search, but:

    • The standard reduction adds an extra dimension and can distort the data distribution
    • Algorithms designed directly for MIPS can be more efficient
    • Different index structures may be optimal for inner product search
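The reduction mentioned above can be sketched with the classic one-extra-dimension trick: pad each database vector so that all augmented vectors share the same norm, after which Euclidean nearest neighbor in the augmented space coincides with maximum inner product in the original space. This is a generic illustration, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 16))  # database vectors
q = rng.standard_normal(16)         # query vector

norms = np.linalg.norm(X, axis=1)
M = norms.max()
# Append sqrt(M^2 - ||x||^2) so every augmented vector has norm exactly M.
X_aug = np.hstack([X, np.sqrt(M**2 - norms**2)[:, None]])
q_aug = np.concatenate([q, [0.0]])  # query gets a zero in the new dimension

# ||q_aug - x_aug||^2 = ||q||^2 + M^2 - 2 q.x, so minimizing distance
# in the augmented space maximizes the inner product in the original one.
nn = int(np.argmin(np.linalg.norm(X_aug - q_aug, axis=1)))
mips = int(np.argmax(X @ q))
```

Any off-the-shelf NN index can then serve MIPS queries, at the cost of the extra dimension and a possibly less index-friendly data distribution.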

    Use Cases

    • Large-scale recommendation engines (millions of items)
    • Neural network inference optimization
    • Collaborative filtering systems
    • Embedding-based retrieval
    • Top-K selection in high-dimensional spaces

    Practical Impact

    Many real-world systems use MIPS:

    • Product recommendations in e-commerce
    • Content recommendations in streaming services
    • Ad targeting systems
    • Search ranking

    Faster MIPS algorithms directly improve user experience and reduce infrastructure costs.

    Availability

    ArXiv preprint arXiv:2212.07551 with algorithmic details and performance analysis.


    Information

    Website: arxiv.org
    Published: Mar 20, 2026

    Categories

    Research Papers & Surveys

    Tags

    #Mips #Algorithms #High Dimensional #Optimization

    Similar Products

    6 results
    Maximum Inner Product is Query-Scaled Nearest Neighbor

    A theoretical paper establishing the relationship between Maximum Inner Product Search and query-scaled nearest neighbor search. This connection enables applying NN techniques to MIPS problems with theoretical guarantees.

    Monte Carlo Tree Search for Vector Indexing

    Research on using Monte Carlo Tree Search algorithms for optimizing vector index construction and search strategies. Explores adaptive decision-making during graph building and query routing.

    OrchANN

    A unified I/O orchestration framework for skewed out-of-core vector search that addresses the challenge of billion-scale ANN search when the dataset exceeds available memory. OrchANN optimizes I/O operations for graph-based indexes stored on disk.

    Pyramid Product Quantization

    An advanced vector compression technique for approximate nearest neighbor search that improves upon traditional product quantization by using a hierarchical pyramid structure. Published in 2026, it achieves better compression ratios while maintaining search accuracy.

    Locality-Sensitive Hashing

    Locality-Sensitive Hashing (LSH) is an algorithmic technique for approximate nearest neighbor search in high-dimensional vector spaces, commonly used in vector databases to speed up similarity search while reducing memory footprint.

    Breaking the Storage-Compute Bottleneck in Billion-Scale ANNS

    A 2025 research paper presenting a GPU-driven asynchronous I/O framework for billion-scale approximate nearest neighbor search. The system addresses the fundamental bottleneck of data movement between storage and compute in large-scale vector search.
