Affiliation:
1. Nanyang Technological University, Singapore, Singapore
Abstract
Searching for approximate nearest neighbors (ANN) in high-dimensional Euclidean space is a pivotal problem. Recently, with the help of fast SIMD-based implementations, Product Quantization (PQ) and its variants can often estimate the distances between vectors efficiently and accurately, and they have achieved great success in in-memory ANN search. Despite their empirical success, we note that these methods lack a theoretical error bound and are observed to fail disastrously on some real-world datasets. Motivated by this, we propose a new randomized quantization method named RaBitQ, which quantizes D-dimensional vectors into D-bit strings. RaBitQ guarantees a sharp theoretical error bound while providing good empirical accuracy. In addition, we introduce efficient implementations of RaBitQ, which support estimating the distances with bitwise or SIMD-based operations. Extensive experiments on real-world datasets confirm that (1) our method outperforms PQ and its variants in the accuracy-efficiency trade-off by a clear margin and (2) its empirical performance is well-aligned with our theoretical analysis.
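The abstract's core idea, quantizing a D-dimensional vector into a D-bit string and estimating geometric quantities from bitwise operations, can be illustrated with a generic sign-quantization sketch. This is not RaBitQ's actual (unbiased) estimator from the paper; it is the classic SimHash-style relation between Hamming distance and angle, shown only to make the "one bit per dimension, bitwise distance estimation" idea concrete. The random rotation via QR decomposition and the `estimate_cosine` helper are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64  # illustrative dimensionality

# A random orthogonal matrix (QR of a Gaussian matrix): a common way to
# build the random rotation used by sign-based quantizers.
P, _ = np.linalg.qr(rng.standard_normal((D, D)))

def quantize(x):
    """Rotate a vector and keep only the sign bit of each coordinate (D bits)."""
    return np.signbit(P @ x)  # boolean array of length D

def hamming(a_bits, b_bits):
    """Hamming distance between two bit strings (a popcount in a real system)."""
    return np.count_nonzero(a_bits != b_bits)

def estimate_cosine(a_bits, b_bits):
    # For sign quantization of randomly rotated vectors, the expected
    # Hamming distance encodes the angle: theta ~= pi * hamming / D
    # (the SimHash/LSH relation -- NOT RaBitQ's estimator).
    theta = np.pi * hamming(a_bits, b_bits) / D
    return np.cos(theta)

# Compare the bitwise estimate against the exact cosine of two unit vectors.
x = rng.standard_normal(D); x /= np.linalg.norm(x)
y = rng.standard_normal(D); y /= np.linalg.norm(y)
print("estimated:", estimate_cosine(quantize(x), quantize(y)), "exact:", x @ y)
```

In a production implementation the boolean arrays would be packed into machine words so that `hamming` becomes a handful of XOR and popcount instructions, which is the kind of bitwise evaluation the abstract refers to; RaBitQ itself uses a more refined estimator with a proven error bound.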
Funder
Ministry of Education, Singapore
Publisher
Association for Computing Machinery (ACM)