It's becoming more common for users to quantize their vectors using scalar and binary quantization techniques before handing the vectors off to be indexed. Binary vectors in particular require a version of the Hamming distance that uses an XNOR in place of the multiplication.
The distance should support all of the input data types currently accepted by the other distances (fp32, fp16, and uint8), since the binary bits can be packed into any of those types and need to be unpacked and vectorized during the actual load. We will also need to support this for all of our index algorithms, but we should start with CAGRA, since that's the most popular and requested index.
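As a rough sketch of the distance itself (this is illustrative NumPy on the host, not the cuVS/CAGRA device implementation, and `hamming_packed` is a hypothetical name): for bit-packed binary vectors, XOR marks the differing bits and a popcount sums them; the XNOR formulation mentioned above is the complement, counting matching bits instead, so `matches = nbits - hamming`.

```python
import numpy as np

# Lookup table mapping each byte value to its popcount (number of set bits).
_POPCOUNT = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint8)

def hamming_packed(a: np.ndarray, b: np.ndarray) -> int:
    """Hamming distance between two binary vectors bit-packed into uint8.

    XOR yields a 1 wherever the bits differ; summing the popcounts of the
    resulting bytes gives the total number of differing bits. An XNOR-based
    similarity would instead count matches: 8 * len(a) - hamming.
    """
    return int(_POPCOUNT[np.bitwise_xor(a, b)].sum())

# Pack two 16-component binary vectors into uint8 bytes and compare.
x = np.packbits(np.array([1,0,1,1,0,0,1,0, 1,1,1,1,0,0,0,0], dtype=np.uint8))
y = np.packbits(np.array([1,0,0,1,0,1,1,0, 1,1,0,1,0,0,0,1], dtype=np.uint8))
print(hamming_packed(x, y))  # 4 bits differ
```

On the GPU the same idea would map to per-word `__popc(a ^ b)` (or `__popc(~(a ^ b))` for the XNOR match count) inside the vectorized load path, regardless of whether the bits arrive packed in fp32, fp16, or uint8 storage.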