



Supervised Contrastive Objectives are a training technique in CSRv2 that improves the representational quality of ultra-sparse embeddings by using labeled training data to guide the learning process.
Instead of relying solely on unsupervised contrastive learning, supervised contrastive objectives use class labels to pull embeddings of same-class examples together and push embeddings of different-class examples apart.
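The label-driven pull/push idea can be sketched as a generic supervised contrastive (SupCon-style) loss. This is an illustrative NumPy sketch of the standard formulation, not CSRv2's actual implementation; the function name and temperature default are assumptions.

```python
import numpy as np

def supcon_loss(embeddings, labels, temperature=0.1):
    """SupCon-style loss: for each anchor, treat same-label samples as
    positives and average their log-softmax similarity scores."""
    # L2-normalize so the dot product is cosine similarity
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -1e9)                 # exclude self-pairs
    # row-wise log-softmax, numerically stabilized
    m = sim.max(axis=1, keepdims=True)
    log_prob = sim - (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True)))
    # positives share a label with the anchor, excluding the anchor itself
    pos_mask = (labels[:, None] == labels[None, :]).astype(float)
    np.fill_diagonal(pos_mask, 0.0)
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0                      # anchors with >= 1 positive
    loss = -(pos_mask * log_prob).sum(axis=1)[valid] / pos_counts[valid]
    return loss.mean()
```

With correct labels, same-class embeddings score high against each other and the loss is small; with mismatched labels the same geometry yields a large loss, which is the gradient signal that pulls same-class points together.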
When combined with progressive k-annealing, supervised contrastive objectives enable CSRv2 to achieve 14% accuracy gains at k=2 compared to prior methods.
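CSRv2's exact annealing schedule is not specified here, but progressive k-annealing can be sketched as a schedule that shrinks a top-k sparsity budget over training, so the model adapts gradually to the final ultra-sparse regime (k=2). The `k_start=64` default and the linear decay below are illustrative assumptions.

```python
import numpy as np

def k_schedule(step, total_steps, k_start=64, k_end=2):
    # Linearly anneal the active-dimension budget from k_start down to k_end.
    frac = min(step / total_steps, 1.0)
    return max(round(k_start + frac * (k_end - k_start)), k_end)

def topk_sparsify(x, k):
    # Keep only the k largest-magnitude entries of x; zero the rest.
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out
```

At each training step the embedding would be sparsified with `topk_sparsify(x, k_schedule(step, total_steps))`, starting dense enough to train stably and ending at the target k=2.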
This technique is part of CSRv2's breakthrough in making ultra-sparse embeddings practical, achieving up to 300x improvements in compute and memory efficiency.
Research technique, open-source.