Abstract
We propose a novel distance-based regularization method for deep metric learning called Multi-level Distance Regularization (MDR).
MDR explicitly disturbs the learning procedure by regularizing the pairwise distances between embedding vectors into multiple levels, each of which represents a degree of similarity between a pair.
In the training stage, the model is trained with both MDR and an existing deep metric learning loss simultaneously; the two losses interfere with each other's objectives, which makes the learning process more difficult.
Moreover, MDR prevents individual examples from being ignored or from exerting excessive influence during training.
Together, these effects allow the parameters of the embedding network to settle on a local optimum with better generalization.
Without bells and whistles, MDR with a simple Triplet loss achieves state-of-the-art performance on various benchmark datasets: CUB-200-2011, Cars-196, Stanford Online Products, and In-Shop Clothes Retrieval.
We perform extensive ablation studies on its behavior to show the effectiveness of MDR.
Existing approaches can easily adopt MDR to improve both their performance and their generalization ability.
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
8 articles.
1. "Deep Metric Learning with Chance Constraints." 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024-01-03.
2. "Leveraging Two-Scale Features to Enhance Fine-Grained Object Retrieval." Communications in Computer and Information Science, 2023-11-13.
3. "Generalized Sum Pooling for Metric Learning." 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 2023-10-01.
4. "Boundary-restricted metric learning." Machine Learning, 2023-09-20.
5. "Multi-level distance embedding learning for robust acoustic scene classification with unseen devices." Pattern Analysis and Applications, 2023-06-20.