Distance Metric Learning with Joint Representation Diversification

Authors: Xu Chu, Yang Lin, Yasha Wang, Xiting Wang, Hailong Yu, Xin Gao, Qi Tong

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on three deep DML benchmark datasets demonstrate the effectiveness of the proposed approach.
Researcher Affiliation | Collaboration | 1 School of Electronics Engineering and Computer Science, Peking University, Beijing, China; 2 Key Laboratory of High Confidence Software Technologies, Ministry of Education, Beijing, China; 3 National Engineering Research Center of Software Engineering, Peking University, Beijing, China; 4 Microsoft Research Asia, Beijing, China; 5 School of Software and Microelectronics, Peking University, Beijing, China
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at github.com/YangLin122/JRD
Open Datasets | Yes | Datasets: We conduct our experiments on three benchmark datasets: CUB-200-2011 (CUB) (Wah et al., 2011), Cars196 (CARS) (Krause et al., 2013), and Stanford Online Products (SOP) (Oh Song et al., 2016).
Dataset Splits | Yes | We adopt the standard data split protocol. For the CUB dataset, which consists of 200 classes with 11,788 images, we use the first 100 classes with 5,864 images for training and the remaining 100 classes with 5,924 images for testing. The CARS dataset is composed of 16,185 car images belonging to 196 classes; the first 98 classes are used for training and the remaining classes for testing. The SOP dataset contains 120,053 product images from 22,634 classes; the first 11,318 classes are used for training and the remaining 11,316 classes for testing... The hyperparameters α, m and batch size M are selected by 10-fold cross-validation... (a hedged split sketch follows the table)
Hardware Specification | Yes | Our method is implemented in PyTorch on four NVIDIA RTX 8000 GPUs.
Software Dependencies | No | The paper mentions PyTorch but does not specify its version or the versions of other software dependencies.
Experiment Setup | Yes | The scale s in the cosine softmax is fixed at 20. The Adam optimizer (Kingma & Ba, 2014) is used for optimization. The number of epochs is set to 50 (80 for SOP). The initial learning rates for the model parameters and the softmax loss are 1e-4 and 1e-2 (1e-1 for SOP), respectively, and are divided by 10 every 20 (40 for SOP) epochs. The hyperparameters α, m, and batch size M are selected by 10-fold cross-validation: α = 1 for CUB and CARS, α = 0.4 for SOP; m = 0.1 for CUB and SOP, m = 0.05 for CARS; and batch sizes of (100, 50, 120) for (CUB, CARS, SOP), respectively. (a hedged schedule sketch follows the table)
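
The "Dataset Splits" row describes class-disjoint splits (first N classes for training, the rest for testing). The following is a minimal sketch of that split rule only; the `samples` list of (image_path, class_id) pairs and the function name are assumptions for illustration, not the authors' data loader.

```python
def split_by_class(samples, n_train_classes):
    """Split (image_path, class_id) pairs into class-disjoint train/test sets.

    Classes 0 .. n_train_classes-1 go to training; the rest go to testing,
    matching the protocol quoted above (CUB: 100, CARS: 98, SOP: 11,318).
    """
    train = [(path, cls) for path, cls in samples if cls < n_train_classes]
    test = [(path, cls) for path, cls in samples if cls >= n_train_classes]
    return train, test

# Example (hypothetical `cub_samples` list):
# train_set, test_set = split_by_class(cub_samples, n_train_classes=100)
```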
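
The "Experiment Setup" row can also be read as an optimizer configuration: Adam with separate learning rates for the model parameters (1e-4) and the softmax loss (1e-2 on CUB/CARS), both divided by 10 every 20 epochs over 50 epochs. Below is a hedged PyTorch sketch of that schedule; the placeholder modules stand in for the paper's embedding network and cosine-softmax head and are not the authors' code.

```python
import torch
import torch.nn as nn

# Placeholder modules illustrating the two parameter groups (assumed shapes).
backbone = nn.Linear(2048, 512)     # stand-in for the embedding network
softmax_head = nn.Linear(512, 100)  # stand-in for the cosine-softmax parameters

# Adam with the reported per-group initial learning rates (CUB/CARS values).
optimizer = torch.optim.Adam([
    {"params": backbone.parameters(), "lr": 1e-4},
    {"params": softmax_head.parameters(), "lr": 1e-2},
])
# Divide both learning rates by 10 every 20 epochs (step_size=40 for SOP).
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)

for epoch in range(50):  # 80 epochs for SOP
    # ... one training pass over mini-batches of size M would go here ...
    scheduler.step()
```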