Robust Similarity Learning with Difference Alignment Regularization

Authors: Shuo Chen, Gang Niu, Chen Gong, Okan Koc, Jian Yang, Masashi Sugiyama

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on multi-domain data demonstrate the superiority of DASL over existing approaches in both supervised metric learning and unsupervised contrastive learning tasks... In this section, we show experimental results on real-world datasets to validate the effectiveness of our DASL in both the supervised metric learning and unsupervised contrastive learning tasks. We first provide ablation studies and visualization results. Then, we compare our method with existing state-of-the-art methods.
Researcher Affiliation | Academia | Shuo Chen (1), Gang Niu (1), Chen Gong (3), Okan Koc (1), Jian Yang (3), Masashi Sugiyama (1,2); (1) RIKEN Center for Advanced Intelligence Project, Tokyo, Japan; (2) The University of Tokyo, Tokyo, Japan; (3) Nanjing University of Science and Technology, Nanjing, China
Pseudocode | Yes | Algorithm 1: Solving Eq. (8) via SGD.
Open Source Code | No | The paper does not provide concrete access to source code (a specific repository link, an explicit code-release statement, or code in the supplementary materials) for the described methodology.
Open Datasets | Yes | We conduct experiments on CAR (Krause et al., 2013) and CUB (Welinder et al., 2010) datasets and record the test accuracy of compared methods... We show the classification accuracy rates of all compared methods on CIFAR-10 (Krizhevsky et al., 2009) and STL-10 (Coates et al., 2011) datasets... We train our method on ImageNet-100 and ImageNet-1K (Russakovsky et al., 2015)... For the BookCorpus dataset (Kiros et al., 2015)... For the STS dataset (Agirre et al., 2016)...
Dataset Splits | Yes | We split the dataset into the training, test, and validation sets at the proportion of 8/1/1 and report the mean classification accuracy with standard deviation after 5 runs followed by a linear SVM classifier. The SVM is trained using cross-validation on training folds of data and the model for testing is selected by the best validation performance.
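The 8/1/1 split and 5-run protocol quoted above can be sketched as follows. This is a minimal illustration only: the function name and the assumption of a flat index split (rather than a per-class split) are ours, and the linear-SVM fitting step is left as a comment because the paper's exact cross-validation setup is not released.

```python
import random

def split_811(n_samples, seed=0):
    """Shuffle sample indices and split them 8/1/1 into
    train / validation / test index lists (illustrative helper,
    not the authors' code)."""
    rng = random.Random(seed)
    idx = list(range(n_samples))
    rng.shuffle(idx)
    n_train = int(0.8 * n_samples)
    n_val = int(0.1 * n_samples)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test

# Repeat over 5 seeds, matching the reported mean-over-5-runs protocol.
# For each run, a linear SVM would be fit on `train`, model-selected on
# `val`, and scored on `test` (SVM step not shown here).
for seed in range(5):
    train, val, test = split_811(1000, seed=seed)
    assert len(train) == 800 and len(val) == 100 and len(test) == 100
```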
Hardware Specification | Yes | Both the training and test processes are implemented on Pytorch (Paszke et al., 2019) with Tesla V100 GPUs, where the regularization parameter λ is set to 0.5. Specifically, we use two NVIDIA Tesla V100 GPUs to train our method based on SimCLR and SwAV with 100 epochs, respectively.
Software Dependencies | No | The paper mentions 'Pytorch (Paszke et al., 2019)' but does not specify a version number for PyTorch or any other software dependency, which reproducibility requires.
Experiment Setup | Yes | Both the training and test processes are implemented on Pytorch (Paszke et al., 2019) with Tesla V100 GPUs, where the regularization parameter λ is set to 0.5. The dimensionality h and the parameter γ in Eq. (1) are set to 512 and 0.2, respectively. We conduct experiments on CAR (Krause et al., 2013) and CUB (Welinder et al., 2010) datasets and record the test accuracy of compared methods (with 500 epochs, learning rate = 10^-3, and batch size = 512 (Zhou et al., 2021; 2022a)).
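The hyperparameter values reported in the quoted setup can be collected into a single configuration sketch. The numbers below come from the paper's text; the dictionary and its key names are illustrative, not taken from the authors' (unreleased) code.

```python
# Hyperparameters reported in the DASL experiment setup (ICLR 2024).
# Key names are illustrative; the paper does not release code.
DASL_CONFIG = {
    "lambda_reg": 0.5,   # regularization parameter λ
    "embed_dim": 512,    # dimensionality h in Eq. (1)
    "gamma": 0.2,        # parameter γ in Eq. (1)
    "epochs": 500,       # CAR / CUB metric-learning runs
    "lr": 1e-3,          # learning rate
    "batch_size": 512,
}

print(DASL_CONFIG["lambda_reg"], DASL_CONFIG["embed_dim"])
```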