Deep Metric Learning with Graph Consistency

Authors: Binghui Chen, Pengyu Li, Zhaoyi Yan, Biao Wang, Lei Zhang

AAAI 2021

Reproducibility Variable Result LLM Response
Research Type Experimental The paper experimentally demonstrates the effectiveness of its graph regularization idea, achieving competitive results on the popular CUB, CARS, Stanford Online Products, and In-Shop datasets.
Researcher Affiliation Collaboration (1) Artificial Intelligence Center, DAMO Academy, Alibaba Group; (2) Harbin Institute of Technology; (3) The Hong Kong Polytechnic University
Pseudocode No The paper does not contain structured pseudocode or algorithm blocks. Procedures are described in narrative text.
Open Source Code No The paper does not provide concrete access to source code (no specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described.
Open Datasets Yes 1. CARS contains 16,185 car images from 196 classes... 2. CUB includes 11,788 bird images from 200 classes... 3. Stanford Online Products has 22,634 classes... 4. In-Shop contains 7,982 classes... These datasets are widely recognized benchmarks in Deep Metric Learning, indicating public availability.
Dataset Splits Yes CARS: We split the first 98 classes for training (8,054 images) and the remaining 98 classes for testing (8,131 images). CUB: We use the first 100 classes for training (5,864 images) and the remaining 100 classes for testing (5,924 images). Stanford Online Products: 11,318 classes for training (59,551 images) and the other 11,316 classes for testing (60,502 images). In-Shop: 3,997 classes for training (25,882 images) and the remaining 3,985 classes for testing (28,760 images). The test set is partitioned into a query set of 3,985 classes (14,218 images) and a retrieval database set of 3,985 classes (12,612 images). The paper provides detailed train/test splits for all datasets, sufficient for reproducibility.
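The splits above all follow the standard "first half of the classes for training, the rest for testing" convention used in Deep Metric Learning benchmarks, so train and test classes are disjoint (a zero-shot retrieval setting). A minimal Python sketch of this convention, assuming samples are (image, class_id) pairs; the helper name `split_by_class` is illustrative, not from the paper:

```python
def split_by_class(samples, num_train_classes):
    """Partition (image, class_id) pairs into train/test by class index.

    Classes [0, num_train_classes) go to training; the rest go to testing,
    so the two sets share no classes (zero-shot retrieval evaluation).
    """
    train = [(img, c) for img, c in samples if c < num_train_classes]
    test = [(img, c) for img, c in samples if c >= num_train_classes]
    return train, test

# Toy example mimicking the CARS protocol: 196 classes, first 98 for training.
toy = [(f"img_{c}_{i}.jpg", c) for c in range(196) for i in range(2)]
train, test = split_by_class(toy, num_train_classes=98)
```

On the real datasets the class counts would be those reported above (e.g. 98/98 for CARS, 100/100 for CUB).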
Hardware Specification No The paper mentions 'pretrained Inception V1' and 'pretrained Inception BN model' as backbone CNNs, but does not specify any hardware details like GPU/CPU models, processors, or memory.
Software Dependencies No Our work is implemented in Caffe. The paper mentions Caffe but does not provide a specific version number, nor does it list other software dependencies with version numbers.
Experiment Setup Yes For training, the optimizer is Adam (Kingma and Ba 2014) with learning rate 1e-5 and weight decay 2e-4. The training iterations are 5k (CUB), 10k (CARS), and 20k (Stanford Online Products and In-Shop), respectively. The new fc-layer is optimized with 10 times the learning rate for fast convergence. Moreover, for fair comparison, we use a mini-batch of size n = 130 throughout our experiments, composed of m = 13 randomly selected classes with 10 instances per class.
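The balanced mini-batch described above (m = 13 classes times k = 10 instances, so n = 130) can be sketched as a class-balanced sampler. The paper's implementation is in Caffe; this Python sketch only illustrates the sampling scheme, and the function name `sample_batch` is an assumption, not from the paper:

```python
import random
from collections import defaultdict

def sample_batch(labels, m=13, k=10, rng=random):
    """Draw one balanced mini-batch: m random classes, k instances each.

    `labels` is a list of class ids, one per training image; the return
    value is a list of m * k indices into that list (n = 130 by default).
    """
    by_class = defaultdict(list)
    for idx, c in enumerate(labels):
        by_class[c].append(idx)
    # Only classes with at least k images can contribute a full group.
    eligible = [c for c, idxs in by_class.items() if len(idxs) >= k]
    classes = rng.sample(eligible, m)
    batch = []
    for c in classes:
        batch.extend(rng.sample(by_class[c], k))
    return batch

# Toy label set: 50 classes with 12 images each.
labels = [c for c in range(50) for _ in range(12)]
batch = sample_batch(labels)
```

This sampling guarantees every batch contains exactly 10 positives per class, which is what pair- and graph-based metric-learning losses rely on.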