DarkRank: Accelerating Deep Metric Learning via Cross Sample Similarities Transfer

Authors: Yuntao Chen, Naiyan Wang, Zhaoxiang Zhang

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We test our proposed DarkRank method on various metric learning tasks including pedestrian re-identification, image retrieval and image clustering. The results are quite encouraging. Our method can improve over the baseline method by a large margin.
Researcher Affiliation | Collaboration | Yuntao Chen (1,5), Naiyan Wang (2), Zhaoxiang Zhang (1,3,4,5); 1 Research Center for Brain-inspired Intelligence, CASIA; 2 TuSimple; 3 National Laboratory of Pattern Recognition, CASIA; 4 Center for Excellence in Brain Science and Intelligence Technology, CAS; 5 University of Chinese Academy of Sciences. {chenyuntao2016, zhaoxiang.zhang}@ia.ac.cn, winsty@gmail.com
Pseudocode | No | The paper describes mathematical formulations and processes but does not include structured pseudocode or an algorithm block.
Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository for the methodology described.
Open Datasets | Yes | CUHK03 "CUHK03 (?) is a large-scale dataset for person re-identification." [...] Market1501 "Market1501 (?) contains 32668 images of 1501 identities." [...] CUB-200-2011 "The Caltech-UCSD Birds-200-2011 (CUB-200-2011) dataset contains 11788 images of 200 bird species." [...] "Both networks are pre-trained on the ImageNet LSVRC image classification dataset (?)."
Dataset Splits | Yes | We test different values of β on the CUHK03 validation set, and find 3.0 is where the model performance peaks. Figure ?? shows the details.
Hardware Specification | Yes | The speed is tested on a Pascal Titan X with MXNet (?).
Software Dependencies | No | We implement our method in MXNet (?). (No version number is specified for MXNet.)
Experiment Setup | Yes | We set the margin in large-margin softmax loss to 3, and set the margin to 0.9 in both triplet and verification loss. We set the loss weights of verification, triplet and large-margin softmax loss to 5, 0.1, 1, respectively. We choose the stochastic gradient descent method with momentum for optimization. We set the learning rate to 0.01 for the Inception-BN and 5×10⁻⁴ for the NIN-BN, and set the weight decay to 10⁻⁴. We train the model for 100 epochs, and shrink the learning rate by a factor of 0.1 at 50 and 75 epochs. The batch size is set to 8.
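The reported setup can be restated as a minimal sketch in plain Python: the step-decay learning-rate schedule (shrink by 0.1 at epochs 50 and 75) and the weighted loss combination (verification 5, triplet 0.1, large-margin softmax 1). The function names and structure here are illustrative assumptions, not the authors' code, which was not released.

```python
# Sketch of the reported training schedule, assuming standard step decay.
# Names (learning_rate, total_loss) are hypothetical; the paper releases no code.

def learning_rate(epoch, base_lr, milestones=(50, 75), factor=0.1):
    """Learning rate after step decay at each milestone epoch passed."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= factor
    return lr

# Reported hyperparameters
INCEPTION_BN_LR = 0.01   # base learning rate for Inception-BN
NIN_BN_LR = 5e-4         # base learning rate for NIN-BN
WEIGHT_DECAY = 1e-4
BATCH_SIZE = 8
EPOCHS = 100

def total_loss(verification, triplet, lm_softmax):
    """Weighted sum of the three losses with the reported weights 5 / 0.1 / 1."""
    return 5.0 * verification + 0.1 * triplet + 1.0 * lm_softmax
```

For example, the Inception-BN learning rate would be 0.01 for epochs 0-49, 0.001 for epochs 50-74, and 0.0001 thereafter.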