Learning Deep Embeddings with Histogram Loss

Authors: Evgeniya Ustinova, Victor Lempitsky

NeurIPS 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section we present the results of embedding learning. We compare our loss to state-of-the-art pairwise and triplet losses, which have been reported in recent works to give state-of-the-art performance on these datasets.
Researcher Affiliation | Academia | Evgeniya Ustinova and Victor Lempitsky, Skolkovo Institute of Science and Technology (Skoltech), Moscow, Russia
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code for Caffe [6] is available at: https://github.com/madkn/HistogramLoss.
Open Datasets | Yes | We have evaluated the above-mentioned loss functions on the four datasets: CUB200-2011 [26], CUHK03 [11], Market-1501 [30] and Online Products [21].
Dataset Splits | Yes | According to the CUHK03 evaluation protocol, 1,360 identities are split into 1,160 identities for training, 100 for validation and 100 for testing.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions using Caffe and ADAM for optimization but does not provide specific version numbers for these software components.
Experiment Setup | Yes | We set the embedding size to 512 for all the experiments with this architecture. For comparison with other methods the batch size was set to 128. For all losses the learning rate is set to 1e-4 for all the experiments except those on the CUB-200-2011 dataset, for which we have found the learning rate of 1e-5 more effective.
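
To make the Dataset Splits row above concrete, the following minimal Python sketch partitions 1,360 CUHK03 identity IDs into 1,160 training, 100 validation and 100 test identities. Only the split sizes come from the paper; the function name, the shuffling step and the fixed seed are assumptions made for illustration, not the authors' code.

def split_cuhk03_identities(identity_ids, seed=0):
    """Partition CUHK03 identity IDs into train/val/test sets
    (1,160 / 100 / 100) following the evaluation protocol cited above.
    The shuffle and seed are assumed for the sake of a runnable example."""
    import random
    ids = list(identity_ids)
    assert len(ids) == 1360, "CUHK03 protocol assumes 1,360 identities"
    random.Random(seed).shuffle(ids)
    return {
        "train": ids[:1160],      # 1,160 identities for training
        "val": ids[1160:1260],    # 100 identities for validation
        "test": ids[1260:],       # 100 identities for testing
    }

# Example usage with dummy identity IDs 0..1359
splits = split_cuhk03_identities(range(1360))
print({name: len(members) for name, members in splits.items()})
# {'train': 1160, 'val': 100, 'test': 100}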
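The Experiment Setup row can likewise be summarised as a small hyperparameter configuration. The sketch below collects the reported values in plain Python; the helper name and dictionary layout are assumed conveniences, and the original experiments were run in Caffe with the ADAM optimizer rather than through this snippet.

def training_config(dataset):
    """Hyperparameters reported in the paper's experiment setup.
    Only the values (embedding size, batch size, optimizer, learning
    rates) come from the paper; the structure here is an assumption."""
    return {
        "embedding_size": 512,   # embedding dimensionality for all experiments
        "batch_size": 128,       # batch size used for comparison with other methods
        "optimizer": "ADAM",     # optimizer named in the paper (no version given)
        # 1e-5 was found more effective on CUB-200-2011; 1e-4 elsewhere
        "learning_rate": 1e-5 if dataset == "CUB-200-2011" else 1e-4,
    }

print(training_config("CUHK03"))        # learning_rate = 1e-4
print(training_config("CUB-200-2011"))  # learning_rate = 1e-5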