Symmetrical Synthesis for Deep Metric Learning

Authors: Geonmo Gu, Byungsoo Ko (pp. 10853-10860)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we report experimental results from the proposed symmetrical synthesis on both image clustering and retrieval tasks. To evaluate quantitative performance, we use the standard F1 and NMI metrics (Manning, Raghavan, and Schütze 2010) for the image clustering task, and the Recall@K score for the image retrieval task.
Researcher Affiliation | Industry | Clova Vision, NAVER Corp. {korgm403, kobiso62}@gmail.com
Pseudocode | No | The paper describes the proposed method in text and using mathematical formulations and figures, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | We evaluate our proposed method on three widely used benchmarks by following the conventional protocol of train and test splits used by (Zheng et al. 2019; Oh Song et al. 2016). (1) CUB-200-2011 (CUB200) (Wah et al. 2011)... (2) CARS196 (Krause et al. 2013)... (3) Stanford Online Products (SOP) (Oh Song et al. 2016)...
Dataset Splits | No | The paper specifies train and test splits for each dataset, for example, for CUB-200-2011: 'first 5,864 images of 100 species are used for training and the remaining 5,924 images of 100 species are used for testing.' However, it does not mention an explicit validation split.
Hardware Specification | Yes | Throughout the experiments, the TensorFlow (Abadi et al. 2016) framework is used on a Tesla P40 GPU with 24GB memory.
Software Dependencies | No | The paper mentions that the 'TensorFlow (Abadi et al. 2016) framework is used' but does not provide a specific version number for TensorFlow or any other software dependencies.
Experiment Setup | Yes | All images are normalized to 256x256, horizontally flipped and randomly cropped to 227x227. The embedding size is set to 512-dimensional for all feature vectors. We set the learning rate to 10^-4 with the Adam optimizer (Kingma and Ba 2014). A batch size of 128 is used for every dataset.
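For reference, the Recall@K metric quoted in the Research Type row can be sketched as follows. This is a minimal illustration, not the authors' evaluation code: it assumes Euclidean distance over L2-comparable embeddings and counts a query as a hit if any of its K nearest neighbors (excluding itself) shares the query's class.

```python
import numpy as np

def recall_at_k(embeddings, labels, k):
    """Recall@K: fraction of queries whose k nearest neighbors
    (self excluded) contain at least one same-class item."""
    emb = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)
    # Pairwise squared Euclidean distances.
    d2 = ((emb[:, None, :] - emb[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)  # never match a query to itself
    hits = 0
    for i in range(len(emb)):
        nearest = np.argsort(d2[i])[:k]
        hits += int(np.any(labels[nearest] == labels[i]))
    return hits / len(emb)
```

With well-separated same-class pairs, Recall@1 is 1.0; interleaving the labels drives it to 0.0, which makes the metric easy to sanity-check on toy data.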
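The preprocessing pipeline quoted in the Experiment Setup row (256x256 input, random horizontal flip, random 227x227 crop) can be sketched with NumPy. The `TRAIN_CONFIG` dict and the `augment` helper are our own illustrative names, not code from the paper:

```python
import numpy as np

# Hyperparameters as reported in the paper; the dict itself is hypothetical.
TRAIN_CONFIG = {
    "input_size": 256,      # images normalized to 256x256
    "crop_size": 227,       # randomly cropped to 227x227
    "embedding_dim": 512,   # embedding size for all feature vectors
    "learning_rate": 1e-4,  # Adam optimizer
    "batch_size": 128,
}

def augment(image, crop=TRAIN_CONFIG["crop_size"], rng=None):
    """Random horizontal flip followed by a random square crop."""
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    if rng.random() < 0.5:
        image = image[:, ::-1]  # horizontal flip
    top = rng.integers(0, h - crop + 1)
    left = rng.integers(0, w - crop + 1)
    return image[top:top + crop, left:left + crop]
```

Applied to a 256x256x3 array, `augment` returns a 227x227x3 crop, matching the input shape the quoted setup feeds the network.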