Proxy Synthesis: Learning with Synthetic Classes for Deep Metric Learning

Authors: Geonmo Gu, Byungsoo Ko, Han-Gyu Kim

AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on four famous benchmarks in image retrieval tasks demonstrate that Proxy Synthesis significantly boosts the performance of proxy-based losses and achieves state-of-the-art performance.
Researcher Affiliation | Industry | Geonmo Gu (1), Byungsoo Ko (1), Han-Gyu Kim (2); (1) NAVER/LINE Vision, (2) NAVER Clova Speech; korgm403@gmail.com, kobiso62@gmail.com, hangyu.kim@navercorp.com
Pseudocode | No | The paper describes the method in prose but does not provide structured pseudocode or algorithm blocks in the main text.
Open Source Code | Yes | Our implementation is available at github.com/navervision/proxy-synthesis.
Open Datasets | Yes | We evaluate the proposed method with respect to four benchmarks in metric learning: CUB-200-2011 (CUB200) (Wah et al. 2011), CARS196 (Krause et al. 2013), Stanford Online Products (SOP) (Oh Song et al. 2016), and In-Shop Clothes (In-Shop) (Liu et al. 2016).
Dataset Splits | Yes | We include an evaluation procedure adapted from A Metric Learning Reality Check (Musgrave, Belongie, and Lim 2020), called MLRC evaluation, which comprises 4-fold cross-validation, ensemble evaluation, and fair metrics (P@1, RP, and MAP@R); a sketch of these metrics appears after the table.
Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU or CPU models) used for running the experiments.
Software Dependencies | No | The paper mentions using an Inception network and provides a GitHub link, but does not specify software dependencies with version numbers (e.g., PyTorch 1.9, Python 3.8).
Experiment Setup | Yes | Experiments are performed on an Inception network with batch normalization (Ioffe and Szegedy 2015) with a 512-dimensional embedding. For the hyper-parameters of Proxy Synthesis, α and µ are set to 0.4 and 1.0, respectively; a hedged sketch of the synthesis step follows the table.
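
Since the paper offers no pseudocode in its main text, the following is a minimal sketch of the synthesis step as the rows above describe it: pairs of embeddings and their class proxies are linearly interpolated with a coefficient drawn from Beta(α, α) (α = 0.4), each interpolated pair defining a new synthetic class, with µ = 1.0 setting how many synthetics are kept relative to the batch. The function name, pairing scheme, and tensor shapes here are illustrative assumptions, not the authors' reference code (which lives at github.com/navervision/proxy-synthesis).

```python
import numpy as np
import torch
import torch.nn.functional as F

def proxy_synthesis(embeddings, proxies, labels, ps_alpha=0.4, ps_mu=1.0):
    """Sketch: synthesize new classes by interpolating embedding/proxy pairs.

    embeddings: (B, D) L2-normalized embeddings for the current batch
    proxies:    (C, D) L2-normalized class proxies
    labels:     (B,)   integer class labels (long tensor) for the batch
    """
    # One mixing coefficient per batch, drawn from Beta(alpha, alpha)
    # as in mixup (alpha = 0.4 per the Experiment Setup row above).
    lam = np.random.beta(ps_alpha, ps_alpha)

    # Pair each sample with a shuffled partner from the same batch.
    perm = torch.randperm(embeddings.size(0))

    # Interpolate embeddings and the corresponding class proxies with the
    # same lambda, so each synthetic embedding gets a matching synthetic proxy.
    syn_embed = lam * embeddings + (1.0 - lam) * embeddings[perm]
    syn_proxy = lam * proxies[labels] + (1.0 - lam) * proxies[labels[perm]]

    # Re-normalize so synthetic points stay on the unit hypersphere.
    syn_embed = F.normalize(syn_embed, dim=1)
    syn_proxy = F.normalize(syn_proxy, dim=1)

    # ps_mu sets how many synthetics are kept relative to the batch size
    # (mu = 1.0 keeps one synthetic class per real sample).
    n_syn = int(ps_mu * embeddings.size(0))
    syn_labels = proxies.size(0) + torch.arange(n_syn)  # fresh class ids

    all_embed = torch.cat([embeddings, syn_embed[:n_syn]], dim=0)
    all_proxy = torch.cat([proxies, syn_proxy[:n_syn]], dim=0)
    all_labels = torch.cat([labels, syn_labels], dim=0)
    return all_embed, all_proxy, all_labels
```

The returned tensors would drop into a proxy-based loss (e.g., Norm-SoftMax or Proxy-NCA) unchanged; the synthetic classes exist only for the duration of the loss computation.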
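
The MLRC metrics named in the Dataset Splits row are easy to pin down. Below is a minimal sketch of P@1, R-Precision (RP), and MAP@R for a single query, assuming the gallery has already been ranked by similarity; `mlrc_metrics` is a hypothetical helper, not the reference implementation of Musgrave, Belongie, and Lim (2020).

```python
import numpy as np

def mlrc_metrics(ranked_labels, query_label, r):
    """P@1, R-Precision, and MAP@R for a single query.

    ranked_labels: gallery labels sorted by decreasing similarity to the
                   query (the query itself excluded)
    query_label:   the query's label
    r:             number of gallery items that share the query's label
    """
    hits = (np.asarray(ranked_labels[:r]) == query_label).astype(float)

    p_at_1 = hits[0]              # is the nearest neighbor correct?
    r_precision = hits.sum() / r  # fraction of the top-r that is correct
    # MAP@R: average precision@i over ranks i <= r, counting only hit ranks.
    precision_at_i = np.cumsum(hits) / (np.arange(r) + 1)
    map_at_r = float((precision_at_i * hits).sum() / r)
    return p_at_1, r_precision, map_at_r
```

Under the full MLRC protocol, these per-query values are averaged over all queries and then over the 4 cross-validation folds.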