Efficient end-to-end learning for quantizable representations

Authors: Yeonwoo Jeong, Hyun Oh Song

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our results on Cifar-100 and ImageNet datasets show the state of the art search accuracy in precision@k and NMI metrics while providing up to 98× and 478× search speedup respectively over exhaustive linear search.
Researcher Affiliation | Academia | Department of Computer Science and Engineering, Seoul National University, Seoul, Korea.
Pseudocode | Yes | Algorithm 1: Learning algorithm
Open Source Code | Yes | The source code is available at https://github.com/maestrojeong/Deep-Hash-Table-ICML18.
Open Datasets | Yes | We report our results on Cifar-100 (Krizhevsky et al., 2009) and ImageNet (Russakovsky et al., 2015) datasets
Dataset Splits | Yes | Cifar-100 (Krizhevsky et al., 2009) dataset has 100 classes. Each class has 500 images for train and 100 images for test. ... ImageNet ILSVRC-2012 (Russakovsky et al., 2015) dataset has 1,000 classes and comes with train (1,281,167 images) and val set (50,000 images). We use the first nine splits of train set to train our model, the last split of train set for validation, and use validation dataset to test the query performance. (See the split sketch below the table.)
Hardware Specification | Yes | Each data point is averaged over 20 runs on machines with Intel Xeon E5-2650 CPU.
Software Dependencies | No | The paper mentions "Tensorflow (Abadi et al., 2015)" and "OR-Tools (Google optimization tools for combinatorial optimization problems) (OR-tools, 2018)" but does not provide specific version numbers for these software components.
Experiment Setup | Yes | The batch size is set to 128. The metric learning base model is trained for 175k iterations, and the learning rate decays to 0.1 of the previous learning rate after 100k iterations. We finetune the base model for 70k iterations and decay the learning rate to 0.1 of the previous learning rate after 40k iterations. (See the schedule sketch below the table.)
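
The Dataset Splits row describes how the ImageNet train list is cut into ten splits, with the first nine used for training, the last for validation, and the official val set used as query/test data. The sketch below is a minimal illustration of that partitioning scheme only; the file names, the (path, label) list format, and the round-robin splitting rule are assumptions for illustration and are not taken from the released code.

```python
# Hedged sketch of the ImageNet partitioning described in the Dataset Splits row.
# Assumption: the train list is divided into ten disjoint splits (here round-robin);
# the paper does not specify the exact splitting rule.

def ten_way_splits(train_items, num_splits=10):
    """Partition the train items into `num_splits` disjoint splits (round-robin)."""
    return [train_items[i::num_splits] for i in range(num_splits)]

def build_imagenet_partitions(train_items, official_val_items):
    splits = ten_way_splits(train_items)
    train_part = [x for split in splits[:9] for x in split]  # first nine splits: training
    val_part = splits[9]                                      # last split of train: validation
    query_part = official_val_items                           # official val set: query/test
    return train_part, val_part, query_part

if __name__ == "__main__":
    # Toy stand-ins; the real lists would hold 1,281,167 train and 50,000 val (path, label) pairs.
    train_items = [("train_%d.jpg" % i, i % 1000) for i in range(100)]
    val_items = [("val_%d.jpg" % i, i % 1000) for i in range(20)]
    tr, va, qu = build_imagenet_partitions(train_items, val_items)
    print(len(tr), len(va), len(qu))  # 90 10 20
```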
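The Experiment Setup row specifies a piecewise-constant learning rate schedule: 175k base-training iterations with a 0.1x decay after 100k, then 70k finetuning iterations with a 0.1x decay after 40k, at batch size 128. The sketch below shows only that schedule shape; the initial learning rates are placeholders, since the row does not state them.

```python
# Hedged sketch of the schedule in the Experiment Setup row.
# Assumption: base_lr values are placeholders, not values reported in the paper.

BATCH_SIZE = 128

def step_decay_lr(step, base_lr, decay_step, factor=0.1):
    """Piecewise-constant schedule: multiply base_lr by `factor` once `decay_step` is reached."""
    return base_lr * factor if step >= decay_step else base_lr

def base_training_lr(step, base_lr=1e-4):   # 175k iterations total; decay after 100k
    return step_decay_lr(step, base_lr, decay_step=100_000)

def finetuning_lr(step, base_lr=1e-5):      # 70k iterations total; decay after 40k
    return step_decay_lr(step, base_lr, decay_step=40_000)

if __name__ == "__main__":
    print(base_training_lr(50_000), base_training_lr(150_000))  # base rate, then 0.1x
    print(finetuning_lr(10_000), finetuning_lr(60_000))         # base rate, then 0.1x
```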