Neural Similarity Learning

Authors: Weiyang Liu, Zhen Liu, James M. Rehg, Le Song

NeurIPS 2019

Reproducibility assessment: for each variable, the result is followed by the LLM's supporting response.
Research Type: Experimental. LLM response: Most importantly, NSL shows promising performance in visual recognition and few-shot learning, validating the superiority of NSL over the inner product-based convolution counterparts. The entire Section 7, titled 'Applications', details experimental settings, results on CIFAR-10/100, ImageNet-2012, and Mini-ImageNet, and includes multiple tables of performance metrics.
Researcher Affiliation: Collaboration. LLM response: (1) Georgia Institute of Technology, (2) Mila, Université de Montréal, (3) Ant Financial.
Pseudocode: No. LLM response: The paper describes algorithms verbally and with mathematical formulations but does not include any structured pseudocode or algorithm blocks.
Open Source Code: No. LLM response: The paper does not provide any specific links to open-source code or explicit statements about its availability.
Open Datasets: Yes. LLM response: For CIFAR-10 and CIFAR-100, we follow the same augmentation settings from [21]. For the ImageNet-2012 dataset, we mostly follow the settings in [30]. For the experiment on Mini-ImageNet, the experimental protocol is the same as [46, 14].
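The excerpt does not spell out what the augmentation settings of [21] are. For orientation only, the sketch below shows a common CIFAR-10/100 recipe (padding-4 random crop plus horizontal flip) written with torchvision; the specific transforms and normalization statistics are assumptions, not details confirmed by the paper.

# Hypothetical CIFAR-10/100 augmentation pipeline; the specific choices below
# are assumed, since the paper only says it follows the settings of [21].
import torchvision.transforms as T

train_transform = T.Compose([
    T.RandomCrop(32, padding=4),       # zero-pad to 40x40, randomly crop back to 32x32
    T.RandomHorizontalFlip(),          # flip left-right with probability 0.5
    T.ToTensor(),
    # Approximate per-channel CIFAR-10 statistics; CIFAR-100 uses its own mean/std.
    T.Normalize(mean=(0.4914, 0.4822, 0.4465), std=(0.2470, 0.2435, 0.2616)),
])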
Dataset Splits: Yes. LLM response: For CIFAR-10 and CIFAR-100, we start momentum SGD with the learning rate 0.1. The learning rate is divided by 10 at 34K and 54K iterations, and the training stops at 64K. For ImageNet, the learning rate starts with 0.1 and is divided by 10 at 200K, 375K, and 550K iterations (finished at 600K). Table 5: Validation error (%) on ImageNet-2012.
Hardware Specification: No. LLM response: The paper mentions an 'Nvidia GPU Grant' in the acknowledgements but does not specify any particular GPU model, CPU, memory, or other hardware components used for the experiments.
Software Dependencies: No. LLM response: The paper states 'Batch normalization, ReLU, mini-batch 128, and SGD with momentum 0.9 are used as default in all methods' but does not provide specific version numbers for any software, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow, CUDA).
Experiment Setup: Yes. LLM response: Batch normalization, ReLU, mini-batch 128, and SGD with momentum 0.9 are used as default in all methods. For CIFAR-10 and CIFAR-100, we start momentum SGD with the learning rate 0.1. The learning rate is divided by 10 at 34K and 54K iterations, and the training stops at 64K. For ImageNet, the learning rate starts with 0.1 and is divided by 10 at 200K, 375K, and 550K iterations (finished at 600K).
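To make the quoted CIFAR schedule concrete, the sketch below expresses it with PyTorch's SGD and MultiStepLR; the framework, the placeholder network, and the random stand-in data are assumptions (the paper names none of them), while the optimizer, batch size, and learning-rate milestones come from the excerpt above.

# Minimal sketch of the reported CIFAR training schedule, assuming PyTorch.
# Only the optimizer settings, batch size, and milestones are taken from the
# paper's description; the tiny network and random data are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(                       # stand-in for the paper's NSL-based CNNs
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
criterion = nn.CrossEntropyLoss()

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
# Divide the learning rate by 10 at 34K and 54K iterations; stop at 64K.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[34_000, 54_000], gamma=0.1)

for iteration in range(64_000):
    images = torch.randn(128, 3, 32, 32)     # random stand-in for a mini-batch of 128
    labels = torch.randint(0, 10, (128,))
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    scheduler.step()                         # the schedule is stepped per iteration, not per epoch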