Semi-Supervised Metric Learning: A Deep Resurrection

Authors: Ujjal Kr Dutta, Mehrtash Harandi, C Chandra Sekhar

AAAI 2021, pp. 7279-7287

Reproducibility assessment (each variable is listed with its result and the supporting LLM response):
Research Type: Experimental. LLM response: "In this section we evaluate our proposed method in terms of its effectiveness in clustering and retrieval tasks on a number of benchmark datasets."
Researcher Affiliation: Collaboration. LLM response: "Ujjal Kr Dutta (1,2), Mehrtash Harandi (3), C Chandra Sekhar (2); (1) Data Sciences, Myntra, India; (2) Dept. of Computer Science and Eng., Indian Institute of Technology Madras, India; (3) Dept. of Electrical and Computer Systems Eng., Monash University, Australia"
Pseudocode: Yes. LLM response: "Algorithm 1: Stochastic extension of the SSDML baselines"
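To make the reported stochastic extension concrete, the sketch below shows what a generic mini-batch (stochastic) training loop over triplets looks like. The function and callback names are hypothetical placeholders, since Algorithm 1 itself is not reproduced in this summary.

```python
import random

def stochastic_ssdml_epoch(triplets, update_metric, batch_size=100):
    """Run one epoch of a generic stochastic (mini-batch) SSDML update.

    `triplets` is a list of (anchor, positive, negative) items and
    `update_metric` is a callback applying one metric-learning step to a
    mini-batch. Both names are hypothetical; the paper's Algorithm 1 is
    not reproduced here.
    """
    random.shuffle(triplets)  # stochastic: visit triplets in random order
    for start in range(0, len(triplets), batch_size):
        update_metric(triplets[start:start + batch_size])
```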
Open Source Code: No. LLM response: The paper does not explicitly state that the source code for the described methodology is publicly available, nor does it provide a direct link to a code repository.
Open Datasets: Yes. LLM response: "Following recent literature, the benchmark datasets that have been used are as follows: MNIST (LeCun et al. 1998): It is a benchmark dataset... Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017): It is a similar dataset... CIFAR-10 (Krizhevsky, Hinton et al. 2009): This dataset consists... CUB-200 (Welinder et al. 2010): This dataset consists... Cars-196 (Krause et al. 2013): It consists of images..."
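The first three benchmarks can be fetched programmatically. The sketch below is only an assumption about how one might obtain the same data today; the paper's experiments used MatConvNet, not this torchvision pipeline.

```python
from torchvision import datasets, transforms

# Illustrative loaders for the smaller benchmarks named above; this Python
# sketch is an assumption, not the authors' (MatConvNet-based) setup.
to_tensor = transforms.ToTensor()
mnist = datasets.MNIST(root="data", train=True, download=True, transform=to_tensor)
fashion = datasets.FashionMNIST(root="data", train=True, download=True, transform=to_tensor)
cifar10 = datasets.CIFAR10(root="data", train=True, download=True, transform=to_tensor)
# CUB-200 and Cars-196 are distributed via their own project pages.
```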
Dataset Splits: Yes. LLM response: "In all experiments, we fix a validation dataset by sampling 15% examples from each class of the training data."
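A minimal sketch of that protocol, assuming a per-class 15% hold-out; the function name and signature are ours, not the paper's.

```python
import numpy as np

def per_class_validation_split(labels, val_frac=0.15, seed=0):
    """Hold out `val_frac` of the examples of every class for validation.

    Mirrors the reported protocol of sampling 15% of examples from each
    class of the training data; name and signature are hypothetical.
    """
    labels = np.asarray(labels)
    rng = np.random.default_rng(seed)
    val_idx = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)  # random per-class sample
        val_idx.extend(idx[:max(1, int(round(val_frac * len(idx))))])
    val_idx = np.sort(np.array(val_idx))
    train_idx = np.setdiff1d(np.arange(len(labels)), val_idx)
    return train_idx, val_idx
```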
Hardware Specification: No. LLM response: The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments; it only mentions software tools and network architectures.
Software Dependencies: No. LLM response: The paper mentions software tools such as MatConvNet and Manopt, with citations, but does not specify their exact version numbers; nor does it list specific versions for programming languages or libraries such as Python, PyTorch, or TensorFlow.
Experiment Setup: Yes. LLM response: "For our method, we set γ = 0.99 in the affinity propagation step, k = 10 in the kNN graph, α = 40 in (3), and an initial learning rate of 10^-4. ... For each random subset, we run our method for 10 epochs (with a mini-batch size of 100 triplets). In total, we run for a maximum of 50 epochs..."
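For convenience, the reported values can be gathered into a single configuration, as in the sketch below; the dictionary and its key names are ours, only the values come from the paper.

```python
# Hyperparameters as reported in the paper, collected into one hypothetical
# config dict; key names are ours, values are the paper's.
config = {
    "gamma_affinity": 0.99,   # γ in the affinity-propagation step
    "knn_k": 10,              # k in the kNN graph
    "alpha": 40,              # α in Eq. (3)
    "initial_lr": 1e-4,       # initial learning rate (10^-4)
    "epochs_per_subset": 10,  # epochs per random subset
    "batch_size": 100,        # triplets per mini-batch
    "max_epochs": 50,         # overall maximum number of epochs
}
```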