Data-Adaptive Metric Learning with Scale Alignment
Authors: Shuo Chen, Chen Gong, Jian Yang, Ying Tai, Le Hui, Jun Li
AAAI 2019, pp. 3347-3354
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Intensive experimental results on various applications including retrieval, classification, and verification clearly demonstrate the superiority of our algorithm to other state-of-the-art metric learning methodologies. |
| Researcher Affiliation | Collaboration | PCA Lab, Key Lab of Intelligent Perception and System for High-Dimensional Information of Ministry of Education; Jiangsu Key Lab of Image and Video Understanding for Social Security; School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China; Youtu Lab, Tencent |
| Pseudocode | Yes | Algorithm 1 Solving Eq. (6) via ISTA. Algorithm 2 Distance Computation for New Test Data Pair. (A generic ISTA sketch is given after the table.) |
| Open Source Code | No | No explicit statement providing access to source code (e.g., a repository link, or a statement that code is released) was found. |
| Open Datasets | Yes | In our experiments, we use the cropped PubFig face image dataset (Nair and Hinton 2010) and the Outdoor Scene Recognition (OSR) dataset (Parikh and Grauman 2011)... The datasets are from the well-known UCI repository (Asuncion and Newman 2007), which include MNIST, Autompg, Sonar, Australia, Balance, Isolet, and Letters. We use two face datasets and one image matching dataset to evaluate the capabilities of all compared methods on image verification. For the PubFig face dataset (as described before)... Similar experiments are performed on the LFW face dataset (Huo, Nie, and Huang 2016)... The image matching dataset MVS (Brown, Hua, and Winder 2011)... |
| Dataset Splits | Yes | In the first experiment, we use the cropped PubFig face image dataset... 30 images per person are randomly selected each time as the training data. For the PubFig face dataset... the first 80% of the data pairs are selected for training and the rest are used for testing. In each trial, 80% of examples are randomly selected as the training examples, and the rest are used for testing. (A minimal split sketch is given after the table.) |
| Hardware Specification | No | No specific hardware details (e.g., CPU/GPU models, memory) used for running experiments were provided in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., 'Python 3.8', 'PyTorch 1.9') were provided; the paper only implies the use of general metric-learning and deep-learning tooling. |
| Experiment Setup | Yes | For the tuning parameters, c is fixed to 10 while α, β, and λ are all tuned by searching the grid {0, 0.2, 0.4, ..., 2} to get the best performance. We follow ITML and use the squared hinge loss as ℓ(D_P(x_i, x_i'), y_i) in Eq. (6) for our objective function. The Gaussian kernel function (Bishop 2006) is employed for implementing KDAML. We follow the existing work (Ye et al. 2017) and adopt the k-NN classifier (k = 5) based on the learned metrics to investigate the classification error rates of various methods. (A grid-search and k-NN evaluation sketch is given after the table.) |
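
The pseudocode row references ISTA (the Iterative Shrinkage-Thresholding Algorithm) as the solver for Eq. (6). The paper's exact objective is not reproduced here, so the following is a minimal generic ISTA sketch for an ℓ1-regularized least-squares problem; the function names, the objective, and the step-size rule are illustrative assumptions, but the alternation of a gradient step on the smooth term with a soft-thresholding (proximal) step is the structure shared with Algorithm 1.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1: shrink each entry toward zero by tau."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, n_iter=200):
    """Minimize 0.5 * ||A w - b||^2 + lam * ||w||_1 via ISTA.

    A generic stand-in for the paper's Algorithm 1; Eq. (6) uses a different
    objective, but the gradient-step / shrinkage-step loop is the same.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ w - b)             # gradient of the smooth part
        w = soft_threshold(w - step * grad, step * lam)  # proximal (shrinkage) step
    return w
```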
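The "Dataset Splits" row describes drawing 80% of the examples at random for training in each trial. A minimal sketch of that protocol, assuming a feature matrix `X` and label vector `y` (names hypothetical):

```python
import numpy as np

def random_split(X, y, train_frac=0.8, seed=0):
    """Randomly assign train_frac of the examples to training, the rest to test.

    Mirrors the per-trial 80%/20% protocol quoted above; the seed argument is
    an added convenience for repeatable trials.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    cut = int(train_frac * len(y))
    tr, te = idx[:cut], idx[cut:]
    return X[tr], y[tr], X[te], y[te]
```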
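Finally, the experiment-setup row fixes c = 10, searches α, β, and λ over {0, 0.2, 0.4, ..., 2}, and evaluates with a k-NN classifier (k = 5) under the learned metric. The sketch below wires those pieces together; `train_daml`, its signature, and the synthetic data are illustrative assumptions (the authors' trainer is not released), while the grid, c, and k come from the paper.

```python
import itertools
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_daml(X, y, c, alpha, beta, lam):
    """Hypothetical stand-in for the paper's (unreleased) trainer.

    Returns a PSD Mahalanobis matrix M; the identity is used here so the
    sketch runs end to end.
    """
    return np.eye(X.shape[1])

def knn_error(M, X_tr, y_tr, X_te, y_te, k=5):
    """k-NN (k = 5, as in the paper) error rate under a learned metric M.

    Factor M = L L^T and map x -> L^T x, so Euclidean distance in the mapped
    space equals the learned Mahalanobis distance.
    """
    vals, vecs = np.linalg.eigh(M)
    L = vecs * np.sqrt(np.clip(vals, 0.0, None))
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr @ L, y_tr)
    return 1.0 - clf.score(X_te @ L, y_te)

# Synthetic stand-in data so the sketch is self-contained.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 10)), rng.integers(0, 3, size=200)
X_tr, y_tr, X_te, y_te = X[:160], y[:160], X[160:], y[160:]

# Grid from the paper: {0, 0.2, 0.4, ..., 2}; c is fixed to 10.
grid = np.round(np.arange(0.0, 2.0 + 1e-9, 0.2), 1)
best_err, best_params = np.inf, None
for alpha, beta, lam in itertools.product(grid, repeat=3):
    M = train_daml(X_tr, y_tr, c=10, alpha=alpha, beta=beta, lam=lam)
    err = knn_error(M, X_tr, y_tr, X_te, y_te)
    if err < best_err:
        best_err, best_params = err, (alpha, beta, lam)
print(best_params, best_err)
```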