Curvilinear Distance Metric Learning

Authors: Shuo Chen, Lei Luo, Jian Yang, Chen Gong, Jun Li, Heng Huang

NeurIPS 2019

Reproducibility assessment (variable: result, followed by the LLM's supporting response):
Research Type: Experimental. "Extensive experiments on the synthetic and real-world datasets validate the superiority of our method over the state-of-the-art metric learning models."
Researcher Affiliation: Collaboration. S. Chen, J. Yang, and C. Gong are with the PCA Lab, Key Lab of Intelligent Perception and Systems for High Dimensional Information of Ministry of Education, and Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China (E-mail: {shuochen, csjyang, chen.gong}@njust.edu.cn). L. Luo and H. Huang are with Electrical and Computer Engineering, University of Pittsburgh, and also with JD Finance America Corporation, USA (E-mail: lel94@pitt.edu, henghuanghh@gmail.com). J. Li is with the Institute for Medical Engineering & Science, Massachusetts Institute of Technology, Cambridge, MA, USA (E-mail: junli@mit.edu).
Pseudocode: Yes. Algorithm 1: Solving Eq. (11) via Stochastic Gradient Descent.
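The paper's Algorithm 1 is not reproduced in this report. As a purely illustrative sketch, a generic SGD loop for metric learning (not the paper's curvilinear model or its Eq. (11); all names and the loss form are hypothetical) might look like:

```python
import numpy as np

def sgd_metric_learning(X, y, epochs=50, batch=10, lr=1e-3, lam=1.2):
    """Generic SGD loop that learns a linear metric transform L.

    Illustrative sketch only: this is NOT the paper's Algorithm 1 or
    its curvilinear distance model; names and loss are assumptions.
    """
    n, d = X.shape
    rng = np.random.default_rng(0)
    L = np.eye(d)
    for _ in range(epochs):
        idx = rng.choice(n, size=min(batch, n), replace=False)
        grad = lam * L  # simple Frobenius-norm regularization term
        for i in idx:
            j = int(rng.integers(n))
            diff = (X[i] - X[j]).reshape(-1, 1)
            # pull same-class pairs together, push different-class apart
            sign = 1.0 if y[i] == y[j] else -1.0
            grad += sign * (L @ diff) @ diff.T / len(idx)
        L -= lr * grad
    return L
```

The learned matrix L would then be used to transform inputs before a k-NN classifier, as in standard linear metric learning pipelines.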
Open Source Code: No. The paper does not provide concrete access information (a specific repository link, an explicit code-release statement, or code in supplementary materials) for the source code of the described methodology.
Open Datasets: Yes. The datasets are from the well-known UCI machine learning repository [1], including MNIST, Autompg, Sonar, Australia, Hayes-r, Glass, Segment, Balance, Isolet, and Letters.
Dataset Splits: No. The paper describes train/test splits (e.g., '60% of all data is randomly selected for training, and the rest is used for test' and '80% of examples are randomly selected as the training examples, and the rest are used for testing'), but it does not explicitly specify a validation split.
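The reported protocol (a random 60/40 or 80/20 train/test split, with no separate validation set) could be replicated with a small helper like the following. The function name and seed handling are assumptions, not from the paper:

```python
import numpy as np

def random_split(X, y, train_frac=0.6, seed=0):
    # Sketch of the reported protocol: a random 60/40 (or 80/20 with
    # train_frac=0.8) train/test split and no held-out validation set.
    # The function name and the fixed seed are assumptions.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(train_frac * len(X))
    train, test = idx[:cut], idx[cut:]
    return X[train], y[train], X[test], y[test]
```

Repeating this over several seeds and averaging test accuracy would match the common practice of reporting means over random splits.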
Hardware Specification: No. The paper does not report the hardware used for its experiments (no GPU/CPU models, processor types, memory amounts, or other machine specifications).
Software Dependencies: No. The paper names methods and components it uses (e.g., a k-NN classifier, DSIFT, a Siamese CNN), but it does not list software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9, scikit-learn 0.24).
Experiment Setup: Yes. In the experiments, the parameters λ and c are fixed to 1.2 and 10, respectively, and the SGD parameters h and ρ are fixed to 10^3 and 10^-3, respectively.
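For reference, the fixed settings reported above could be collected in one place. The dictionary keys below are hypothetical names; only the numeric values come from the paper, and the roles of h and ρ are left as stated there (SGD parameters) without further interpretation:

```python
# Fixed hyperparameters as reported in the paper's experiment setup.
# Key names are hypothetical; only the values are taken from the text.
CONFIG = {
    "lambda": 1.2,  # parameter λ
    "c": 10,        # parameter c
    "h": 1e3,       # SGD parameter h
    "rho": 1e-3,    # SGD parameter ρ
}
```

Fixing these values across all datasets, as the paper does, means no per-dataset hyperparameter tuning (and hence no validation split) is needed to rerun the experiments.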