Provably Robust Metric Learning
Authors: Lu Wang, Xuanqing Liu, Jinfeng Yi, Yuan Jiang, Cho-Jui Hsieh
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that the proposed metric learning algorithm improves both certified robust errors and empirical robust errors (errors under adversarial attacks). |
| Researcher Affiliation | Collaboration | (1) National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China; (2) JD.com, Beijing 100101, China; (3) Department of Computer Science, University of California, Los Angeles, CA 90095 |
| Pseudocode | Yes | Algorithm 1: Adversarially robust metric learning (ARML) |
| Open Source Code | No | The paper does not contain a statement indicating the release of open-source code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | We use six public datasets on which metric learning methods perform favorably in terms of clean errors, including four small or medium-sized datasets [5]: Splice, Pendigits, Satimage, and USPS, and two image datasets, MNIST [31] and Fashion-MNIST [51]. |
| Dataset Splits | No | The paper mentions 'training data S' and 'test set' but does not specify exact train/validation/test percentages, sample counts, or explicit splitting methodology in the main text. |
| Hardware Specification | Yes | All of the methods are run on CPU (Xeon(R) E5-2620 v4 @2.10GHz). In fact, ARML is highly parallelizable and our implementation also supports GPU with the PyTorch library [41]: when running on GPU (one Nvidia TITAN Xp), the average runtime of ARML is only 10.6s. |
| Software Dependencies | No | The paper mentions the PyTorch library and metric-learn but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | No | The paper states, 'For the proposed method, we use the same hyperparameters for all the datasets (see Appendix D for the dataset statistics, more details of the experimental setting, and hyperparameter sensitivity analysis)', indicating that the specific setup details are deferred to an appendix rather than given in the main text. |