Generalization Bounds for Adversarial Metric Learning
Authors: Wen Wen, Han Li, Hong Chen, Rui Wu, Lingjuan Wu, Liangxuan Zhu
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental evaluation on real-world datasets validates our theoretical findings. |
| Researcher Affiliation | Collaboration | (1) College of Informatics, Huazhong Agricultural University, Wuhan 430070, China; (4) Horizon Robotics, Haidian District, Beijing 100190, China |
| Pseudocode | No | The paper describes its methods but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include any explicit statements about releasing source code or providing links to a code repository for its methodology. |
| Open Datasets | Yes | We adopt the following real-world datasets for experiments: the Wine [1], Spambase [2], MNIST [3] and CIFAR-10 [4] datasets. [1] https://archive.ics.uci.edu/ml/datasets/wine/ [2] https://archive.ics.uci.edu/ml/datasets/spambase/ [3] http://yann.lecun.com/exdb/mnist/ [4] https://www.kaggle.com/competitions/cifar-10/data |
| Dataset Splits | Yes | We randomly split the new dataset into training, validation and test sets with a ratio of 6 : 2 : 2, where the validation set is used for early stopping to prevent overfitting of the model. |
| Hardware Specification | No | The paper discusses training models but does not specify any hardware details such as GPU or CPU models used for the experiments. |
| Software Dependencies | No | The paper mentions the use of the Adam optimizer and ReLU activations but does not name any software libraries or version numbers, such as the Python version, the deep learning framework (e.g., TensorFlow, PyTorch), or specific library releases. |
| Experiment Setup | Yes | The learning rates of the linear model and the nonlinear model are set as 1e-2 and 1e-3, respectively. We apply ℓ∞ PGD attack [Madry et al., 2018] adversarial training to minimize the following objective function... During the training and test phases, the adversarial samples are generated by the PGD algorithm with step size ε/5, where ε is the maximum magnitude of the allowed perturbations and varies in {0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4}. |
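
The Dataset Splits row quotes a random 6 : 2 : 2 train/validation/test split, with the validation set driving early stopping. A minimal sketch of such a split is below; the function name, the fixed seed, and the NumPy-based implementation are illustrative assumptions, not details from the paper.

```python
import numpy as np

def split_indices(n, seed=0):
    """Randomly split n examples into train/val/test with a 6:2:2 ratio.

    The paper uses the validation set for early stopping; the seed and
    this helper's interface are illustrative, not from the paper.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n)
    n_train, n_val = int(0.6 * n), int(0.2 * n)
    train = perm[:n_train]
    val = perm[n_train:n_train + n_val]
    test = perm[n_train + n_val:]  # remaining ~20% of the data
    return train, val, test
```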
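
The Experiment Setup row specifies adversarial examples generated by PGD with step size ε/5 under an ℓ∞ perturbation budget. The PyTorch-style sketch below shows one plausible reading of that setup; the step count, the random start, and the classification-style `model`/`loss_fn` interface are assumptions (the paper's actual objective is a metric learning loss over sample pairs).

```python
import torch

def pgd_attack(model, loss_fn, x, y, eps, steps=5):
    """Sketch of an l-inf PGD attack with step size eps/5.

    Only the step size eps/5 and the range of eps come from the paper;
    steps, the random start, and the loss interface are assumptions.
    """
    alpha = eps / 5                      # step size quoted in the paper
    x = x.detach()
    # random start inside the eps-ball (common PGD practice; an assumption here)
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # gradient ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back into the ball
    return x_adv.detach()
```

With ε = 0.2, for example, this gives a step size of 0.04, matching the ε/5 rule quoted above; the same routine would be applied during both training and evaluation, as the setup describes.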