Monge blunts Bayes: Hardness Results for Adversarial Training

Authors: Zac Cranko, Aditya Menon, Richard Nock, Cheng Soon Ong, Zhan Shi, Christian Walder

ICML 2019

Reproducibility Variable Result LLM Response
Research Type Experimental We have performed toy experiments to demonstrate our new setting. Our objective is not to compete with the wealth of results recently published in the field, but rather to illustrate the interest that such a novel setting might hold for further experimental investigation. Compared to the state of the art, ours is a clearly two-stage setting: we first compute the adversaries assuming relevant knowledge of the learner (in our case, we rely on Theorem 12 and therefore assume that the adversary knows at least the cost c; see below), and then we learn from an adversarially transformed set of examples. We have performed two experiments: a 1D experiment involving a particular Mixup adversary, and a USPS experiment involving a closer proxy of the optimal-transport compression, which we call the Monge adversary. Table 2: log loss USPS results.
Researcher Affiliation Collaboration Zac Cranko (1, 2), Aditya Krishna Menon (3), Richard Nock (2, 1, 4), Cheng Soon Ong (2, 1), Zhan Shi (5), Christian Walder (2, 1) [...] 1: The Australian National University (Australia); 2: Data61 (Australia); 3: Google Research (USA); 4: The University of Sydney (Australia); 5: University of Illinois at Chicago (USA).
Pseudocode No The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code Yes Code at: https://gitlab.com/machlearn/monge_image_example.
Open Datasets Yes We have picked 100 examples of each of the "1" and "3" classes of the 8×8-pixel greyscale USPS handwritten digit dataset.
Dataset Splits No The paper mentions using the USPS dataset and discussing 'training / test schemes' but does not provide specific details on the train, validation, or test splits, such as percentages or sample counts.
Hardware Specification No The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or detailed computer specifications) used for running its experiments.
Software Dependencies No The paper does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiment.
Experiment Setup No The paper mentions using logistic regression and describes adversary strengths but does not provide specific experimental setup details such as hyperparameters (e.g., learning rate, batch size, optimizer settings) for the models trained.
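The two-stage setting described under Research Type can be illustrated with a short sketch. This is not the authors' code: the Mixup-style adversary, the synthetic 1D data, and the plain gradient-descent logistic regression below are all hypothetical stand-ins, chosen only to show the protocol of "first transform the sample adversarially, then train on the transformed sample".

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: a hypothetical Mixup-style adversary. Each example is replaced
# by a convex combination of itself and a randomly paired example; labels
# are mixed the same way, yielding soft targets.
def mixup_adversary(X, y, lam=0.7):
    idx = rng.permutation(len(X))
    X_adv = lam * X + (1 - lam) * X[idx]
    y_adv = lam * y + (1 - lam) * y[idx]
    return X_adv, y_adv

# Stage 2: the learner fits a logistic regression on the adversarially
# transformed sample (batch gradient descent on the log loss).
def train_logreg(X, y, lr=0.1, steps=500):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y  # gradient of the log loss w.r.t. the logits
        w -= lr * (X.T @ g) / len(X)
        b -= lr * g.mean()
    return w, b

# Toy 1D data: two well-separated Gaussian classes.
X = np.concatenate([rng.normal(-2, 1, 100), rng.normal(2, 1, 100)])[:, None]
y = np.concatenate([np.zeros(100), np.ones(100)])

X_adv, y_adv = mixup_adversary(X, y)
w, b = train_logreg(X_adv, y_adv)
```

Because the adversary here only interpolates within the sample, the learner can still recover a reasonable decision boundary on the clean data; a stronger adversary (or a Monge-style optimal-transport compression, as in the paper's USPS experiment) would degrade it further.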