Exponential Separation between Two Learning Models and Adversarial Robustness

Authors: Grzegorz Gluch, Ruediger Urbanke

NeurIPS 2021

Reproducibility Assessment

Research Type: Experimental
LLM Response: "We prove an exponential separation for the sample/query complexity between the standard PAC-learning model and a version of the Equivalence-Query-learning model. ... For the convenience of the reader we replicate experiments from Stutz et al. [2019], adapting the representation to our needs. Using the code provided by the authors, we reproduced some of their findings on F-MNIST dataset [Xiao et al., 2017]. Results are presented in Figure 1."

Researcher Affiliation: Academia
LLM Response: Grzegorz Głuch, EPFL, Lausanne, Switzerland (grzegorz.gluch@epfl.ch); Ruediger Urbanke, EPFL, Lausanne, Switzerland (ruediger.urbanke@epfl.ch)

Pseudocode: Yes
LLM Response: "Algorithm 1 EQlearner"
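
Since the paper's Algorithm 1 (EQlearner) is not reproduced in this assessment, the snippet below is only a generic, textbook-style illustration of the equivalence-query model referenced in the abstract: the learner proposes a hypothesis and an oracle either accepts it or returns a counterexample. The example learns a monotone conjunction; the function names (learn_monotone_conjunction, make_brute_force_oracle) are illustrative and do not appear in the paper.

```python
from itertools import product


def learn_monotone_conjunction(n, equivalence_oracle):
    """Learn which of the n boolean variables the target conjunction uses."""
    # Start with the most specific hypothesis: the conjunction of all n variables.
    relevant = set(range(n))

    def hypothesis(x):
        return all(x[i] == 1 for i in relevant)

    while True:
        counterexample = equivalence_oracle(hypothesis)
        if counterexample is None:  # oracle reports equivalence: done
            return relevant
        # The hypothesis is never too general, so every counterexample is a
        # positive point the hypothesis rejects; drop the variables it sets
        # to 0, since they cannot belong to the target conjunction.
        relevant &= {i for i in range(n) if counterexample[i] == 1}


def make_brute_force_oracle(n, target_vars):
    """Equivalence oracle for a small domain, implemented by exhaustive search."""
    def target(x):
        return all(x[i] == 1 for i in target_vars)

    def oracle(hypothesis):
        for x in product((0, 1), repeat=n):
            if hypothesis(x) != target(x):
                return x  # any disagreeing point serves as a counterexample
        return None

    return oracle


if __name__ == "__main__":
    oracle = make_brute_force_oracle(6, {1, 4})
    print(learn_monotone_conjunction(6, oracle))  # expected output: {1, 4}
```

Each counterexample removes at least one variable, so this toy learner needs at most n equivalence queries; the paper's separation result concerns much richer hypothesis classes that this illustration does not capture.
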
Open Source Code: No
LLM Response: The paper states, "Using the code provided by the authors, we reproduced some of their findings on F-MNIST dataset," indicating that the experiments reuse code from another paper (Stutz et al. [2019]) rather than releasing code of their own. The reproducibility checklist further states under 4(c): "Did you include any new assets either in the supplemental material or as a URL? [No] We only included an appendix with proofs and explanation of experimental setup."

Open Datasets: Yes
LLM Response: "Using the code provided by the authors, we reproduced some of their findings on F-MNIST dataset [Xiao et al., 2017]."

Dataset Splits: No
LLM Response: The paper gives general training details in Appendix B, such as "We used a standard training setup, using Adam [Kingma and Ba, 2014] optimizer with learning rate 1e-3, batch size of 128 for 50 epochs," but it does not specify train/validation/test splits or their sizes.
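
Because the paper leaves the splits unspecified, the sketch below only shows what a typical setup might look like: Fashion-MNIST's standard 60,000/10,000 train/test split as shipped by torchvision, plus an assumed (not reported) 90/10 validation carve-out. None of these split choices come from the paper.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

transform = transforms.ToTensor()
# Fashion-MNIST ships with a fixed 60,000-image train set and 10,000-image test set.
full_train = datasets.FashionMNIST("data", train=True, download=True, transform=transform)
test_set = datasets.FashionMNIST("data", train=False, download=True, transform=transform)

# Assumed 90/10 train/validation split; the paper does not report one.
val_size = len(full_train) // 10
train_set, val_set = random_split(
    full_train,
    [len(full_train) - val_size, val_size],
    generator=torch.Generator().manual_seed(0),
)
print(len(train_set), len(val_set), len(test_set))  # 54000 6000 10000
```
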
Hardware Specification: Yes
LLM Response: "All experiments were performed on a single GeForce GTX 1080 Ti GPU."

Software Dependencies: No
LLM Response: The paper mentions using the "Adam [Kingma and Ba, 2014] optimizer" but does not specify version numbers for any software, libraries, or programming languages used in the experiments.

Experiment Setup: Yes
LLM Response: "We used a standard training setup, using Adam [Kingma and Ba, 2014] optimizer with learning rate 1e-3, batch size of 128 for 50 epochs."
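
The reported hyperparameters map directly onto a standard PyTorch training loop. The sketch below is a reconstruction under assumptions, not the authors' code (the paper reuses code from Stutz et al. [2019]): the optimizer, learning rate, batch size, and epoch count follow the quote above, while the small CNN is a placeholder, since the quoted text does not describe the model architecture.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

train_set = datasets.FashionMNIST("data", train=True, download=True,
                                  transform=transforms.ToTensor())
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)  # batch size 128, as reported

model = nn.Sequential(  # placeholder architecture, not specified in the quoted text
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(64 * 7 * 7, 10),
).to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, lr 1e-3, as reported
criterion = nn.CrossEntropyLoss()

for epoch in range(50):  # 50 epochs, as reported
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Any adversarial-training specifics from Stutz et al. [2019] are intentionally omitted; the loop only mirrors the optimizer settings quoted in the entry above.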