On the Need for Topology-Aware Generative Models for Manifold-Based Defenses

Authors: Uyeong Jang, Susmit Jha, Somesh Jha

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we empirically demonstrate the consequence of the two theorems and explore their implications for the INC defense." Section 5 describes the paper's experimental verification of its theoretical results and investigates their ramifications for a manifold-based defense called Invert-and-Classify (INC).
Researcher Affiliation | Collaboration | Uyeong Jang, Department of Computer Sciences, University of Wisconsin–Madison, Madison, WI, USA (wjang@cs.wisc.edu); Susmit Jha, Computer Science Laboratory, SRI International, Menlo Park, CA, USA (susmit.jha@sri.com); Somesh Jha, Department of Computer Sciences, University of Wisconsin–Madison, Madison, WI, USA, and XaiPient, Princeton, NJ, USA (jha@cs.wisc.edu)
Pseudocode | No | The paper includes mathematical equations and descriptions of procedures but does not contain any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statements about making its source code available, nor does it include a link to a code repository for the methodology described.
Open Datasets | No | The paper describes using "three toy datasets in R2: two-moons, spirals, and circles" and provides their parameterizations in Table 1 for construction. However, it does not provide concrete access information (e.g., specific links, DOIs, or citations to established public datasets) for these datasets.
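Although the paper's exact Table 1 parameterizations are not reproduced in this report, the three toy datasets can be generated from standard closed-form curves in R2. The sketch below uses common textbook parameterizations (an assumption, not the paper's exact constants) with NumPy only:

```python
import numpy as np

def two_moons(n=200, rng=None):
    # Two interleaving half-circles (standard parameterization;
    # the paper's exact Table 1 constants are not reproduced here).
    if rng is None:
        rng = np.random.default_rng(0)
    t = rng.uniform(0.0, np.pi, n)
    upper = np.stack([np.cos(t), np.sin(t)], axis=1)
    lower = np.stack([1.0 - np.cos(t), 0.5 - np.sin(t)], axis=1)
    X = np.concatenate([upper, lower])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

def circles(n=200, r_inner=0.5, r_outer=1.0, rng=None):
    # Two concentric circles of different radii.
    if rng is None:
        rng = np.random.default_rng(0)
    t = rng.uniform(0.0, 2.0 * np.pi, n)
    unit = np.stack([np.cos(t), np.sin(t)], axis=1)
    X = np.concatenate([r_inner * unit, r_outer * unit])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

def spirals(n=300, turns=2.0, rng=None):
    # Two interlocking Archimedean spirals (the second arm is the
    # first reflected through the origin).
    if rng is None:
        rng = np.random.default_rng(0)
    t = rng.uniform(0.0, turns * 2.0 * np.pi, n)
    r = t / (turns * 2.0 * np.pi)
    arm1 = np.stack([r * np.cos(t), r * np.sin(t)], axis=1)
    X = np.concatenate([arm1, -arm1])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y
```

Each function returns a `(2n, 2)` point array and a binary label vector, matching the "toy datasets in R2" description.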
Dataset Splits | No | The paper mentions constructing a "training set" by sampling and perturbing points but does not specify explicit train/validation/test dataset splits (e.g., percentages, sample counts, or predefined splits from benchmark datasets).
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions software such as "Tensorflow Probability", the "Scikit-learn package", and the "adam optimizer in Tensorflow package", but does not specify their version numbers, which are required for reproducible software dependencies.
Experiment Setup | Yes | Each model was trained for 30,000 iterations. For each iteration, a batch of 200 random samples was chosen from the two-moons and circles datasets, and a batch of 300 random samples was chosen from the spirals dataset. For optimization parameters, 100 iterations of the Adam optimizer were run with learning rate 0.01 and random sampling of z. For all datasets, r = 0.2 was used for the perturbation size. The baseline SVMs were intentionally ill-trained by using the high kernel coefficient γ = 100.
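Two of the reported hyperparameters can be made concrete with a short sketch: constructing a training set by perturbing sampled points within radius r = 0.2, and the RBF kernel with γ = 100 that makes the baseline SVMs intentionally ill-trained. The perturbation scheme shown (a random offset of norm at most r) is one plausible reading of "sampling and perturbing"; the paper's exact scheme is not specified in this report, so treat it as an assumption:

```python
import numpy as np

R = 0.2          # perturbation radius reported in the paper
GAMMA = 100.0    # intentionally high RBF kernel coefficient

def perturb(points, r=R, rng=None):
    # Add a random offset of norm <= r to each 2-D point. This is an
    # assumed perturbation scheme, not necessarily the paper's.
    if rng is None:
        rng = np.random.default_rng(0)
    d = rng.normal(size=points.shape)
    d /= np.linalg.norm(d, axis=1, keepdims=True)  # unit directions
    radii = r * rng.uniform(size=(len(points), 1))  # radii in [0, r)
    return points + radii * d

def rbf_kernel(A, B, gamma=GAMMA):
    # K(a, b) = exp(-gamma * ||a - b||^2). With gamma = 100 the kernel
    # is extremely local, so the decision boundary overfits individual
    # training points -- the "ill-trained" baseline behavior.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)
```

With γ = 100, two points only 0.3 apart already have kernel similarity exp(-9) ≈ 0.0001, which illustrates why such an SVM generalizes poorly off the sampled manifold.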