Beyond Pixel Norm-Balls: Parametric Adversaries using an Analytically Differentiable Renderer

Authors: Hsueh-Ti Derek Liu, Michael Tao, Chun-Liang Li, Derek Nowrouzezahrai, Alec Jacobson

ICLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We perform perturbations in the underlying image formation parameter space using a novel physically-based differentiable renderer. Our differentiable renderer achieves state-of-the-art performance in speed and scalability (Section 3) and is fast enough for rendered adversarial data augmentation (Section 5): training augmented with adversarial images generated with a renderer. ... Ours is among the first steps towards the deployment of rendered adversarial data augmentation in real-world applications: we train a classifier with computer-generated adversarial images, evaluating the performance of the training against real photographs (i.e., captured using cameras; Section 5). (A minimal sketch of this parametric-adversary setup appears after the table.)
Researcher Affiliation | Academia | Hsueh-Ti Derek Liu, University of Toronto, hsuehtil@cs.toronto.edu; Michael Tao, University of Toronto, mtao@dgp.toronto.edu; Chun-Liang Li, Carnegie Mellon University, chunlial@cs.cmu.edu; Derek Nowrouzezahrai, McGill University, derek@cim.mcgill.ca; Alec Jacobson, University of Toronto, jacobson@cs.toronto.edu
Pseudocode | No | The paper includes mathematical derivations and descriptions of methods but does not contain a dedicated pseudocode block or algorithm listing.
Open Source Code | No | The paper does not contain an explicit statement about the release of open-source code for the methodology, nor a link to a code repository.
Open Datasets | Yes | We train the Wide ResNet (16 layers, wide factor 4) (Zagoruyko & Komodakis, 2016) on CIFAR-100 (Krizhevsky & Hinton, 2009)...
Dataset Splits | Yes | We train the model for 150 epochs and use the one with best accuracy on the validation set. (A sketch of an assumed train/validation split appears after the table.)
Hardware Specification | Yes | We evaluate our CPU Python implementation and the OpenGL rendering on an Intel Xeon 3.5GHz CPU with 64GB of RAM and an NVIDIA GeForce GTX 1080.
Software Dependencies | Yes | We implement the training using PyTorch (Paszke et al., 2017), with the SGD optimizer, Nesterov momentum 0.9, and weight decay 5e-4.
Experiment Setup | Yes | When training the 16-layer Wide ResNet (Zagoruyko & Komodakis, 2016) with wide factor 4, we use batch size 128, learning rate 0.125, dropout rate 0.3, and the standard cross-entropy loss. We implement the training using PyTorch (Paszke et al., 2017), with the SGD optimizer, Nesterov momentum 0.9, and weight decay 5e-4. We train the model for 150 epochs and use the one with best accuracy on the validation set. (These hyperparameters are mapped onto a PyTorch training loop in the sketch after the table.)
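
To make the Research Type row concrete, here is a minimal sketch of the parametric-adversary idea: rather than perturbing pixels inside a norm-ball, gradient ascent runs on image-formation parameters (here, spherical-harmonic lighting coefficients) through a differentiable renderer. The `render` function, shading basis `B`, image size, and classifier below are hypothetical stand-ins for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for the paper's analytically differentiable renderer. Under
# spherical-harmonic lighting with Lambertian shading, the rendered image is
# linear in the SH lighting coefficients, so a fixed linear map is a fair proxy.
H = W = 64
B = torch.randn(3 * H * W, 9 * 3)        # hypothetical precomputed shading basis

def render(lighting):                    # lighting: (9, 3) SH coefficients
    img = (B @ lighting.reshape(-1)).reshape(3, H, W)
    return torch.sigmoid(img)            # keep pixel values in [0, 1]

# Stand-in classifier; the paper attacks trained networks such as WideResNet.
classifier = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * H * W, 100))

lighting = torch.zeros(9, 3)
lighting[0, :] = 1.0                     # start from flat ambient light (assumed)
lighting.requires_grad_(True)
label = torch.tensor([17])               # the image's current (correct) class
opt = torch.optim.SGD([lighting], lr=0.05)

for _ in range(30):
    loss = F.cross_entropy(classifier(render(lighting).unsqueeze(0)), label)
    opt.zero_grad()
    (-loss).backward()                   # gradient *ascent* on the loss:
    opt.step()                           # perturb lighting toward misclassification
```

The key property is that the adversarial search space is the handful of physically meaningful lighting coefficients, not the full pixel grid; gradients reach those parameters only because the renderer itself is differentiable.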
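For the Dataset Splits row, the paper selects the checkpoint with the best validation accuracy but does not state the split sizes. A sketch under that caveat, assuming a 45,000/5,000 random split of the CIFAR-100 training set (the split ratio and seed are assumptions, not from the paper):

```python
import torch
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

transform = transforms.ToTensor()
full_train = datasets.CIFAR100("data", train=True, download=True, transform=transform)

# The paper does not report split sizes; 45k/5k is an assumed choice here.
train_set, val_set = random_split(
    full_train, [45000, 5000], generator=torch.Generator().manual_seed(0)
)

train_loader = DataLoader(train_set, batch_size=128, shuffle=True)  # batch size 128 per the paper
val_loader = DataLoader(val_set, batch_size=128)
```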
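For the Software Dependencies and Experiment Setup rows, the reported hyperparameters map onto a PyTorch training loop roughly as follows. The model below is a placeholder (the actual network is a WideResNet 16-4, whose definition the paper does not include), and `train_loader`/`val_loader` come from the split sketch above.

```python
import copy
import torch
import torch.nn.functional as F

# Placeholder model; the paper trains a WideResNet with 16 layers, wide factor 4.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 100))

# Hyperparameters as reported: SGD, Nesterov momentum 0.9, weight decay 5e-4, lr 0.125.
opt = torch.optim.SGD(
    model.parameters(), lr=0.125, momentum=0.9, nesterov=True, weight_decay=5e-4
)

best_acc, best_state = 0.0, None
for epoch in range(150):                        # 150 epochs as reported
    model.train()
    for x, y in train_loader:
        loss = F.cross_entropy(model(x), y)     # standard cross-entropy loss
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Keep the checkpoint with the best validation accuracy, as the paper does.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in val_loader:
            correct += (model(x).argmax(1) == y).sum().item()
            total += y.numel()
    acc = correct / total
    if acc > best_acc:
        best_acc, best_state = acc, copy.deepcopy(model.state_dict())
```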