Designing Counterfactual Generators using Deep Model Inversion

Authors: Jayaraman J. Thiagarajan, Vivek Sivaraman Narayanaswamy, Deepta Rajan, Jason Liang, Akshay Chaudhari, Andreas Spanias

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical studies using natural-image and medical-image classifiers to demonstrate the effectiveness of DISC over a variety of baselines and ablations.
Researcher Affiliation | Collaboration | Jayaraman J. Thiagarajan (Lawrence Livermore National Laboratory, jjayaram@llnl.gov); Vivek Narayanaswamy (Arizona State University, vnaray29@asu.edu); Deepta Rajan (IBM Research AI, r.deepta@gmail.com); Jason Liang (Stanford University, jialiang@stanford.edu); Akshay Chaudhari (Stanford University, akshaysc@stanford.edu); Andreas Spanias (Arizona State University, spanias@asu.edu)
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include an explicit statement about releasing source code for the described methodology or a link to a code repository.
Open Datasets | Yes | Datasets. (i) CelebA Faces [31]: This dataset contains 202,599 images along with a wide range of attributes... (ii) ISIC 2018 Skin Lesion Dataset [32]: This lesion diagnosis challenge dataset contains a total of 10,015 dermoscopic lesion images...
Dataset Splits | No | The only split information given is: "Note, in all cases, we used a stratified 90/10 data split to train the classifiers." No further partitioning details or random seed are reported (a hypothetical split sketch follows this table).
Hardware Specification | No | The paper mentions "Lawrence Livermore National Laboratory" but does not provide specific hardware details such as GPU or CPU models used for experiments.
Software Dependencies | No | The paper mentions ResNet-18 and the Adam optimizer but does not provide specific software version numbers for libraries or frameworks used.
Experiment Setup | Yes | For all experiments, we resized the images to size 96×96 and used the standard ResNet-18 architecture [34] to train the classifier model with the Adam optimizer [35], batch size 128, learning rate 1e-4 and momentum 0.9. For the DEP implementation (Section 3.3), we performed average pooling on feature maps from each of the residual blocks in ResNet-18, and applied a linear layer of 128 units with ReLU activation. The hyper-parameters in (7) were set at β1 = 1.0 and β2 = 0.5. For the case of DUQ, we set both the length scale parameter and the gradient penalty to 0.5. (A training-setup sketch follows this table.)
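
Since the Dataset Splits row above is the paper's only statement about data partitioning, here is a minimal sketch of how a stratified 90/10 split could be reproduced. It assumes scikit-learn; the arrays, class count, and random seed are placeholders, not values from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder stand-ins for the real image arrays and labels.
images = np.random.rand(1000, 96, 96, 3)     # the paper resizes inputs to 96x96
labels = np.random.randint(0, 7, size=1000)  # e.g., the 7 ISIC 2018 lesion classes

X_train, X_val, y_train, y_val = train_test_split(
    images, labels,
    test_size=0.10,    # the stratified 90/10 split quoted above
    stratify=labels,   # keep per-class proportions identical across splits
    random_state=0,    # assumption: the paper does not report a seed
)
```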
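The Experiment Setup row translates naturally into a short training configuration. The following PyTorch sketch is an assumption-laden reconstruction, not the authors' released code (none was released): it builds the ResNet-18 classifier with the stated optimizer settings, and a hypothetical `DEPHead` module implementing the described per-block average pooling followed by a 128-unit linear layer with ReLU. Reading "momentum 0.9" as Adam's beta1 is also an assumption.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Classifier setup as stated in the paper: ResNet-18, Adam, lr 1e-4,
# batch size 128; "momentum 0.9" is read here as Adam's beta1 (assumption).
model = resnet18(num_classes=7)  # class count depends on the dataset
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
criterion = nn.CrossEntropyLoss()

class DEPHead(nn.Module):
    """Hypothetical reconstruction of the DEP feature extractor (Section 3.3):
    global average pooling on each residual block's output, followed by a
    128-unit linear layer with ReLU. Name and wiring are assumptions."""

    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Output channel widths of ResNet-18's four residual stages.
        self.proj = nn.ModuleList(
            nn.Sequential(nn.Linear(c, 128), nn.ReLU())
            for c in (64, 128, 256, 512)
        )

    def forward(self, x: torch.Tensor) -> list[torch.Tensor]:
        b = self.backbone
        x = b.maxpool(b.relu(b.bn1(b.conv1(x))))
        feats = []
        for proj, stage in zip(self.proj, (b.layer1, b.layer2, b.layer3, b.layer4)):
            x = stage(x)
            feats.append(proj(self.pool(x).flatten(1)))  # (B, 128) per block
        return feats

# Smoke test on a batch of 96x96 inputs, matching the stated resize.
dep = DEPHead(model)
features = dep(torch.randn(2, 3, 96, 96))
assert all(f.shape == (2, 128) for f in features)
```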