Explanation by Progressive Exaggeration

Authors: Sumedha Singla, Brian Pollack, Junxiang Chen, Kayhan Batmanghelich

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We set up four experiments to evaluate our method. First, we assess if our method satisfies the three criteria of the explainer function introduced in Section 2. We report both qualitative and quantitative results. Second, we apply our method on a medical image diagnosis task. We use external domain knowledge about the disease to perform a quantitative evaluation of the explanation. Third, we train two classifiers on biased and unbiased data and examine the performance of our method in identifying the bias.
Researcher Affiliation | Academia | Sumedha Singla (Department of Computer Science, University of Pittsburgh); Brian Pollack and Junxiang Chen (Department of Biomedical Informatics, University of Pittsburgh); Kayhan Batmanghelich (Department of Biomedical Informatics, Department of Computer Science, and Intelligent Systems Program, University of Pittsburgh)
Pseudocode | No | The paper does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions that "The VAE used is available at: https://github.com/LynnHo/VAE-Tensorflow" but does not state that the code for the authors' own method is open source or provide a link to it.
Open Datasets | Yes | Our experiments are conducted on the CelebA (Liu et al., 2015) and CheXpert (Irvin et al., 2019) datasets.
Dataset Splits | No | The paper mentions a "validation set" in Section 4.4 and a "train" column in Table 6, but does not provide split percentages, sample counts, or the methodology used to split the datasets.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used for running its experiments.
Software Dependencies | No | The paper refers to "VAE-Tensorflow" and discusses model architectures, but does not list specific software dependencies with version numbers (e.g., Python 3.x, TensorFlow 1.x).
Experiment Setup | Yes | The overall objective function is min_{E,G} max_D λ_cGAN L_cGAN(D, G) + λ_f L_f(D, G) + λ_rec L_rec(G) + λ_rec L_cyc(G), where λ_cGAN, λ_f, and λ_rec are the hyper-parameters that balance the importance of the loss terms. [...] In the ablation study, we quantify the importance of each of these components by training different models, which differ in one hyper-parameter while the rest are equivalent (λ_cGAN = 1, λ_f = 1, and λ_rec = 100).
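The quoted objective can be sketched as a weighted sum of four scalar loss terms. The helper below is a hypothetical illustration (not the authors' code); note that the reconstruction weight λ_rec is shared between the reconstruction and cycle-consistency terms, matching the three hyper-parameters listed in the ablation study.

```python
def total_loss(l_cgan, l_f, l_rec, l_cyc,
               lam_cgan=1.0, lam_f=1.0, lam_rec=100.0):
    """Weighted sum of the cGAN, classifier-consistency (f),
    reconstruction, and cycle-consistency loss terms.

    Defaults are the ablation-study values from the paper
    (lam_cGAN = 1, lam_f = 1, lam_rec = 100); lam_rec weights
    both the reconstruction and cycle terms.
    """
    return lam_cgan * l_cgan + lam_f * l_f + lam_rec * (l_rec + l_cyc)

# Example with dummy scalar loss values (placeholders, not real results):
loss = total_loss(l_cgan=0.5, l_f=0.2, l_rec=0.01, l_cyc=0.02)
```

In a training loop, this combined scalar would be minimized over the encoder E and generator G while being maximized over the discriminator D, as in the min-max formulation above.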