Diffusion Visual Counterfactual Explanations

Authors: Maximilian Augustin, Valentyn Boreiko, Francesco Croce, Matthias Hein

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we evaluate the quality of DVCEs. We compare DVCEs to existing works in Sec. 4.1. In Sec. 4.2, we compare DVCEs for various state-of-the-art ImageNet models and show how DVCEs can be used to interpret differences between classifiers.
Researcher Affiliation | Academia | Maximilian Augustin, Valentyn Boreiko, Francesco Croce, Matthias Hein (University of Tübingen)
Pseudocode | No | The paper references "Algorithm 1 of [14]" but does not contain its own pseudocode or algorithm block.
Open Source Code | Yes | Code is available at https://github.com/valentyn1boreiko/DVCEs.
Open Datasets | Yes | ImageNet and ImageNet-21k are public datasets.
Dataset Splits | No | The paper does not provide explicit train/validation/test split percentages or sample counts. The checklist states: "Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [N/A] We didn't train models."
Hardware Specification | No | The main paper does not specify the hardware (e.g., GPU/CPU models) used for the experiments. It refers to Appendix D for this information, which is not included in the provided text.
Software Dependencies | No | The paper mentions software components such as "UNet", "StyleGAN2", "Swin-TF", "ConvNeXt", and "EfficientNet", but it does not specify version numbers for these or any other software dependencies.
Experiment Setup | Yes | In all our experiments we use Cc = 0.1 and Cd = 0.15 unless we show ablations for one of the parameters. The angle for the cone projection is fixed to 30°. In our experiments, we set T = 200.
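The cone projection quoted above constrains one gradient to lie within a fixed angle of a reference gradient. A minimal NumPy sketch of such a projection is shown below; the function name `cone_project` and its interface are illustrative assumptions, not the authors' implementation, and only the 30° half-angle comes from the paper's stated setup.

```python
import numpy as np

def cone_project(g, r, angle_deg=30.0):
    """Project a vector g onto the cone of half-angle `angle_deg`
    around the reference direction r.

    Hypothetical helper for illustration; only the 30-degree angle
    comes from the experiment setup quoted above.
    """
    alpha = np.deg2rad(angle_deg)
    r_hat = r / np.linalg.norm(r)
    cos_theta = np.dot(g, r_hat) / np.linalg.norm(g)
    if cos_theta >= np.cos(alpha):
        return g  # already inside the cone: keep g unchanged
    if cos_theta <= -np.sin(alpha):
        return np.zeros_like(g)  # points away from the cone: project to the apex
    # Otherwise project onto the cone boundary, which is spanned by r_hat
    # and the unit vector along g's component perpendicular to r_hat.
    g_perp = g - np.dot(g, r_hat) * r_hat
    boundary = np.cos(alpha) * r_hat + np.sin(alpha) * g_perp / np.linalg.norm(g_perp)
    return np.dot(g, boundary) * boundary
```

For example, a vector already within 30° of the reference is returned unchanged, while a vector at 90° to it is pulled onto the 30° cone boundary.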