Roadblocks for Temporarily Disabling Shortcuts and Learning New Knowledge

Authors: Hongjing Niu, Hanting Li, Feng Zhao, Bin Li

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that the proposed framework significantly improves the training of networks on both synthetic and real-world datasets in terms of both classification accuracy and feature diversity.
Researcher Affiliation | Academia | Hongjing Niu, Hanting Li, Feng Zhao, Bin Li; University of Science and Technology of China, Hefei, China; {sasori, ab828658}@mail.ustc.edu.cn, {fzhao956, binli}@ustc.edu.cn
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures).
Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See Appendix
Open Datasets | Yes | We use the synthetic dataset CMNIST [15], which adds a second attribute by coloring MNIST [14]. We also use the real-world datasets CelebA [20] and BAR [23], which were validated to have shortcuts, to test the practicality of our method. All the datasets we use are public. (A sketch of CMNIST-style coloring follows this table.)
Dataset Splits | Yes | Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Appendix
Hardware Specification | No | Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No]
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers like Python 3.8, CPLEX 12.4) needed to replicate the experiment.
Experiment Setup | Yes | Details of datasets and implementation are described in the appendix. Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Appendix
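The Open Datasets row notes that CMNIST adds a second attribute to MNIST by coloring the digits, which introduces a controllable shortcut. As a rough illustration only, the sketch below shows one common way such a biased coloring is generated; the color palette, the `bias_ratio` parameter, and the `colorize` helper are assumptions for this sketch, not the authors' exact construction from their appendix.

```python
# Hypothetical sketch of CMNIST-style coloring: tint each grayscale digit so that
# color correlates with the class label for most samples (the "shortcut" attribute).
# Palette, bias_ratio, and function names are illustrative assumptions.
import numpy as np
from torchvision import datasets

# Ten distinct RGB colors, one loosely associated with each digit class (assumed palette).
PALETTE = np.array([
    [255, 0, 0], [0, 255, 0], [0, 0, 255], [255, 255, 0], [255, 0, 255],
    [0, 255, 255], [255, 128, 0], [128, 0, 255], [0, 128, 128], [128, 128, 0]
], dtype=np.uint8)

def colorize(images, labels, bias_ratio=0.95, seed=0):
    """Tint grayscale digits so that color predicts the label for a
    fraction `bias_ratio` of samples and is random otherwise."""
    rng = np.random.default_rng(seed)
    n = len(labels)
    colored = np.zeros((n, 28, 28, 3), dtype=np.uint8)
    for i in range(n):
        # With probability bias_ratio, use the label-aligned color; otherwise a random one.
        if rng.random() < bias_ratio:
            color = PALETTE[labels[i]]
        else:
            color = PALETTE[rng.integers(10)]
        colored[i] = (images[i][..., None].astype(np.float32) / 255.0 * color).astype(np.uint8)
    return colored

mnist = datasets.MNIST(root="./data", train=True, download=True)
images = mnist.data.numpy()      # (60000, 28, 28) uint8 grayscale digits
labels = mnist.targets.numpy()   # (60000,) digit labels
cmnist_images = colorize(images, labels)
```

In typical CMNIST-style setups, the bias ratio controls how strongly color predicts the digit label; a high ratio makes color an easy shortcut that a network can exploit instead of learning digit shape, which is the failure mode the paper's framework is designed to counter.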