Asymmetry Learning for Counterfactually-invariant Classification in OOD Tasks
Authors: S Chandra Mouli, Bruno Ribeiro
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we show how to learn counterfactually-invariant representations with asymmetry learning in two simulated physics tasks and six image classification tasks. |
| Researcher Affiliation | Academia | S Chandra Mouli, Department of Computer Science, Purdue University (chandr@purdue.edu); Bruno Ribeiro, Department of Computer Science, Purdue University (ribeiro@cs.purdue.edu) |
| Pseudocode | No | No explicit pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not explicitly state that source code for its methodology is released, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | We use the MNIST-{3, 4} (colored) dataset (Mouli & Ribeiro, 2021) that only contains digits 3 and 4, and follow their experimental setup. (An illustrative construction sketch follows the table.) |
| Dataset Splits | No | The paper mentions training and test data but does not explicitly provide details about specific training/validation/test splits, percentages, or sample counts. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper mentions architectures like VGG but does not list specific software dependencies (e.g., libraries, frameworks) along with their version numbers. |
| Experiment Setup | No | The paper describes model architectures and the scoring criterion but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings. |
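
The Open Datasets row above references the colored MNIST-{3, 4} dataset of Mouli & Ribeiro (2021). As a point of reference, the sketch below shows one plausible way to assemble such a subset with torchvision. The coloring rule, the relabeling of digits 3/4 to 0/1, and the function name `colored_mnist_34` are assumptions made for illustration; this is not the dataset-construction code used in the paper.

```python
# Minimal, illustrative sketch (not the authors' released code): build a
# colored MNIST subset restricted to digits 3 and 4, in the spirit of the
# MNIST-{3, 4} (colored) dataset referenced above. The coloring scheme and
# split handling here are assumptions for illustration only.
import torch
from torchvision import datasets, transforms


def colored_mnist_34(root: str = "./data", train: bool = True) -> torch.utils.data.TensorDataset:
    """Return a TensorDataset of colorized MNIST images containing only 3s and 4s."""
    mnist = datasets.MNIST(root=root, train=train, download=True,
                           transform=transforms.ToTensor())
    images = mnist.data.float() / 255.0          # (N, 28, 28), values in [0, 1]
    labels = mnist.targets

    # Keep only digits 3 and 4; relabel them as 0 and 1 for binary classification.
    mask = (labels == 3) | (labels == 4)
    images, labels = images[mask], (labels[mask] == 4).long()

    # Hypothetical coloring rule: tint each grayscale digit into one of two
    # RGB channels (red or green) chosen at random, yielding 3-channel inputs.
    colors = torch.randint(0, 2, (images.shape[0],))
    rgb = torch.zeros(images.shape[0], 3, 28, 28)
    rgb[colors == 0, 0] = images[colors == 0]    # red-tinted digits
    rgb[colors == 1, 1] = images[colors == 1]    # green-tinted digits

    return torch.utils.data.TensorDataset(rgb, labels)


if __name__ == "__main__":
    train_set = colored_mnist_34(train=True)
    print(f"{len(train_set)} colored 3/4 digits, "
          f"first sample shape {train_set[0][0].shape}")
```

The sketch uses the standard MNIST train/test flag exposed by torchvision; since the paper does not report its own split percentages (see the Dataset Splits row), any further validation split would also be an assumption on the reader's part.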