Invariant Risk Minimization Games

Authors: Kartik Ahuja, Karthikeyan Shanmugam, Kush Varshney, Amit Dhurandhar

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 5. Experiments. Table 1. Colored MNIST: Comparison of methods in terms of training, testing accuracy (mean ± std deviation).
Researcher Affiliation | Industry | IBM Research, Thomas J. Watson Research Center, Yorktown Heights, NY. Correspondence to: Kartik Ahuja <kartik.ahuja@ibm.com>.
Pseudocode | Yes | Algorithm 1 Best Response Training
Open Source Code | Yes | The source code is available at https://github.com/IBM/IRM-games.
Open Datasets | Yes | Colored MNIST dataset. In Arjovsky et al. (2019), the comparisons were done on a colored digits MNIST dataset. We create the same dataset for our experiments.
Dataset Splits | No | There are three environments (two training environments containing 30,000 points each, one test environment containing 10,000 points). We add noise to the preliminary label (y = 0 if the digit is between 0-4 and y = 1 if the digit is between 5-9) by flipping it with 25 percent probability to construct the final labels.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers like Python 3.8, CPLEX 12.4) needed to replicate the experiment.
Experiment Setup | No | The paper states: "The details on architectures, hyperparameters, and optimizers used are in the supplement." These details are not provided in the main text of the paper.
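The Dataset Splits row above describes how the Colored MNIST labels are built: a binary label derived from the digit, flipped with 25 percent probability, across three environments. Below is a minimal sketch of that construction, assuming the two-channel color encoding used in Arjovsky et al. (2019); the environment-specific color-correlation probability (`e_prob`) and the stand-in data are placeholders not specified in this excerpt.

```python
import numpy as np

def make_environment(images, digits, e_prob, rng):
    """Build one Colored MNIST environment (sketch, not the authors' code).

    images : (n, 28, 28) grayscale MNIST images
    digits : (n,) digit labels 0-9
    e_prob : probability that the color disagrees with the label
             (environment-specific; placeholder value)
    """
    # Preliminary binary label: 0 for digits 0-4, 1 for digits 5-9.
    y = (digits >= 5).astype(np.int64)
    # Flip the preliminary label with 25% probability to get the final label.
    y = np.where(rng.random(len(y)) < 0.25, 1 - y, y)
    # Color correlates with the (noisy) label, disagreeing with probability e_prob.
    color = np.where(rng.random(len(y)) < e_prob, 1 - y, y)
    # Encode color as two channels: zero out the channel the color does not select.
    colored = np.stack([images, images], axis=1)  # shape (n, 2, 28, 28)
    colored[np.arange(len(y)), 1 - color] = 0
    return colored, y

# Example usage with random stand-in data (replace with real MNIST splits):
rng = np.random.default_rng(0)
fake_images = rng.random((30000, 28, 28))
fake_digits = rng.integers(0, 10, size=30000)
x_env1, y_env1 = make_environment(fake_images, fake_digits, e_prob=0.2, rng=rng)
```

In the original Colored MNIST construction, the test environment uses a color-correlation probability very different from the training environments', which is what makes color a spurious feature that invariant methods are meant to ignore.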