Environment Diversification with Multi-head Neural Network for Invariant Learning
Authors: Bo-Wei Huang, Keng-Te Liao, Chang-Sheng Kao, Shou-De Lin
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 4, "Experiments": We empirically validate the proposed method on biased datasets, Adult-Confounded, CMNIST, Waterbirds and SNLI. |
| Researcher Affiliation | Academia | Bo-Wei Huang, Keng-Te Liao, Chang-Sheng Kao, Shou-De Lin; Department of Computer Science and Information Engineering, National Taiwan University, Taiwan; {r10922007,d05922001,b07902046,sdlin}@csie.ntu.edu.tw |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks clearly labeled as such. |
| Open Source Code | No | The paper does not provide concrete access to source code for the described methodology, such as a repository link or an explicit code-release statement. |
| Open Datasets | Yes | We empirically validate the proposed method on biased datasets, Adult-Confounded, CMNIST, Waterbirds and SNLI. ... We take UCI Adult [16] ... We report our evaluation on a noisy digit recognition dataset, CMNIST. Following [1], we first assign Y = 1 to those whose digits are smaller than 5 and Y = 0 to the others. ... In Waterbirds [29], each bird photograph, from CUB dataset [31], is combined with one background image, from Places dataset [33]. ... The target of SNLI [3] is to predict the relation between two given sentences, premise and hypothesis. (The CMNIST label rule is sketched below the table.) |
| Dataset Splits | Yes | For hyper-parameter tuning, we split 10% of training data to construct an in-distribution validation set. ... To split a validation set whose overall distribution is i.i.d. to the training set, we merge original training and validation data and split 10% for hyper-parameter tuning. (This split is sketched below the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions models such as ResNet and DistilBERT, but it does not specify version numbers for any key software components or libraries required to replicate the experiments. |
| Experiment Setup | Yes | For hyper-parameter tuning, we split 10% of training data to construct an in-distribution validation set. ... For all competitors, MLP is taken as the base model and full-batch training is implemented. ... MLP with one hidden layer of 96 neurons is considered. ... MLP with two hidden layers of 390 neurons... Resnet-34 [14] is chosen for mini-batch fine-tuning. ... Distil BERT [30] is taken as the pre-trained model for further mini-batch fine-tuning. (The MLP configurations are sketched below the table.) |
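
The quoted CMNIST construction assigns binary labels by thresholding the original digit. A minimal sketch of that rule, assuming torchvision's MNIST loader and omitting the colouring and label-noise steps of the full CMNIST protocol, which the excerpt does not spell out:

```python
from torchvision import datasets

# Load the original MNIST training digits (loader choice is an assumption;
# the excerpt only states the labeling rule itself).
mnist = datasets.MNIST(root="data", train=True, download=True)
digits = mnist.targets             # tensor of digit labels 0-9

# Quoted rule: Y = 1 for digits smaller than 5, Y = 0 otherwise.
binary_labels = (digits < 5).long()
```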
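
The dataset-split row states that 10% of the training data is held out as an in-distribution validation set for hyper-parameter tuning. A minimal sketch of such a split, assuming scikit-learn's train_test_split and a placeholder dataset size (the excerpt specifies only the 10% proportion):

```python
import numpy as np
from sklearn.model_selection import train_test_split

n_train = 50_000                   # placeholder size; not taken from the paper
indices = np.arange(n_train)

# Hold out 10% of the training indices as an in-distribution validation set.
train_idx, val_idx = train_test_split(indices, test_size=0.10, random_state=0)
```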
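
The experiment-setup row quotes two MLP base models: one hidden layer of 96 neurons and two hidden layers of 390 neurons. A minimal PyTorch sketch of those widths, with input and output dimensions as placeholders (the excerpt gives only the hidden-layer sizes):

```python
import torch.nn as nn

IN_DIM, OUT_DIM = 100, 2           # placeholders; not specified in the excerpt

# MLP with one hidden layer of 96 neurons.
mlp_one_hidden = nn.Sequential(
    nn.Linear(IN_DIM, 96),
    nn.ReLU(),
    nn.Linear(96, OUT_DIM),
)

# MLP with two hidden layers of 390 neurons each.
mlp_two_hidden = nn.Sequential(
    nn.Linear(IN_DIM, 390),
    nn.ReLU(),
    nn.Linear(390, 390),
    nn.ReLU(),
    nn.Linear(390, OUT_DIM),
)
```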