Learning Recourse on Instance Environment to Enhance Prediction Accuracy

Authors: Lokesh Nagalapatti, Guntakanti Sai Koushik, Abir De, Sunita Sarawagi

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experiment with synthetic and real-world datasets to show the efficacy of our proposed approach.
Researcher Affiliation | Academia | Lokesh Nagalapatti, Guntakanti Sai Koushik, Abir De, Sunita Sarawagi; Department of Computer Science and Engineering, IIT Bombay
Pseudocode | Yes | Algorithm 1: GREEDYALGORITHM for training fθ; Algorithm 2: Train RECOURSENET; Algorithm 3: RECOURSENET Inference
Open Source Code | Yes | Code is included in the Supplementary Material, and datasets are provided at an anonymized URL.
Open Datasets | Yes | Shapenet consists of three-dimensional models of many kinds of objects that can be mapped into two-dimensional pixel maps under various environments [3]. The Speech Commands dataset consists of textual commands that can be converted to speech under different environments η defined by (pitch, speed, noise) sampled from B with |B| = 60. A third dataset consists of images of skin captured using smartphones, and the task is to classify among seven skin conditions (|Y| = 7); this dataset is taken from Kaggle.
Dataset Splits | No | The paper mentions training and test sets but does not explicitly specify a validation set or the details of the train/validation/test splits.
Hardware Specification | No | The paper's checklist states that "the total amount of compute and the type of resources used" are in the Appendix, but this information is not present in the main paper text provided.
Software Dependencies | No | The paper mentions the Adam optimizer, a Resnet18 model, and the Google text-to-speech library, but does not provide version numbers for any software dependencies.
Experiment Setup | Yes | We used the Adam optimizer with the default learning rate of 10^-3 to optimize all our objectives. The architecture for fθ is a Resnet18 model trained from scratch. We use budget b = 1000, but to avoid training the model iteratively b times, we select 10 instances into R at line 5 of Algorithm 1 per iteration. For gφ we also train a Resnet18 model from scratch. We obtain a 512-dimensional embedding for η as an average of the embeddings of its individual components, which are trained end-to-end. We concatenate the embedding of η with the last-layer embedding of x from Resnet18 to obtain the input embedding, which is then fed to a 3-layer neural network that predicts the recourse η.
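The batched selection trick quoted above (adding 10 instances per iteration instead of retraining b = 1000 times) can be sketched as follows. This is a minimal illustration, not the authors' code: `losses` is a hypothetical list of per-instance losses under the current classifier, and the retraining step of Algorithm 1 is only indicated by a comment.

```python
def greedy_batched_selection(losses, budget=1000, chunk=10):
    """Sketch of a batched variant of the greedy selection at line 5 of
    Algorithm 1: rather than adding one instance to the recourse set R per
    retraining round, add `chunk` highest-loss instances at a time until
    `budget` instances have been selected."""
    R = []
    remaining = set(range(len(losses)))
    while len(R) < budget and remaining:
        # pick the `chunk` instances with the highest current loss
        picked = sorted(remaining, key=lambda i: losses[i], reverse=True)[:chunk]
        R.extend(picked)
        remaining.difference_update(picked)
        # in the full algorithm, f_theta would be retrained here and
        # `losses` recomputed before the next round
    return R[:budget]
```

With chunk = 10 and b = 1000, this reduces the number of retraining rounds from 1000 to 100 while still filling R greedily by loss.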