Learning Optimal Features via Partial Invariance
Authors: Moulik Choraria, Ibtihal Ferwana, Ankur Mani, Lav R. Varshney
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Several experiments, conducted both in linear settings as well as with deep neural networks on tasks over both language and image data, allow us to verify our conclusions. |
| Researcher Affiliation | Academia | 1 University of Illinois at Urbana-Champaign, 2 University of Minnesota Twin Cities {moulikc2, iferwna2, varshney}@illinois.edu, amani@umn.edu |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | The code is available at https://github.com/IbtihalFerwana/pirm and other implementation details are deferred to Appendix. |
| Open Datasets | Yes | House Prices Dataset: https://www.kaggle.com/c/house-prices-advanced-regression-techniques |
| Dataset Splits | No | The paper states only that "In all our experiments we employ the train-domain validation strategy (Gulrajani and Lopez-Paz 2020) for hyper-parameter tuning"; explicit train/validation/test split proportions are not reported. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, or specific computing environments) used for running experiments were provided. |
| Software Dependencies | No | The paper mentions BERT and Adam optimizer but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | No | While the paper specifies a fixed weight and optimizer for linear regression, it omits key hyperparameter values (e.g., learning rate, batch size, number of epochs) for the deep neural networks and other general experimental settings in the main text, deferring "other implementation details" to the Appendix. |