Stable Learning via Sample Reweighting

Authors: Zheyan Shen, Peng Cui, Tong Zhang, Kun Kuang (pp. 5692-5699)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirical studies on both simulation and real datasets demonstrate the effectiveness of our method in terms of more stable performance across different distributed data."
Researcher Affiliation | Academia | Zheyan Shen (1), Peng Cui (1), Tong Zhang (2), Kun Kuang (1, 3); affiliations: 1 Tsinghua University, 2 The Hong Kong University of Science and Technology, 3 Zhejiang University
Pseudocode | Yes | Algorithm 1: Sample Reweighted Decorrelation Operator (SRDO) [see the first sketch after this table]
Open Source Code | Yes | "Due to the limited space, we just show a few settings, complete experiments and implementations could be found at https://github.com/Silver-Shen/Stable_Linear_Model_Learning"
Open Datasets | Yes | "In this experiment, we use a real world regression dataset (Kaggle) of house sales prices from King County, USA, which includes the houses sold between May 2014 and May 2015."
Dataset Splits | No | The paper mentions tuning 'all the parameters by cross validation', which implies a validation process, but it does not specify a fixed train/validation/test split (percentages or counts) or how cross-validation was applied for hyperparameter tuning across the experiments. For the real-world regression experiment it only states 'We train all the methods on the first period where built year [1900, 1919] with cross validation, and test them on all the six periods respectively', without giving a concrete validation split [see the second sketch after this table].
Hardware Specification | No | The paper does not provide any specific details regarding the hardware used to run the experiments, such as CPU or GPU models, or memory specifications.
Software Dependencies | No | The paper mentions various software and methods (e.g., OLS, Lasso, Elastic Net, ULasso, IILasso, logistic regression) but does not provide specific version numbers for any of these software dependencies or libraries.
Experiment Setup | No | The paper states 'The above methods have several hyper-parameters and we tune all the parameters by cross validation,' but it does not specify any concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or other detailed training configurations used in the experiments.
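
The Pseudocode row refers to the paper's Algorithm 1 (SRDO). As a rough illustration of the underlying idea, the following is a minimal sketch of sample reweighting for feature decorrelation via column-wise shuffling and density-ratio estimation; the function name, classifier choice, and normalization are assumptions for illustration, not the authors' released implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def srdo_weights(X, clf=None, random_state=0):
    """Estimate sample weights under which the reweighted features look
    as if drawn from a column-wise decorrelated distribution.
    Illustrative sketch only, not the authors' released code."""
    rng = np.random.default_rng(random_state)
    n, d = X.shape
    # Shuffle each column independently: this breaks cross-feature
    # dependence while keeping every marginal distribution intact.
    X_decorr = np.column_stack([rng.permutation(X[:, j]) for j in range(d)])
    # Label original samples 0 and decorrelated samples 1, then fit a
    # probabilistic classifier to estimate the density ratio
    # P_decorrelated(x) / P_original(x).
    Z = np.vstack([X, X_decorr])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    clf = clf if clf is not None else LogisticRegression(max_iter=1000)
    clf.fit(Z, y)
    p = clf.predict_proba(X)[:, 1]            # P(decorrelated | x)
    w = p / np.clip(1.0 - p, 1e-6, None)      # density ratio as weight
    return w * (n / w.sum())                  # normalize to mean 1

# Usage: plug the weights into any estimator that accepts sample_weight,
# e.g. a weighted least-squares regression.
# X, y = ...                                   # training features / targets
# model = LinearRegression()
# model.fit(X, y, sample_weight=srdo_weights(X))
```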
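
The Dataset Splits row describes training on the first built-year period with cross-validation and then testing on all six periods. A minimal sketch of that protocol is below; the estimator, parameter grid, fold count, and metric are assumptions, since the paper does not report them.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV

def tune_on_first_period_then_test(X_train, y_train, test_periods,
                                   param_grid=None, cv=5):
    """Tune hyper-parameters by cross-validation on the first period
    only, then report RMSE on each held-out period. The Lasso
    estimator, grid, fold count, and metric are illustrative choices."""
    param_grid = param_grid or {"alpha": [1e-3, 1e-2, 1e-1, 1.0]}
    search = GridSearchCV(Lasso(max_iter=10000), param_grid, cv=cv)
    search.fit(X_train, y_train)              # e.g. built year [1900, 1919]
    rmse_per_period = []
    for X_test, y_test in test_periods:       # one (X, y) pair per period
        err = search.predict(X_test) - y_test
        rmse_per_period.append(float(np.sqrt(np.mean(err ** 2))))
    return search.best_params_, rmse_per_period
```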