Kernelized Heterogeneous Risk Minimization: Integrating Latent Heterogeneity and Invariance Learning in Kernel Space

Authors: Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirically, experiments on both synthetic and real-world data validate the superiority of KerHRM in terms of good out-of-distribution generalization performance."
Researcher Affiliation | Academia | (1) Department of Computer Science & Technology, Tsinghua University, Beijing, China; (2) Department of Computer Science, National University of Singapore, Singapore; (3) School of Economics and Management, Tsinghua University, Beijing, China
Pseudocode | Yes | "Algorithm 1: Kernelized Heterogeneous Risk Minimization (KerHRM) Algorithm"
Open Source Code | Yes | "Our code is available at https://github.com/LJSthu/Kernelized-HRM."
Open Datasets | Yes | For the Colored MNIST dataset, the paper cites "[1] Martín Arjovsky et al., Invariant risk minimization, 2019". For the house sales prices dataset, it states the data are "(Kaggle) of house sales prices from King County, USA", with a footnote link: https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data
Dataset Splits | Yes | "In training, we set d = 5 and generate 2000 data points, where 50% points are from environment e1 with r1 = 0.9 and the other from environment e2 with r2..." For IRM: "we sample 1000 data from the two training environments respectively and select the hyper-parameters which maximize the minimum accuracy of two validation environments."
Hardware Specification | No | The paper describes the methods and experiments but does not report the hardware used (e.g., CPU/GPU models, memory, or cloud instance types).
Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., programming language, library, or framework versions).
Experiment Setup | Yes | "For all experiments, we use a two-layer MLP with 1024 hidden units..." For KerHRM, the cluster number is set to K = 2. For IRM: "we sample 1000 data from the two training environments respectively and select the hyper-parameters which maximize the minimum accuracy of two validation environments."
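The Dataset Splits row describes a two-environment synthetic split (d = 5, 2000 points, half from environment e1 with spurious-correlation strength r1 = 0.9). A minimal sketch of such a split, assuming r is the probability that a single spurious feature agrees with the label; the value of r2 is truncated in the report, so r2 = 0.1 below is a placeholder, and the generator is schematic rather than the paper's exact data-generating process:

```python
import numpy as np

def make_env(n, r, d=5, rng=None):
    """Schematic environment generator (not the paper's exact process):
    d stable features plus one spurious feature that agrees with the
    label with probability r."""
    if rng is None:
        rng = np.random.default_rng(0)
    x_stable = rng.normal(size=(n, d))
    y = (x_stable.sum(axis=1) > 0).astype(int)     # label from stable features
    agree = rng.random(n) < r                      # spurious feature matches y w.p. r
    x_spur = np.where(agree, y, 1 - y).astype(float)
    return np.hstack([x_stable, x_spur[:, None]]), y

rng = np.random.default_rng(0)
x1, y1 = make_env(1000, r=0.9, rng=rng)   # environment e1, r1 = 0.9
x2, y2 = make_env(1000, r=0.1, rng=rng)   # environment e2; r2 = 0.1 is an assumption
X = np.vstack([x1, x2])                   # pooled 2000-point training set
Y = np.concatenate([y1, y2])
```

Pooling the two environments yields the 2000-point training set; the spurious feature's reliability differs across environments, which is the kind of latent heterogeneity that KerHRM is designed to exploit.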
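The Experiment Setup row can likewise be sketched: a two-layer MLP backbone with 1024 hidden units, plus the worst-case model-selection rule quoted for IRM (pick the hyper-parameter that maximizes the minimum validation accuracy across environments). The activation function and initialization scheme below are assumptions; the report does not specify them:

```python
import numpy as np

def init_mlp(d_in, d_out, d_hidden=1024, seed=0):
    """Two-layer MLP parameters; 1024 hidden units as reported."""
    rng = np.random.default_rng(seed)
    return (rng.normal(scale=0.01, size=(d_in, d_hidden)), np.zeros(d_hidden),
            rng.normal(scale=0.01, size=(d_hidden, d_out)), np.zeros(d_out))

def mlp_forward(x, params):
    w1, b1, w2, b2 = params
    h = np.maximum(x @ w1 + b1, 0.0)   # hidden layer (ReLU is an assumption)
    return h @ w2 + b2                 # linear output logits

def select_by_min_val_acc(accs_by_hp):
    """The quoted IRM rule: choose the hyper-parameter whose minimum
    accuracy over the validation environments is largest."""
    return max(accs_by_hp, key=lambda hp: min(accs_by_hp[hp]))

params = init_mlp(d_in=5, d_out=2)
logits = mlp_forward(np.random.default_rng(1).normal(size=(8, 5)), params)
best = select_by_min_val_acc({0.1: (0.95, 0.60), 1.0: (0.85, 0.80)})  # -> 1.0
```

The selection rule prefers the second hyper-parameter here because its worst-environment accuracy (0.80) beats the first's (0.60), even though the first has the higher single-environment score.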