Learning from Biased Data: A Semi-Parametric Approach

Authors: Patrice Bertail, Stephan Clémençon, Yannick Guyonvarch, Nathan Noiry

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "4. Numerical Experiments: In this section, we present two numerical experiments which complement the previous theoretical analysis. We start with a simulated dataset where we design both the source and target distributions. We then turn to a more realistic framework: we use the Life Expectancy Dataset and create some distributional shifts on the observations."
Researcher Affiliation | Academia | "1. Université Paris-Nanterre, France; 2. Télécom Paris, France."
Pseudocode | Yes | "Figure 1. The Rw-ERM Algorithm"
Open Source Code | No | The paper does not explicitly state that the source code for the methodology is openly available, nor does it provide a link to it.
Open Datasets | No | "We use the Life Expectancy Dataset and only keep the Adult Mortality Rate (x1) and the Alcohol Consumption (x2) features in order to predict the Life Expectancy (y) output." The paper does not provide concrete access information (link, DOI, or formal citation with author/year) for this dataset. (A hedged loading sketch appears after this table.)
Dataset Splits | No | The paper mentions dividing the data into G1 (training/source) and G2 (test set), but does not explicitly describe a separate validation split with specific percentages or counts.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory) used for running the experiments.
Software Dependencies | No | "Popular implementations of ERM-like learning procedures such as scikit-learn (Pedregosa et al., 2018) support a weight option..." The version number for scikit-learn is not specified. (See the sample_weight sketch after this table.)
Experiment Setup | Yes | "To estimate $\alpha$, we implement a gradient descent algorithm to minimize $\Psi_{n_{\mathrm{obs}}}$. To avoid getting trapped in potential local minima, we rerun the descent algorithm $n_{\mathrm{boot}}$ times using a bootstrapping rationale presented in the Supplementary Material. Among the sequence $(\alpha^{(b)})_{b=1}^{n_{\mathrm{boot}}}$ thus constructed, we select $\hat{\alpha} = \arg\min_{\alpha \in (\alpha^{(b)})_{b=1}^{n_{\mathrm{boot}}}} \Psi_{n_{\mathrm{obs}}}(\alpha)$ as our final estimator. In the last step, we train several regression-type algorithms (OLS, SVR, RF) on $(Z_i)_{i=1}^{n_{\mathrm{obs}}}$ with weights $(g(Z_i, \hat{\alpha}))_{i=1}^{n_{\mathrm{obs}}}$. ... for the following choices of parameters: $n_{\mathrm{obs}} = 10{,}000$, $n_{\mathrm{test}} = 500$, $n_{\mathrm{rep}} = 100$ and $n_{\mathrm{boot}} = 100$. The SVR algorithm is run for three different values of the parameter $C$ (0.01, 0.1 and 1)." (A hedged sketch of this procedure appears after this table.)
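The "Open Datasets" row notes that the paper gives no access information for the Life Expectancy Dataset. For concreteness, here is a minimal loading sketch assuming the widely circulated WHO Life Expectancy CSV from Kaggle; the file name and column labels are assumptions about that copy, not details taken from the paper.

```python
import pandas as pd

# Hypothetical loading step: the file name and column labels below are
# assumptions about the Kaggle WHO copy of the dataset; the paper does not
# say which copy it uses or how the columns are spelled.
df = pd.read_csv("Life Expectancy Data.csv")
df.columns = df.columns.str.strip()                  # normalize possibly padded headers
df = df[["Adult Mortality", "Alcohol", "Life expectancy"]].dropna()
X = df[["Adult Mortality", "Alcohol"]].to_numpy()    # features x1, x2
y = df["Life expectancy"].to_numpy()                 # target y
```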
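The "Software Dependencies" row quotes the paper's observation that ERM-like implementations such as scikit-learn support a weight option. This refers to the `sample_weight` argument of `fit`; the toy data and weights below are placeholders standing in for the paper's estimated weights $g(Z_i, \hat{\alpha})$, not its actual setup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                    # placeholder features (x1, x2)
y = X @ np.array([1.5, -0.5]) + rng.normal(scale=0.1, size=1000)
w = rng.uniform(0.1, 2.0, size=1000)              # stand-in for the weights g(Z_i, alpha_hat)

# All three estimator families used in the paper (OLS, SVR, RF) accept
# per-sample weights through the sample_weight argument of fit().
for model in (LinearRegression(), SVR(C=0.1), RandomForestRegressor(n_estimators=100)):
    model.fit(X, y, sample_weight=w)
```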
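The "Experiment Setup" row outlines the full estimation pipeline: gradient descent on $\Psi_{n_{\mathrm{obs}}}$ with $n_{\mathrm{boot}}$ bootstrap restarts, selection of the best candidate, then weighted regression. Since no code is released, the following is only a sketch of that control flow under stated assumptions: the weight function `g`, the criterion `psi`, and the bootstrap step are hypothetical stand-ins for the objects defined in the paper and its Supplementary Material.

```python
import numpy as np
from sklearn.svm import SVR

def g(Z, alpha):
    """Hypothetical parametric weight function; the paper's g(z, alpha) differs."""
    return np.exp(Z @ alpha)

def psi(Z, alpha):
    """Hypothetical empirical criterion standing in for Psi_{n_obs}."""
    return (g(Z, alpha).mean() - 1.0) ** 2       # placeholder moment-matching objective

def estimate_alpha(Z, n_boot=100, n_steps=200, lr=0.01, eps=1e-5, seed=0):
    """Gradient descent on psi with n_boot bootstrap restarts; keep the best run."""
    rng = np.random.default_rng(seed)
    n, d = Z.shape
    candidates = []
    for _ in range(n_boot):
        Zb = Z[rng.integers(0, n, size=n)]       # bootstrap resample, in the spirit of the paper
        alpha = rng.normal(scale=0.1, size=d)    # random initialization for this restart
        for _ in range(n_steps):
            grad = np.array([                    # finite-difference gradient of psi
                (psi(Zb, alpha + eps * e) - psi(Zb, alpha - eps * e)) / (2 * eps)
                for e in np.eye(d)])
            alpha -= lr * grad
        candidates.append(alpha)
    # Final estimator: the candidate minimizing the criterion on the full sample.
    return min(candidates, key=lambda a: psi(Z, a))

# Last step of the pipeline: weighted ERM with the selected alpha_hat.
rng = np.random.default_rng(1)
Z = rng.normal(size=(1000, 2))
y = Z[:, 0] + rng.normal(scale=0.1, size=1000)
alpha_hat = estimate_alpha(Z, n_boot=10)          # the paper uses n_boot = 100; reduced here
SVR(C=0.1).fit(Z, y, sample_weight=g(Z, alpha_hat))
```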