Doubly Robust Covariate Shift Correction

Authors: Sashank Reddi, Barnabás Póczos, Alex Smola

Venue: AAAI 2015

Each reproducibility variable below is listed with the assessed result and the supporting LLM response (excerpts from the paper; "[...]" marks elided text).

Research Type: Experimental. LLM response: "We prove that this yields an efficient estimator and demonstrate good experimental performance. [...] Finally, we show good experimental performance on several UCI datasets."

Researcher Affiliation: Academia. LLM response: "Sashank J. Reddi, Machine Learning Department, Carnegie Mellon University, sjakkamr@cs.cmu.edu; Barnabás Póczos, Machine Learning Department, Carnegie Mellon University, bapoczos@cs.cmu.edu; Alex Smola, Machine Learning Department, Carnegie Mellon University, alex@smola.org"
Pseudocode: No. LLM response: "The paper describes the algorithm steps in textual form but does not provide a formal pseudocode block or algorithm listing."

Open Source Code: No. LLM response: "The paper does not provide a link to open-source code for the described methodology or state that code will be released."
Open Datasets: Yes. LLM response: "Real Data: For a more realistic comparison we apply our method to several UCI and benchmark datasets. To control the amount of bias we use PCA to obtain the leading principal component. The projections onto the first principal component are then used to construct a subsampling distribution q." The cited dataset sources are http://archive.ics.uci.edu/ml/datasets.html and http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/.
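The quoted biasing procedure (project the data onto the leading principal component, then subsample according to a distribution q built from the projections) can be sketched as below. The excerpt does not specify the exact form of q, so the standardization, the sigmoid link, and the target keep rate are assumptions for illustration only:

```python
import numpy as np
from sklearn.decomposition import PCA

def biased_subsample(X, y, keep_rate=0.5, seed=0):
    """Subsample (X, y) with keep-probabilities driven by the first
    principal component, inducing covariate shift between the subsample
    and the full data. The sigmoid link from projection to probability
    is an assumption; the paper only says the PC projections define q."""
    rng = np.random.default_rng(seed)
    z = PCA(n_components=1).fit_transform(X).ravel()
    z = (z - z.mean()) / z.std()          # standardize the projection
    q = 1.0 / (1.0 + np.exp(-z))          # assumed monotone link to q
    q = np.clip(q * keep_rate / q.mean(), 0.0, 1.0)  # match target rate
    keep = rng.random(len(X)) < q
    return X[keep], y[keep]
```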
Dataset Splits: Yes. LLM response: "The regularization parameters are chosen separately for each empirical estimator by cross validation."
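The report gives no detail on the splits beyond this cross-validation remark. As a minimal sketch of that tuning step for the ℓ2-penalized linear regression mentioned under Experiment Setup, assuming scikit-learn and a hypothetical grid of regularization strengths (the paper does not report the candidate values):

```python
import numpy as np
from sklearn.linear_model import RidgeCV

# Toy stand-in data; the paper's actual datasets and splits differ.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

# Hypothetical grid; RidgeCV selects one strength per estimator
# by 5-fold cross-validation, mirroring the quoted procedure.
alphas = np.logspace(-4, 2, 13)
model = RidgeCV(alphas=alphas, cv=5).fit(X, y)
print("selected alpha:", model.alpha_)
```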
Hardware Specification: No. LLM response: "The paper does not provide specific hardware details such as GPU/CPU models or memory used for experiments."

Software Dependencies: No. LLM response: "The paper does not provide specific software dependencies with version numbers."
Experiment Setup: Yes. LLM response: "We use linear regression with standard ℓ2 penalization. [...] The bandwidth of the kernel is chosen by cross-validation. [...] The regularization parameters are chosen separately for each empirical estimator by cross validation. [...] As explained earlier, we first train a regression tree on the unweighted dataset and then build a differential regression tree on the residual with restricted tree depth in order to train the doubly robust regression tree."
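The quoted two-stage tree procedure (an unweighted base fit followed by a depth-restricted differential fit on the residuals) can be sketched as follows. The importance weights w, the specific depths, and the use of sample_weight to carry the weights are assumptions standing in for details the excerpt does not spell out:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

class DoublyRobustTree:
    """Sketch of the quoted doubly robust regression tree: a base tree is
    fit on the unweighted data, then a depth-restricted 'differential'
    tree is fit to the residuals under covariate-shift importance
    weights w(x) ~ p_test(x) / p_train(x)."""

    def __init__(self, base_depth=6, diff_depth=2):  # depths are assumed
        self.base = DecisionTreeRegressor(max_depth=base_depth)
        self.diff = DecisionTreeRegressor(max_depth=diff_depth)

    def fit(self, X, y, w):
        # Step 1: ordinary, unweighted fit on the biased training sample.
        self.base.fit(X, y)
        # Step 2: importance-weighted fit of a shallow correction tree
        # to the residuals of the base fit.
        residual = y - self.base.predict(X)
        self.diff.fit(X, residual, sample_weight=w)
        return self

    def predict(self, X):
        # The doubly robust prediction is the base fit plus the correction.
        return self.base.predict(X) + self.diff.predict(X)
```

With uniform weights the correction stage reduces to one step of ordinary residual fitting, so a poorly estimated w degrades toward the unweighted fit rather than destroying it; this is the intuition behind the doubly robust construction.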