Correcting Covariate Shift with the Frank-Wolfe Algorithm

Authors: Junfeng Wen, Russell Greiner, Dale Schuurmans

IJCAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | An empirical study demonstrates the effectiveness and efficiency of the Frank-Wolfe algorithm for correcting covariate shift in practice: "In this section, we demonstrate the results of the proposed approach on both synthetic and some benchmark datasets."
Researcher Affiliation | Academia | Department of Computing Science, University of Alberta, Edmonton, AB, Canada
Pseudocode | No | The paper describes algorithmic steps using mathematical equations (e.g., equations (1) and (2)) but does not provide a formally structured pseudocode or algorithm block.
Open Source Code | No | The paper does not provide any explicit statement about releasing source code for the described methodology, nor does it include a link to a code repository.
Open Datasets | Yes | "Next we applied the reweighting methods on some benchmark datasets from the libsvm and delve libraries to show their performance in correcting covariate shift on reweighted learning." (libsvm: www.csie.ntu.edu.tw/~cjlin/libsvm; delve: www.cs.toronto.edu/delve/data/datasets.html)
Dataset Splits | No | The paper mentions drawing "n = 5000 training points" and "m = 2000 test points" but does not specify a validation set or a train/validation/test split for reproducibility. It only describes the process of selecting training and test sets.
Hardware Specification | No | The paper does not specify any hardware details such as CPU models, GPU models, or memory specifications used for running the experiments.
Software Dependencies | No | The paper mentions using the libsvm and delve libraries but does not provide specific version numbers for these or any other software dependencies needed for replication.
Experiment Setup | Yes | "In the experiments, a Gaussian kernel is applied to KMM where the kernel width is chosen to be the median of the pairwise distances over the training set. For KLIEP, the width is chosen according to the criterion of Sugiyama et al. [2008]. For KMM(FW) we use 1/(t+1) as the step size, while for KLIEP(FW) we use the line search step size, since these choices are faster in practice."
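The setup quoted above combines two standard ingredients: the median heuristic for the Gaussian kernel width and a Frank-Wolfe loop with step size 1/(t+1) over the probability simplex. The sketch below is a minimal, simplified illustration of that combination for a KMM-style reweighting objective (minimizing the squared MMD between weighted training points and the test sample in feature space); it is not the authors' implementation, and the function names and the particular objective form are assumptions for illustration.

```python
import numpy as np

def median_kernel_width(X):
    """Median heuristic: kernel width = median of pairwise distances over X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    dists = np.sqrt(d2[np.triu_indices_from(d2, k=1)])
    return np.median(dists)

def gaussian_kernel(X, Y, sigma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def frank_wolfe_kmm(Xtr, Xte, iters=200):
    """Frank-Wolfe over the simplex for a simplified KMM-style objective
    0.5 * a' K a - kappa' a, whose minimizer matches the test distribution's
    mean embedding. A hypothetical sketch, not the paper's exact algorithm."""
    n = len(Xtr)
    sigma = median_kernel_width(Xtr)            # median heuristic, as in the paper
    K = gaussian_kernel(Xtr, Xtr, sigma)
    kappa = gaussian_kernel(Xtr, Xte, sigma).mean(axis=1)
    a = np.full(n, 1.0 / n)                     # start at uniform weights
    for t in range(1, iters + 1):
        grad = K @ a - kappa                    # gradient of the quadratic objective
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0                # linear minimizer over the simplex: a vertex
        gamma = 1.0 / (t + 1)                   # the KMM(FW) step size quoted above
        a = (1.0 - gamma) * a + gamma * s       # convex combination stays on the simplex
    return a * n                                # rescale so weights average to 1
```

Because each iterate is a convex combination of simplex points, the weights remain nonnegative and sum to n after rescaling without any projection step, which is what makes Frank-Wolfe attractive here. A line-search variant (as used for KLIEP(FW)) would replace the fixed 1/(t+1) schedule with an exact minimization of the objective along the segment from `a` to `s`.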