Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Towards a Unified Analysis of Kernel-based Methods Under Covariate Shift
Authors: Xingdong Feng, Xin He, Caixing Wang, Chao Wang, Jingnan Zhang
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive numerical studies on synthetic and real examples confirm our theoretical findings and further illustrate the effectiveness of our proposed method. |
| Researcher Affiliation | Academia | Xingdong Feng1, Xin He1, Caixing Wang1, Chao Wang1, Jingnan Zhang2. 1School of Statistics and Management, Shanghai University of Finance and Economics; 2International Institute of Finance, School of Management, University of Science and Technology of China |
| Pseudocode | No | The paper describes mathematical formulations and procedures in prose, but it does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code for the methodology or links to a code repository. |
| Open Datasets | Yes | We consider the binary classification problem on the Raisin dataset, which is available in https://archive.ics.uci.edu/ml/datasets.php. |
| Dataset Splits | No | The paper states 'the data are first randomly split into source and target datasets' and mentions 'We use importance weighted cross validation (IWCV) [...] to tune the truncation parameter γn.' While it describes cross-validation for tuning, it does not provide specific training/validation/test dataset splits (e.g., percentages or exact counts) for the overall experiment. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using specific procedures like 'Kullback-Leibler importance estimation procedure (KLIEP)' and 'importance weighted cross validation (IWCV)', but it does not specify version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | We compare the averaged mean square error (MSE) and empirical excess risk of the unweighted estimator and weighted estimator, either with true or estimated weights, across different choices of regularization parameter λ, source sample size n, and target sample size m. We set the tuning parameter Cλ = (nλ)⁻¹. For the TIRW estimator, we use importance weighted cross validation (IWCV) (Sugiyama et al., 2007a) to tune the truncation parameter γn. |
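The setup above tunes hyperparameters with importance weighted cross validation (IWCV), which re-weights each validation loss by the density ratio w(x) = p_target(x) / p_source(x) so that model selection reflects target-domain risk. A minimal sketch of this idea, assuming a toy 1-D regression with known Gaussian source/target densities and a polynomial ridge model standing in for the paper's kernel estimator (all names and the candidate λ grid are illustrative, not the authors' actual configuration):

```python
# Hypothetical IWCV sketch under covariate shift: pick the regularization
# parameter lam whose importance-weighted K-fold validation loss is smallest.
import numpy as np

rng = np.random.default_rng(0)

# Source covariates ~ N(0, 1); target covariates assumed ~ N(1, 1).
X_src = rng.normal(0.0, 1.0, size=200)
y_src = np.sin(X_src) + 0.1 * rng.normal(size=X_src.shape)

def density_ratio(x, tgt_mean=1.0, tgt_std=1.0):
    """True importance weight w(x) = p_target(x) / p_source(x) for Gaussians."""
    p_tgt = np.exp(-0.5 * ((x - tgt_mean) / tgt_std) ** 2)
    p_src = np.exp(-0.5 * x ** 2)
    return p_tgt / p_src

def ridge_fit_predict(x_tr, y_tr, x_te, lam):
    """Cubic-polynomial ridge regression as a stand-in for the kernel method."""
    Phi_tr, Phi_te = np.vander(x_tr, 4), np.vander(x_te, 4)
    beta = np.linalg.solve(Phi_tr.T @ Phi_tr + lam * np.eye(4), Phi_tr.T @ y_tr)
    return Phi_te @ beta

def iwcv_score(lam, k=5):
    """K-fold validation MSE, each fold's loss re-weighted by w(x)."""
    folds = np.array_split(rng.permutation(len(X_src)), k)
    losses = []
    for fold in folds:
        mask = np.ones(len(X_src), dtype=bool)
        mask[fold] = False
        pred = ridge_fit_predict(X_src[mask], y_src[mask], X_src[fold], lam)
        w = density_ratio(X_src[fold])
        losses.append(np.mean(w * (pred - y_src[fold]) ** 2))
    return float(np.mean(losses))

lams = [1e-3, 1e-2, 1e-1, 1.0]
best_lam = min(lams, key=iwcv_score)  # λ minimizing the weighted CV loss
```

In practice the density ratio is unknown and must itself be estimated, e.g. with KLIEP as the paper does; the sketch uses the true Gaussian ratio only to keep the example self-contained.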