Adapting to Continuous Covariate Shift via Online Density Ratio Estimation
Authors: Yu-Jie Zhang, Zhen-Yu Zhang, Peng Zhao, Masashi Sugiyama
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we conduct experiments to evaluate our approach, and the empirical results validate the theoretical findings. |
| Researcher Affiliation | Academia | 1 The University of Tokyo, Chiba, Japan; 2 RIKEN AIP, Tokyo, Japan; 3 National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China |
| Pseudocode | Yes | Algorithm 1 Base-learner Ei, Algorithm 2 Meta-learner |
| Open Source Code | No | The paper does not contain any statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | Diabetes (Dia) is a tabular UCI dataset. Breast (Bre) is a tabular UCI dataset. MNIST-SVHN (M-S)... CIFAR10-CINIC10 (C-C)... Yearbook [53]: This dataset contains 37,921 frontal-facing American high school yearbook photos from 1930 to 2013 [54]. |
| Dataset Splits | No | The paper does not explicitly provide details about training/validation/test splits with percentages or absolute counts. It mentions evaluating on an "unlabeled test stream" but not validation sets. |
| Hardware Specification | Yes | We run experiments with two Xeon Gold 6248R processors (24 cores, 3.0GHz base, 4.0GHz boost), eight Tesla V100S GPUs (32GB video memory each), 768GB RAM, all managed by the Ubuntu 20.04 operating system. |
| Software Dependencies | No | The paper mentions 'Ubuntu 20.04 operating system' but does not list specific versions for other key software components like programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow, scikit-learn). |
| Experiment Setup | Yes | For parameterization in the Accous implementation, we set R by directly calculating the data norm and set S = d/2 for all experiments. ... For the MNIST-SVHN, CIFAR10-CINIC10, and Yearbook datasets, the deep model is used for density ratio estimation and predictive model training. The deep models consist of a backbone and a linear layer. In all experiments, only the linear layer of the neural network is updated, while the backbone parameters trained with offline data remain fixed. ... The backbone is a pre-trained ResNet34 from torchvision with its weights initialized by training on ImageNet. |
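The frozen-backbone setup quoted in the last row can be sketched in a few lines of PyTorch. The snippet below is an illustrative assumption, not the authors' released code: it loads an ImageNet-pretrained ResNet34 from torchvision, freezes every backbone parameter, and attaches a single trainable linear layer; the class count and learning rate are placeholders.

```python
# Minimal sketch (assumed, not the paper's code) of the frozen-backbone setup:
# an ImageNet-pretrained torchvision ResNet34 with only a linear head trained.
import torch
import torch.nn as nn
from torchvision import models


def build_model(num_classes: int) -> nn.Module:
    backbone = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
    feature_dim = backbone.fc.in_features   # 512 for ResNet34
    backbone.fc = nn.Identity()             # expose the 512-d features directly

    # Freeze the backbone; only the linear head will receive gradients.
    for p in backbone.parameters():
        p.requires_grad = False

    head = nn.Linear(feature_dim, num_classes)  # the only trainable component
    return nn.Sequential(backbone, head)


# Placeholder class count and learning rate; only trainable parameters are optimized.
model = build_model(num_classes=10)
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-2
)
```

Passing only the parameters with `requires_grad=True` to the optimizer mirrors the paper's statement that the offline-trained backbone weights remain fixed while the linear layer is updated online.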