Distribution-Informed Neural Networks for Domain Adaptation Regression

Authors: Jun Wu, Jingrui He, Sheng Wang, Kaiyu Guan, Elizabeth Ainsworth

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The efficacy of our framework is also empirically verified on a variety of domain adaptation regression benchmarks." "We experimentally investigate the performance of our DINO framework on a variety of domain adaptation regression benchmarks, and show its effectiveness over state-of-the-art baselines."
Researcher Affiliation | Academia | Jun Wu, Jingrui He, Sheng Wang, Kaiyu Guan, Elizabeth Ainsworth; University of Illinois Urbana-Champaign; {junwu3,jingrui,sheng12,kaiyug,ainswort}@illinois.edu
Pseudocode | Yes | "As illustrated in Algorithm 1, we use all the training source (target) examples as the basis source (target) examples x_r, r = 1, ..., n_r" (Section 4.2). Appendix A.1 contains "Algorithm 1: DINO-INIT Training and Prediction".
Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See the supplemental material.
Open Datasets | Yes | "Following [10], we use two image data sets: dSprites [40] and MPI3D [23]. In addition, we also use a plant phenotyping data set" (Section 5). These datasets are well-known and cited, implying public availability.
Dataset Splits | No | The paper describes the use of source and target labeled examples for training and refers to testing on unlabeled target examples. However, it does not explicitly define a separate validation split (percentages or counts) for hyperparameter tuning or early stopping during training.
Hardware Specification | Yes | Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Appendix A.11. "All the experiments are implemented in PyTorch with an NVIDIA GeForce RTX 3090 GPU" (Appendix A.11).
Software Dependencies | No | The paper mentions "implemented in PyTorch" (Appendix A.11) and states that "The induced NNGP and neural tangent kernels can be estimated using the Neural Tangents package [41]" (Section 5), but it does not specify version numbers for PyTorch or Neural Tangents. A hedged sketch of such a kernel-estimation call is given below the table.
Experiment Setup | Yes | "In the experiments, our algorithms are implemented using an L-layer (L = 6) fully-connected neural network with ReLU (see Appendix A.11 for more details). In addition, we set = 0.5 and µ = 0.1 for DINO-TRAIN" (Section 5, Implementations). "The neural network structure is a 6-layer MLP with ReLU activation functions, and the width for each hidden layer is 1024. For training DINO-TRAIN, we use Adam optimizer with learning rate 0.001 and batch size 64" (Appendix A.11). A hedged PyTorch sketch of this backbone and optimizer setting follows the kernel example below.
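
The kernel estimation referenced in the Software Dependencies row can be illustrated with a minimal Neural Tangents sketch. This is not the authors' code: the width-1024 ReLU architecture simply mirrors the backbone reported in Appendix A.11, and the input data and dimensions are placeholders.

```python
# Minimal sketch of NNGP/NTK kernel estimation with the Neural Tangents package.
# The 1024-wide ReLU layers mirror Appendix A.11; inputs are random placeholders,
# not the paper's benchmark data.
from jax import random
from neural_tangents import stax

init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Dense(1024), stax.Relu(),
    stax.Dense(1024), stax.Relu(),
    stax.Dense(1),
)

key = random.PRNGKey(0)
x_source = random.normal(key, (8, 32))   # placeholder source examples
x_target = random.normal(key, (5, 32))   # placeholder target examples

# Closed-form infinite-width kernels between source and target examples.
kernels = kernel_fn(x_source, x_target, ("nngp", "ntk"))
print(kernels.nngp.shape, kernels.ntk.shape)  # (8, 5) (8, 5)
```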
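For the Experiment Setup row, the sketch below shows one way the reported configuration could look in PyTorch: a 6-layer fully-connected ReLU network with hidden width 1024 trained with Adam at learning rate 0.001. The helper `make_mlp` and the input/output dimensions are illustrative assumptions, and the DINO-TRAIN objective itself is not reproduced.

```python
# Hedged sketch of the reported setup (Appendix A.11): 6-layer MLP, ReLU, width 1024,
# Adam with lr 0.001, batch size 64. Dimensions are placeholders; the DINO-TRAIN loss
# from the paper is not implemented here.
import torch
import torch.nn as nn

def make_mlp(in_dim: int, out_dim: int = 1, width: int = 1024, depth: int = 6) -> nn.Sequential:
    layers, d = [], in_dim
    for _ in range(depth - 1):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, out_dim))  # regression head
    return nn.Sequential(*layers)

model = make_mlp(in_dim=128)                               # placeholder input dimension
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, lr 0.001 (Appendix A.11)
batch_size = 64                                            # batch size 64 (Appendix A.11)
```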