The balancing principle for parameter choice in distance-regularized domain adaptation

Authors: Werner Zellinger, Natalia Shepeleva, Marius-Constantin Dinu, Hamid Eghbal-zadeh, Hoan Duc Nguyen, Bernhard Nessler, Sergei Pereverzyev, Bernhard A. Moser

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically investigate the performance of our approach based on two target error bounds, two parameter selection methods, three datasets and different domain adaptation methods.
Researcher Affiliation | Collaboration | (1) Software Competence Center Hagenberg GmbH; (2) Institute for Machine Learning, Johannes Kepler University Linz; (3) Dynatrace Research; (4) Institute of Computational Perception, Johannes Kepler University Linz; (5) LIT AI Lab, Johannes Kepler University Linz; (6) Johann Radon Institute for Computational and Applied Mathematics, Austrian Academy of Sciences
Pseudocode | Yes | Algorithm 1: Balancing principle for domain adaptation (BPDA); a generic sketch of the underlying balancing principle is given after the table.
Open Source Code | Yes | The source code can be found at https://github.com/Xpitfire/bpda
Open Datasets | Yes | We also use the Amazon Reviews dataset [53]. ... Our third dataset is the DomainNet 2019 dataset ... [56].
Dataset Splits | No | We follow [23] and use held-out validation, i.e. we hold out a part of the training data as validation set, and we compute the importance weights based on this validation set. The paper states that part of the training data is held out for validation, but it does not specify the proportion or number of samples in this split (see the importance-weighting sketch after the table).
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types) used for running the experiments.
Software Dependencies | No | The paper mentions software components like 'ResNet-18' and general methods, but does not provide specific version numbers for programming languages, libraries, or frameworks (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | No | The details of all neural network architectures used, as well as the training strategy and hyperparameters are provided in the supplementary material. The main text therefore defers the full experimental setup to the supplement rather than specifying it directly.
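For readers unfamiliar with the method named in the Pseudocode row, the following is a minimal sketch of the classical Lepskii-type balancing principle on which BPDA builds. It is not the paper's Algorithm 1: the candidate solutions, the distance used to compare them, and the names `noise_levels` and `kappa` are illustrative assumptions; in the paper, the candidates are domain adaptation models trained with different weights on a distribution-distance regularizer.

```python
import numpy as np

def balancing_principle(estimates, noise_levels, kappa=4.0):
    """Lepskii-type balancing principle (generic sketch, not BPDA itself).

    estimates:    candidate solutions u_1, ..., u_n (numpy arrays), ordered so
                  that the unknown approximation error decreases with the index
                  while the noise/stability bound increases.
    noise_levels: sigma_1 <= ... <= sigma_n, known or estimated bounds on how
                  noise propagates into each candidate.
    kappa:        balancing constant; 4 is the classical Lepskii choice.

    Selects the largest index i such that
        ||u_i - u_j|| <= kappa * sigma_j  for all j < i,
    i.e. the least-regularized candidate that is still indistinguishable,
    up to noise, from every more strongly regularized one.
    """
    selected = 0
    for i in range(len(estimates)):
        if all(np.linalg.norm(estimates[i] - estimates[j]) <= kappa * noise_levels[j]
               for j in range(i)):
            selected = i
        else:
            break  # once the condition fails, larger indices are rejected
    return selected
```

The point of the rule is that it needs no labeled target data: it trades the unknown approximation error against a known noise bound purely by comparing candidates to each other. How BPDA instantiates the comparison distance and the bounds is specified in the paper's Algorithm 1.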
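The Dataset Splits row refers to importance-weighted validation in the style of [23]. As a rough illustration of the idea, and not the paper's exact protocol, the target-domain risk can be estimated from held-out source validation samples by reweighting per-sample losses with density ratios; the function name and the way the weights are obtained here are assumptions.

```python
import numpy as np

def importance_weighted_risk(val_losses, importance_weights):
    """Estimate target risk from held-out *source* validation samples (sketch).

    val_losses:         per-sample losses on the held-out source validation set.
    importance_weights: estimated density ratios w(x) ~ p_target(x) / p_source(x)
                        for the same samples, e.g. from a domain classifier.

    Returns the self-normalized importance-weighted average loss, a standard
    estimator of target-domain risk under covariate shift.
    """
    val_losses = np.asarray(val_losses, dtype=float)
    importance_weights = np.asarray(importance_weights, dtype=float)
    return float(np.sum(importance_weights * val_losses) / np.sum(importance_weights))
```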