Discrepancy-Based Active Learning for Domain Adaptation

Authors: Antoine de Mathelin, François Deheeger, Mathilde Mougeot, Nicolas Vayatis

ICLR 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our numerical experiments show that the proposed algorithm is competitive against other state-of-the-art active learning techniques in the context of domain adaptation, in particular on large data sets of around one hundred thousand images.
Researcher Affiliation | Collaboration | Antoine de Mathelin (1, 2), François Deheeger (1), Mathilde Mougeot (3, 2), Nicolas Vayatis (2); (1) Michelin, (2) Centre Borelli, Université Paris-Saclay, CNRS, ENS Paris-Saclay, (3) ENSIIE
Pseudocode | Yes | Algorithm 1: Accelerated K-medoids; Algorithm 2: K-Medoids Greedy; Algorithm 3: Branch & Bound Medoid (B&B). (A hedged sketch of a greedy K-medoids selection follows this table.)
Open Source Code | Yes | The source code is provided on GitHub: https://github.com/antoinedemathelin/dbal
Open Datasets | Yes | We choose Superconductivity (Hamidieh, 2018; Dua & Graff, 2017); the Office data set (Saenko et al., 2010); and a synthetic digits data set, SYNTH, which is used to learn a classification task for a data set of real digit pictures, SVHN (Street View House Numbers) (Netzer et al., 2011).
Dataset Splits | No | The paper states that 'fine-tuning of the optimization hyper-parameters (epochs, batch sizes...) is performed using only source labeled data.' This implies a validation process, but the paper does not specify how the data was split into distinct training, validation, and test sets with specific percentages or counts.
Hardware Specification | Yes | The experiments have been run on a 2.7 GHz, 16 GB RAM computer.
Software Dependencies | No | The paper mentions Python 3.8, but it does not give version numbers for other key libraries or tools such as PyTorch, scikit-learn (cited, but no version stated for its use in this paper), ADAPT, or the Adam optimizer.
Experiment Setup | Yes | We use a learning rate of 0.001, 100 epochs, a batch size of 128, and the mean squared error as loss function. (A hedged sketch of this training configuration also follows the table.)
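
The pseudocode row above lists a "K-Medoids Greedy" procedure (Algorithm 2). The following is a minimal NumPy sketch of a generic greedy K-medoids selection, given only to illustrate the kind of step such an algorithm performs; it is not the authors' exact Algorithm 2, and the function name and the brute-force pairwise-distance computation are assumptions made for brevity.

```python
# Minimal sketch of greedy K-medoids selection (generic heuristic,
# NOT the paper's exact Algorithm 2; names and details are illustrative).
import numpy as np

def greedy_k_medoids(X, k):
    """Greedily pick k medoid indices from X of shape (n_samples, n_features).

    At each step, add the candidate that most reduces the total distance
    from every sample to its nearest already-selected medoid.
    """
    n = X.shape[0]
    # Full pairwise Euclidean distances: fine for small n, too costly at the
    # ~100k-sample scale mentioned in the paper, hence its accelerated variants.
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    closest = np.full(n, np.inf)   # distance to nearest selected medoid
    medoids = []
    for _ in range(k):
        # Total cost if candidate j were added: sum_i min(closest[i], d(j, i))
        costs = np.minimum(closest[None, :], dists).sum(axis=1)
        costs[medoids] = np.inf    # do not reselect an existing medoid
        j = int(np.argmin(costs))
        medoids.append(j)
        closest = np.minimum(closest, dists[j])
    return medoids

# Toy usage
X = np.random.default_rng(0).normal(size=(200, 5))
print(greedy_k_medoids(X, k=10))
```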
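
The experiment-setup row quotes a learning rate of 0.001, 100 epochs, a batch size of 128, an MSE loss, and (per the dependencies row) the Adam optimizer. Below is a minimal Keras sketch of that optimization configuration; the layer sizes, the framework choice, and the helper name `build_regressor` are illustrative assumptions, not the model described in the paper.

```python
# Hedged sketch of the quoted optimization setup (lr=0.001, 100 epochs,
# batch size 128, MSE loss, Adam). The architecture below is a placeholder,
# not the network used in the paper.
import tensorflow as tf

def build_regressor(input_dim):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(100, activation="relu"),  # placeholder width
        tf.keras.layers.Dense(100, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
        loss="mse",
    )
    return model

# Usage with placeholder arrays X_train, y_train:
# model = build_regressor(input_dim=X_train.shape[1])
# model.fit(X_train, y_train, epochs=100, batch_size=128)
```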