Gradual Domain Adaptation without Indexed Intermediate Domains

Authors: Hong-You Chen, Wei-Lun Chao

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validated IDOL on two data sets studied in [35], including Rotated MNIST [36] and Portraits over years [14]. IDOL can successfully discover the domain sequence that leads to comparable GDA performance to using the pre-defined sequence (i.e., by side information).
Researcher Affiliation | Academia | Hong-You Chen, The Ohio State University, USA (chen.9301@osu.edu); Wei-Lun Chao, The Ohio State University, USA (chao.209@osu.edu)
Pseudocode | Yes | Meta-reweighting for Equation 8 can be implemented via the following six steps, repeated for multiple iterations (a PyTorch sketch follows the table):
1. Detach: $\bar{\theta} \leftarrow \theta_m$
2. Forward: $\theta(q) \leftarrow \bar{\theta} - \frac{\eta_\theta}{|\mathcal{U}_{\setminus m}|} \sum_{i \in \mathcal{U}_{\setminus m}} q_i \nabla_{\bar{\theta}}\, \ell\big(f(x_i; \bar{\theta}),\ \mathrm{sharpen}(f(x_i; \theta_m))\big)$
3. Detach: $\bar{\theta} \leftarrow \theta(q)$
4. Backward: $\theta(q) \leftarrow \theta(q) - \frac{\eta_\theta}{|\mathcal{U}_m|} \sum_{j \in \mathcal{U}_m} \nabla_{\theta(q)}\, \ell\big(f(x_j; \theta(q)),\ \mathrm{sharpen}(f(x_j; \bar{\theta}))\big)$
5. Update: $q \leftarrow q - \frac{\eta_q}{|\mathcal{U}_m|} \sum_{j \in \mathcal{U}_m} \nabla_q\, \ell\big(f(x_j; \theta(q)),\ \mathrm{sharpen}(f(x_j; \theta_m))\big)$
6. Update: $q_i \leftarrow \max\{0, q_i\}$
Open Source Code | Yes | Codes are available at https://github.com/hongyouc/IDOL.
Open Datasets | Yes | We validated IDOL on two data sets studied in [35], including Rotated MNIST [36] and Portraits over years [14].
Dataset Splits | Yes | For both datasets, each domain contains 2,000 images, and 1,000 images are reserved for both the source and target domains for validation. (A minimal split sketch follows the table.)
Hardware Specification | No | No specific hardware details (such as GPU/CPU models or compute cluster types) used for running the experiments are mentioned, beyond a general acknowledgment of computational resources provided by the Ohio Supercomputer Center.
Software Dependencies | No | The paper mentions the 'Adam optimizer [32]' but does not provide specific version numbers for software dependencies such as programming languages or libraries.
Experiment Setup | Yes | We follow the setup in [35]: each model is a convolutional neural network trained for 20 epochs for each domain consecutively (including training on the source data), using the Adam optimizer [32] with a learning rate of 0.001, batch size 32, and weight decay 0.02. We use this optimizer as the default if not specified. Hyper-parameters of IDOL include K = 2M rounds for progressive training and 30 epochs of refinement per step (with mini-batch size 128), where M = 19 for the Rotated MNIST and M = 7 for the Portraits. (A configuration sketch follows the table.)
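
The six meta-reweighting steps quoted in the Pseudocode row reduce to one differentiable inner update plus a gradient step on the sample weights q. Below is a minimal PyTorch (>= 2.0) sketch of that loop, not the authors' released implementation: the helper names (sharpen, meta_reweight_round), the batches x_rest (from the remaining unlabeled pool) and x_m (from the held-out candidate domain), the sharpening temperature, and the step sizes are assumptions made for illustration, and steps 3-5 are collapsed into a single meta-gradient computation for q.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call  # requires PyTorch >= 2.0


def sharpen(probs: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Temperature sharpening of soft predictions (assumed form and temperature)."""
    probs = probs ** (1.0 / temperature)
    return probs / probs.sum(dim=1, keepdim=True)


def meta_reweight_round(model, q, x_rest, x_m, eta_theta=1e-3, eta_q=1e-2):
    """One meta-reweighting round; q holds one weight per sample in x_rest."""
    names = [n for n, _ in model.named_parameters()]

    # Pseudo-labels produced by the current model (theta_m), fixed within the round.
    with torch.no_grad():
        tgt_rest = sharpen(F.softmax(model(x_rest), dim=1))
        tgt_m = sharpen(F.softmax(model(x_m), dim=1))

    # Step 1 (Detach): theta_bar <- theta_m, a copy with no gradient history.
    theta_bar = {n: p.detach().clone().requires_grad_(True)
                 for n, p in model.named_parameters()}

    # Step 2 (Forward): pseudo-update theta(q) with the q-weighted loss on the
    # remaining pool; create_graph=True keeps theta(q) differentiable w.r.t. q.
    logits = functional_call(model, theta_bar, (x_rest,))
    per_sample = F.cross_entropy(logits, tgt_rest, reduction="none")
    loss_fwd = (q * per_sample).sum() / x_rest.size(0)
    grads = torch.autograd.grad(loss_fwd, list(theta_bar.values()), create_graph=True)
    theta_q = {n: theta_bar[n] - eta_theta * g for n, g in zip(names, grads)}

    # Steps 3-5 (Detach / Backward / Update), collapsed: evaluate theta(q) on the
    # held-out candidate domain and backpropagate through the step-2 update to
    # obtain the gradient of the meta loss with respect to q.
    logits_m = functional_call(model, theta_q, (x_m,))
    loss_meta = F.cross_entropy(logits_m, tgt_m)
    (grad_q,) = torch.autograd.grad(loss_meta, q)
    with torch.no_grad():
        q -= eta_q * grad_q
        q.clamp_(min=0.0)  # Step 6 (Update): keep the sample weights non-negative.
    return q
```

In use, q would be initialized as a non-negative leaf tensor, e.g. q = torch.ones(x_rest.size(0), requires_grad=True), and the round repeated for the stated number of iterations.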
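The Dataset Splits row quotes a per-domain pool of 2,000 images with 1,000 held out from the source and target domains for validation. A minimal sketch of such a split, assuming map-style PyTorch datasets and a hypothetical split_domain helper that is not part of the released code:

```python
import torch
from torch.utils.data import Dataset, Subset


def split_domain(domain: Dataset, held_out: int = 1000, seed: int = 0):
    """Split one 2,000-image domain into (train, validation) subsets."""
    generator = torch.Generator().manual_seed(seed)
    order = torch.randperm(len(domain), generator=generator).tolist()
    return Subset(domain, order[held_out:]), Subset(domain, order[:held_out])
```

Per the quoted setup, only the source and target domains are split this way; the intermediate domains are used as unlabeled data without a validation split.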
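Finally, the quoted Experiment Setup values can be collected into a small configuration. The constants below only restate the quoted numbers; the function names and dictionary keys are illustrative assumptions rather than the authors' code:

```python
import torch

# Per-domain self-training (quoted values).
EPOCHS_PER_DOMAIN = 20
BATCH_SIZE = 32
LEARNING_RATE = 1e-3
WEIGHT_DECAY = 0.02

# IDOL-specific refinement (quoted values).
REFINE_EPOCHS_PER_STEP = 30
REFINE_BATCH_SIZE = 128
NUM_INTERMEDIATE_DOMAINS = {"rotated_mnist": 19, "portraits": 7}  # M per dataset


def num_progressive_rounds(num_intermediate_domains: int) -> int:
    """K = 2M rounds of progressive training."""
    return 2 * num_intermediate_domains


def make_optimizer(model: torch.nn.Module) -> torch.optim.Adam:
    """Adam optimizer with the quoted learning rate and weight decay."""
    return torch.optim.Adam(model.parameters(), lr=LEARNING_RATE,
                            weight_decay=WEIGHT_DECAY)
```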