Lifelong Domain Adaptation via Consolidated Internal Distribution

Authors: Mohammad Rostami

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We validate our method using standard UDA benchmarks. Our implemented code is accessible as an Appendix. For comparison purposes, we run our algorithm in the learning setting of the existing UDA methods with one source and one target domain. For fair comparison against these works, we have followed the evaluation protocols that are used by most of the recent classic UDA papers. We use four existing UDA benchmark datasets, adapt them to build sequential UDA tasks, and validate our method on these classic datasets. Learning curves for the above eight sequential UDA tasks are visualized in Figure 2. In these learning curves, the model has been trained for 100 epochs, i.e., a notion of time, for each task, and then we have moved forward to learn the subsequent task. We have stored 10 samples per class per domain in the memory buffer for experience replay." (A minimal sketch of such a replay buffer follows the table.)
Researcher Affiliation | Academia | "Mohammad Rostami, USC Information Sciences Institute, Los Angeles, CA 90007, rostamim@usc.edu"
Pseudocode | Yes | "Algorithm 1: LDAuCID (λ, τ, Nb)" (A hedged skeleton of the loop this signature suggests follows the table.)
Open Source Code | Yes | "Our implemented code is accessible as an Appendix."
Open Datasets | Yes | "We use four existing UDA benchmark datasets... Digit recognition tasks: the common MNIST (M), the USPS (U), and the SVHN (S) datasets... ImageCLEF-DA Dataset: this dataset consists of the 12 shared image classes between the Caltech256 (C), the ILSVRC 2012 (I), and the Pascal VOC 2012 (P) visual recognition datasets. Office-Home Dataset... Artistic images (A), Clip Art (C), Product images (P), and Real-World images (R)... Office-Caltech Dataset: this object recognition dataset is built using the 10 shared classes between the Office-31 and Caltech-256 datasets." (An illustrative snippet for chaining such benchmarks into sequential tasks follows the table.)
Dataset Splits | No | The paper mentions a 'testing split' and 'training epochs' and refers to 'evaluation protocols that are used by most of the recent classic UDA papers', implying standard splits. However, it does not explicitly state exact percentages, sample counts, or specific citations for the train/validation/test splits, which are needed for full reproducibility.
Hardware Specification | No | The paper mentions using a 'VGG16 network' and a 'ResNet-50 network' as backbone models, which implies the use of GPUs, but it does not specify any particular hardware components such as GPU models (e.g., NVIDIA A100), CPU models, or memory specifications used for the experiments.
Software Dependencies | No | The paper mentions using the 'VGG16 network' and 'ResNet-50 network' as backbone architectures and the 'UMAP [54] visualization tool'. However, it does not provide specific version numbers for any software, libraries, or frameworks (e.g., Python version, PyTorch/TensorFlow version, scikit-learn version).
Experiment Setup | Yes | "We train the base model using the source labeled data." "In these learning curves, the model has been trained for 100 epochs... We have stored 10 samples per class per domain in the memory buffer for experience replay." "Algorithm 1: LDAuCID (λ, τ, Nb)" (algorithm inputs/hyperparameters are explicitly mentioned). "We have studied the effect of the values for the hyperparameters λ and τ on the model performance for the binary UDA task S→M in Figure 3e and Figure 3f to suggest how to tune these parameters for practical usage. We also observe in Figure 3f that when the confidence parameter is τ ≈ 1, the model performance on the target domain improves." (A toy demonstration of the τ confidence filter follows the table.)
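
The replay protocol quoted in the Research Type row (10 samples per class per domain) is concrete enough to sketch. Below is a minimal class-balanced buffer in Python, assuming samples are stored as tensors; the names ReplayBuffer, add, and draw are ours for illustration, not the paper's.

import random
from collections import defaultdict

class ReplayBuffer:
    """Class-balanced memory: keep at most `per_class` samples for every
    (domain, class) pair, matching the 10-per-class-per-domain budget
    reported in the paper."""

    def __init__(self, per_class=10):
        self.per_class = per_class
        self.slots = defaultdict(list)  # (domain_id, label) -> [samples]

    def add(self, domain_id, label, sample):
        slot = self.slots[(domain_id, label)]
        if len(slot) < self.per_class:   # first-come storage; reservoir
            slot.append(sample)          # sampling would also work here

    def draw(self, batch_size):
        # Mix stored samples from all past domains uniformly for replay.
        pool = [(s, key[1]) for key, slot in self.slots.items() for s in slot]
        picked = random.sample(pool, min(batch_size, len(pool)))
        return [s for s, _ in picked], [y for _, y in picked]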
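
Algorithm 1 itself is not reproduced in this report, but its signature LDAuCID (λ, τ, Nb), together with the paper's description of a consolidated internal distribution modeled as a GMM, suggests the following hedged PyTorch-style skeleton. This is a sketch under stated assumptions, not the author's implementation: the helper names (swd, sample_internal, adapt_step) are ours, a sliced-Wasserstein term is assumed for the alignment loss weighted by λ, one GMM component per class is assumed, and device handling is omitted.

import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture

def swd(x, y, n_proj=50):
    """Sliced Wasserstein distance between two equal-sized embedding batches."""
    theta = F.normalize(torch.randn(n_proj, x.size(1), device=x.device), dim=1)
    px = torch.sort(x @ theta.T, dim=0).values   # sort each 1-D projection
    py = torch.sort(y @ theta.T, dim=0).values
    return ((px - py) ** 2).mean()

def sample_internal(gmm: GaussianMixture, n, tau):
    """Draw (embedding, pseudo-label) pairs from the internal GMM, keeping
    only draws whose component posterior exceeds tau (assumes some draws
    pass the filter)."""
    z, y = gmm.sample(4 * n)                     # oversample, then filter
    keep = gmm.predict_proba(z).max(axis=1) > tau
    z, y = z[keep][:n], y[keep][:n]
    return torch.as_tensor(z, dtype=torch.float32), torch.as_tensor(y)

def adapt_step(encoder, classifier, opt, gmm, x_t, buffer, lam, tau):
    """One update on an unlabeled target batch x_t."""
    z_p, y_p = sample_internal(gmm, x_t.size(0), tau)
    z_t = encoder(x_t)
    m = min(z_p.size(0), z_t.size(0))            # swd needs equal batch sizes
    loss = F.cross_entropy(classifier(z_p), y_p)      # pseudo-task loss
    loss = loss + lam * swd(z_t[:m], z_p[:m])         # distribution alignment
    x_r, y_r = buffer.draw(x_t.size(0))               # experience replay
    loss = loss + F.cross_entropy(classifier(encoder(torch.stack(x_r))),
                                  torch.as_tensor(y_r))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

Under the schedule reported above, one would call adapt_step for 100 epochs per target domain, consolidate the GMM on the adapted embeddings, and push Nb = 10 samples per class into the buffer before moving to the next task.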
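
To make the phrase "adapt them to build sequential UDA tasks" concrete, a task schedule can be expressed as ordered domain lists, one labeled source followed by unlabeled targets visited in sequence. The orderings below are illustrative assumptions only; the paper's eight sequences are defined in its experiments section.

# Illustrative (not the paper's exact) sequential-UDA schedules.
SCHEDULES = {
    "digits":         [("SVHN", "MNIST", "USPS")],
    "imageclef_da":   [("Caltech256", "ILSVRC2012", "PascalVOC2012")],
    "office_home":    [("Artistic", "ClipArt", "Product", "RealWorld")],
    "office_caltech": [("Amazon", "Webcam", "DSLR", "Caltech")],
}

for benchmark, sequences in SCHEDULES.items():
    for source, *targets in sequences:
        print(f"{benchmark}: source={source}, sequential targets={targets}")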
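
The Figure 3f observation quoted in the Experiment Setup row (performance improves as τ approaches 1) has a simple mechanical reading: a stricter confidence threshold keeps only internal-distribution draws that sit squarely inside one Gaussian mode, yielding cleaner pseudo-labels at the cost of fewer samples. The toy snippet below, on synthetic two-mode data, merely illustrates that trade-off; it is not an experiment from the paper.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-3, 1, (500, 2)),   # two well-separated
                       rng.normal(+3, 1, (500, 2))])  # synthetic modes
gmm = GaussianMixture(n_components=2, random_state=0).fit(data)

z, _ = gmm.sample(10_000)
conf = gmm.predict_proba(z).max(axis=1)               # per-draw confidence
for tau in (0.5, 0.9, 0.99):
    print(f"tau={tau}: kept {np.mean(conf > tau):.1%} of draws")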