Learning to Adapt to Evolving Domains

Authors: Hong Liu, Mingsheng Long, Jianmin Wang, Yu Wang

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments validate the effectiveness of our method on evolving domain adaptation benchmarks.
Researcher Affiliation | Academia | School of Software, KLiss, BNRist, Tsinghua University; Department of Electronic Engineering, Tsinghua University
Pseudocode | Yes | Algorithm 1: Meta-Training of Evolution Adaptive Meta-Learning (EAML)
Open Source Code | Yes | Codes are available at https://github.com/Liuhong99/EAML.
Open Datasets | Yes | Rotated MNIST: This dataset consists of MNIST digits of various rotations. Evolving Vehicles: This dataset contains sedans and trucks from the 1970s to 2010s (see Figure 1). Caltran: This is a real-world dataset of images captured by a camera at an intersection over time.
Dataset Splits | Yes | (1) In the meta-training phase we have access to adequate labeled examples from the source domain, and part of the target unlabeled data from a target domain evolving over time; (2) new target data of the meta-testing phase arrive sequentially online from the same evolving target distribution and cannot be stored... We randomly sample a trajectory T = {X_t1, X_t2, ..., X_tn}, and a target trajectory from the query set as T_qry = {X_t1, X_t2, ..., X_tn}.
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU or CPU models.
Software Dependencies | No | The paper states, 'We implement our method on PyTorch,' but does not specify a version number for PyTorch or any other software dependencies.
Experiment Setup | Yes | We use SGD with 0.9 momentum and 5×10⁻⁴ weight decay. The learning rates of the inner loop and the outer loop are set to 0.01 and 0.001 respectively. For rotated MNIST, we use LeNet [16] as the backbone. The meta-representation fθ includes two convolutional layers. The adapter is a two-layer fully-connected network with ReLU activations. For Evolving Vehicles and Caltran, we use a six-layer convolutional network as the backbone. The meta-representation fθ includes four convolutional layers... The adapter is a two-layer convolutional network with ReLU activations.
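The "Dataset Splits" row quotes the paper's trajectory sampling: support and query trajectories of unlabeled target batches are drawn from the evolving target stream. A minimal stdlib-only sketch of that sampling step, assuming the stream is an ordered list of batches and that trajectories are consecutive windows (the function name `sample_trajectory` and the toy stream are illustrative assumptions, not from the paper):

```python
import random

def sample_trajectory(target_stream, n):
    """Randomly pick a length-n trajectory {X_t1, ..., X_tn} of consecutive
    unlabeled target batches from the evolving target stream (assumption:
    a trajectory is a contiguous window over the time-ordered stream)."""
    start = random.randrange(len(target_stream) - n + 1)
    return target_stream[start:start + n]

# Toy evolving target stream: each element stands for one unlabeled batch X_t.
stream = [f"X_t{t}" for t in range(10)]
T_sup = sample_trajectory(stream, n=4)   # support trajectory for adaptation
T_qry = sample_trajectory(stream, n=4)   # query trajectory for the meta-update
```

In the meta-testing phase the same windows would instead arrive sequentially online, so no random access into `stream` is possible there.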
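The "Experiment Setup" row specifies a two-level update schedule: an inner loop (learning rate 0.01) that adapts the adapter along a trajectory, and an outer loop (learning rate 0.001) that meta-updates the representation. A numeric sketch of just that schedule on a toy scalar objective, assuming plain gradient descent and a made-up quadratic loss (the real method trains the networks described above under Algorithm 1):

```python
def inner_step(adapter_w, grad, lr=0.01):
    # Inner-loop update of the adapter parameters (lr = 0.01 per the paper).
    return adapter_w - lr * grad

def outer_step(meta_w, grad, lr=0.001):
    # Outer-loop meta-update of the representation (lr = 0.001 per the paper).
    return meta_w - lr * grad

# Toy loss L(meta, adapter) = (meta + adapter - 1)^2; its gradient w.r.t.
# either parameter is 2 * (meta + adapter - 1). Purely illustrative.
meta_w, adapter_w = 0.0, 0.0
for _ in range(5):                        # inner loop: adapt on a trajectory
    g = 2 * (meta_w + adapter_w - 1)
    adapter_w = inner_step(adapter_w, g)
g = 2 * (meta_w + adapter_w - 1)          # outer loop: one meta-update
meta_w = outer_step(meta_w, g)
```

The 10:1 ratio between inner and outer learning rates means the adapter moves quickly within a trajectory while the shared representation changes slowly across trajectories.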