Evolving Standardization for Continual Domain Generalization over Temporal Drift

Authors: Mixue Xie, Shuang Li, Longhui Yuan, Chi Harold Liu, Zehui Dai

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on multiple real-world datasets including images and texts validate the efficacy of our EvoS.
Researcher Affiliation | Collaboration | Mixue Xie (1), Shuang Li (1), Longhui Yuan (1), Chi Harold Liu (1), Zehui Dai (2); (1) Beijing Institute of Technology, China; (2) Lazada Search & Monetisation Tech, China
Pseudocode | Yes | "Algorithm 1: Training procedure for EvoS" and "Algorithm 2: Inference procedure for EvoS" are provided in Appendix C.
Open Source Code | Yes | Code is available at https://github.com/BIT-DA/EvoS.
Open Datasets | Yes | Thanks to the work in [56], several real-world datasets with distribution shifts over time are available. We evaluate EvoS on three image classification datasets (Yearbook and fMoW from [56], plus RMNIST) and two text classification datasets (Huffpost and Arxiv from [56]).
Dataset Splits | Yes | For each training domain of all datasets, we randomly select 90% of the data as a training split and 10% as a validation split. (A minimal sketch of this split appears after the table.)
Hardware Specification | Yes | We run each task on a single NVIDIA GeForce RTX 3090 GPU for three random trials.
Software Dependencies | No | The paper states 'All experiments are implemented via PyTorch' but does not provide a version number for PyTorch or any other software dependency.
Experiment Setup | Yes | For optimization, we use the Adam optimizer with lr = 1e-3 for Yearbook and RMNIST, lr = 2e-4 for fMoW, and lr = 2e-5 for Huffpost and Arxiv. The batch size is set to 64 for all datasets. As for hyper-parameters, we select them via grid search using the validation splits of the training domains, finally using α = 2.0 for RMNIST, α = 1.0 for the others, and λ = 1.0, W = 3 for all datasets. (These settings are collected in the sketch after the table.)
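
As a concrete reading of the Dataset Splits row, here is a minimal sketch of the per-domain 90/10 split. Only the 90%/10% ratio and the per-domain application come from the paper; the function name `split_domain`, the fixed seed, and the list-of-samples representation are illustrative assumptions.

```python
import random

def split_domain(samples, train_frac=0.9, seed=0):
    """Randomly split one training domain into 90% train / 10% validation."""
    rng = random.Random(seed)            # fixed seed for a reproducible split (assumed)
    indices = list(range(len(samples)))
    rng.shuffle(indices)
    cut = int(train_frac * len(indices))
    train = [samples[i] for i in indices[:cut]]
    val = [samples[i] for i in indices[cut:]]
    return train, val

# The same split is applied independently to every training domain, e.g.:
# splits = {domain: split_domain(data) for domain, data in domain_data.items()}
```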
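
And a hedged PyTorch sketch collecting the Experiment Setup values. The learning rates, batch size, α, λ, and W are the values quoted above; the dictionary keys and the helper names `make_optimizer` and `get_alpha` are hypothetical, and `model` is a placeholder.

```python
import torch

# Per-dataset learning rates reported in the paper.
LR = {
    "yearbook": 1e-3, "rmnist": 1e-3,   # image datasets
    "fmow": 2e-4,
    "huffpost": 2e-5, "arxiv": 2e-5,    # text datasets
}
BATCH_SIZE = 64          # shared across all datasets
ALPHA = {"rmnist": 2.0}  # alpha = 1.0 for every other dataset
LAMBDA = 1.0             # lambda, shared across datasets
WINDOW = 3               # window size W, shared across datasets

def make_optimizer(model: torch.nn.Module, dataset: str) -> torch.optim.Adam:
    """Adam with the dataset-specific learning rate from the paper."""
    return torch.optim.Adam(model.parameters(), lr=LR[dataset])

def get_alpha(dataset: str) -> float:
    """Dataset-specific alpha; defaults to 1.0 as reported."""
    return ALPHA.get(dataset, 1.0)
```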