Enhancing Evolving Domain Generalization through Dynamic Latent Representations
Authors: Binghui Xie, Yongqiang Chen, Jiaqi Wang, Kaiwen Zhou, Bo Han, Wei Meng, James Cheng
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results on both synthetic and real-world datasets demonstrate that MISTS succeeds in capturing both evolving and invariant information, and present promising results in evolving domain generalization tasks. |
| Researcher Affiliation | Academia | Binghui Xie¹, Yongqiang Chen¹, Jiaqi Wang¹, Kaiwen Zhou¹, Bo Han², Wei Meng¹, James Cheng¹; ¹The Chinese University of Hong Kong, ²Hong Kong Baptist University |
| Pseudocode | Yes | Algorithm 1: Optimization procedure of MISTS |
| Open Source Code | No | The paper does not provide any explicit statement about releasing open-source code for the described methodology, nor does it include a link to a code repository. |
| Open Datasets | Yes | To evaluate the effectiveness of MISTS, we conducted experiments on both synthetic and real-world datasets, following the setting of LSSAE (Qin, Wang, and Li 2022). Specifically, we compared our approach with invariant learning methods and the state-of-the-art EDG methods on three synthetic datasets (Circle, Sine and Rotated MNIST) and three real-world datasets (Portraits, Caltran, and Elec). We also evaluated the results on one additional variant, Sine-C, which was created for EDG settings by Qin, Wang, and Li (2022). The paper provides formal citations for these datasets, e.g., 'The Circle dataset (Pesaranghader and Viktor 2016)', 'The Portraits dataset (Ginosar et al. 2015)', 'The Caltran dataset (Hoffman, Darrell, and Saenko 2014)', 'The Elec dataset (Dau et al. 2019)', 'The Rotated MNIST (RMNIST) dataset (Ghifary et al. 2015)'. |
| Dataset Splits | Yes | The domains were split into the source, intermediate, and target domains with a ratio of 1/2 : 1/6 : 1/3, with the intermediate domains used as the validation set (see the split sketch below the table). |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run its experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used (e.g., Python, PyTorch, TensorFlow, CUDA). |
| Experiment Setup | No | The paper mentions aspects of the experimental setup such as dataset splitting ('The domains were split into the source, intermediate, and target domains with a ratio of 1/2 : 1/6 : 1/3') and variables in Algorithm 1 like 'Training epochs E; Batch Size B'. However, it does not provide concrete numerical values for these hyperparameters or other system-level training settings like learning rates, optimizers, or specific model initialization values in the main text. |
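
To make the split described in the Dataset Splits row concrete, here is a minimal sketch of partitioning an ordered list of domains into source, intermediate (validation), and target sets at the stated 1/2 : 1/6 : 1/3 ratio. The helper name `split_domains` and the use of Python are our own assumptions; the paper does not provide an implementation.

```python
# Minimal sketch (our assumption) of the chronological domain split reported
# in the paper: source : intermediate : target = 1/2 : 1/6 : 1/3, with the
# intermediate domains serving as the validation set.

def split_domains(domains):
    """Split an ordered list of domains into source/validation/target sets."""
    n = len(domains)
    n_source = n // 2                      # first 1/2 of the domains
    n_val = n // 6                         # next 1/6, used for validation
    source = domains[:n_source]
    validation = domains[n_source:n_source + n_val]
    target = domains[n_source + n_val:]    # remaining ~1/3
    return source, validation, target


# Example: an RMNIST-style setup with 12 sequential domains.
domains = [f"domain_{i}" for i in range(12)]
src, val, tgt = split_domains(domains)
print(len(src), len(val), len(tgt))  # 6 2 4 -> ratio 1/2 : 1/6 : 1/3
```

Because the paper does not report the per-dataset domain counts used for splitting, the 12-domain example above is purely illustrative; any ordered sequence of domains can be passed in.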