Generalizing to Evolving Domains with Latent Structure-Aware Sequential Autoencoder
Authors: Tiexin Qin, Shiqi Wang, Haoliang Li
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on both synthetic and real-world datasets show that LSSAE achieves superior performance in the evolving domain generalization setting. |
| Researcher Affiliation | Academia | Tiexin Qin¹, Shiqi Wang¹, Haoliang Li¹ (¹City University of Hong Kong, Hong Kong). |
| Pseudocode | Yes | Algorithm 1 Optimization procedure for LSSAE |
| Open Source Code | No | The paper does not contain an explicit statement about releasing its own source code or provide a link to a repository for the methodology described. |
| Open Datasets | Yes | We compare the proposed LSSAE with other DG methods on two synthetic datasets (Circle and Sine) and four real-world datasets (Rotated MNIST, Portraits, Caltran, Power Supply). |
| Dataset Splits | Yes | We split the domains into source domains, intermediate domains and target domains with a ratio of 1/2 : 1/6 : 1/3. The intermediate domains are used as the validation set. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory specifications) used for running its experiments. |
| Software Dependencies | No | The paper states 'All of our experiments are implemented in the PyTorch platform based on the DomainBed package (Gulrajani & Lopez-Paz, 2021)', but does not provide specific version numbers for PyTorch, DomainBed, or any other software dependencies. |
| Experiment Setup | Yes | We list the values of hyperparameters for different datasets below. All models are optimized by Adam (Kingma & Ba, 2015). In our experiments, we found that keeping the balance of the three KL divergence terms for zc, zw and zv via adjusting the value of λ1, λ2 and λ3 is beneficial for the final results. |
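Two of the quoted details above are concrete enough to sketch. Since the paper's source code is not released, the function names, signatures, and default weights below are assumptions, not LSSAE's actual implementation: a split of ordered domains into source/intermediate/target sets at the quoted 1/2 : 1/6 : 1/3 ratio, and an objective that balances the three KL divergence terms for z_c, z_w, and z_v with weights λ1, λ2, λ3.

```python
def split_domains(domains):
    """Hypothetical sketch: split an ordered list of domains into
    source / intermediate (validation) / target sets with the
    quoted 1/2 : 1/6 : 1/3 ratio."""
    n = len(domains)
    n_src = n // 2          # first half: source domains
    n_val = n // 6          # next sixth: intermediate (validation) domains
    src = domains[:n_src]
    val = domains[n_src:n_src + n_val]
    tgt = domains[n_src + n_val:]  # remaining third: target domains
    return src, val, tgt


def lssae_objective(recon_loss, kl_zc, kl_zw, kl_zv,
                    lam1=1.0, lam2=1.0, lam3=1.0):
    """Hypothetical ELBO-style objective: reconstruction term plus the
    three KL divergence terms for z_c, z_w, z_v, balanced by the
    lambda weights the paper says it tunes per dataset."""
    return recon_loss + lam1 * kl_zc + lam2 * kl_zw + lam3 * kl_zv
```

For example, twelve time-indexed domains would split into six source, two intermediate, and four target domains under this ratio.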