Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Algorithms and Theory for Supervised Gradual Domain Adaptation
Authors: Jing Dong, Shiji Zhou, Baoxiang Wang, Han Zhao
TMLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on both semi-synthetic and large-scale real datasets corroborate our findings and demonstrate the effectiveness of our objectives. |
| Researcher Affiliation | Academia | Jing Dong EMAIL The Chinese University of Hong Kong, Shenzhen; Shiji Zhou EMAIL Tsinghua University; Baoxiang Wang EMAIL The Chinese University of Hong Kong, Shenzhen; Han Zhao EMAIL University of Illinois Urbana-Champaign |
| Pseudocode | No | The paper provides mathematical objective functions (e.g., Equations 4 and 5) but does not present any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code or provide links to a code repository. |
| Open Datasets | Yes | We conduct our experiments on Rotating MNIST, Portraits, and FMOW, with a detailed description of each dataset in the appendix. FMOW (Christie et al., 2018); Portraits (Ginosar et al., 2015): A century of portraits: A visual historical record of American high school yearbooks. In International Conference on Computer Vision Workshops, 2015. |
| Dataset Splits | No | Each experiment is repeated over 5 random seeds and reported with the mean and 1 std. Random seeds are mentioned, but specific dataset split percentages, sample counts, or predefined splits are not provided. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper does not provide specific details about software dependencies, such as library names with version numbers. |
| Experiment Setup | No | The paper compares adaptation methods (No Adaptation, Direct Adaptation, Multiple Source Domain Adaptation, Gradual Adaptation) and uses CNN and LSTM architectures, but the main text does not provide specific hyperparameters such as learning rates, batch sizes, or training epochs. |