Overcoming Concept Shift in Domain-Aware Settings through Consolidated Internal Distributions
Authors: Mohammad Rostami, Aram Galstyan
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on five benchmarks and observe that our algorithm compares favorably against SOTA UDA methods. ... Empirical Evaluation: Since sequential model adaptation is not a well-explored problem, we follow the UDA literature for evaluation due to the topic proximity. |
| Researcher Affiliation | Academia | Information Sciences Institute, University of Southern California {mrostami, galstyan}@isi.edu |
| Pseudocode | Yes | Algorithm 1: SDAUP (λ, ITR) |
| Open Source Code | Yes | Our code is provided at https://github.com/rostami-m/SDAUP. |
| Open Datasets | Yes | We validate our method on five standard UDA benchmarks and adapt them for sequential task learning: digit recognition tasks, the Office-31 dataset, the ImageCLEF-DA dataset, the Office-Caltech dataset, and VisDA-2017. Details about these datasets are included in the Appendix. |
| Dataset Splits | No | The paper mentions using source and target datasets but does not explicitly provide specific numerical training, validation, and test split percentages or counts for any of the datasets. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers, such as programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | A point of strength for our algorithm is that there are only two major algorithm-specific hyper-parameters, and tuning them is not challenging. We set τ = 0.99 and λ = 10^-3. (A sketch of how τ and λ might enter the adaptation step follows this table.) |
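
To make the quoted setup concrete, below is a minimal sketch of what one step of Algorithm 1 might look like, assuming the paper follows the recipe suggested by its title: fit a consolidated internal (GMM) distribution on source embeddings, then adapt by drawing a labeled pseudo-dataset from it and aligning target embeddings to it with a sliced Wasserstein distance. All names here (`encoder`, `classifier`, `gmm`, `adaptation_step`) are hypothetical, the `gmm.sample` interface is assumed, and the roles assigned to τ (confidence threshold on pseudo-samples) and λ (weight on the alignment loss) are an interpretation of the quoted values, not the authors' code; the repository linked above is the reference implementation.

```python
import torch
import torch.nn.functional as F


def sliced_wasserstein(x, y, n_projections=128):
    """Monte-Carlo sliced Wasserstein distance between two equal-sized
    point clouds: project onto random directions, sort, compare."""
    theta = F.normalize(torch.randn(n_projections, x.size(1), device=x.device), dim=1)
    x_proj, _ = torch.sort(x @ theta.t(), dim=0)  # sorting solves 1-D optimal transport
    y_proj, _ = torch.sort(y @ theta.t(), dim=0)
    return ((x_proj - y_proj) ** 2).mean()


def adaptation_step(encoder, classifier, optimizer, target_x, gmm,
                    tau=0.99, lam=1e-3):
    """One sequential-adaptation update on an unlabeled target batch.

    `gmm` is an assumed helper wrapping the internal distribution fitted on
    source embeddings; gmm.sample(n) is taken to return (embeddings, labels)
    as torch tensors, with one GMM mode per class.
    """
    z_t = encoder(target_x)             # target embeddings
    z_p, y_p = gmm.sample(z_t.size(0))  # pseudo-dataset from the internal distribution

    # Keep only pseudo-samples the classifier labels with confidence >= tau
    # (one plausible reading of the tau = 0.99 threshold in the paper).
    with torch.no_grad():
        conf, _ = F.softmax(classifier(z_p), dim=1).max(dim=1)
    keep = conf >= tau

    loss = torch.zeros((), device=z_t.device)
    if keep.any():
        # Supervised loss on the confident pseudo-dataset, so the model is
        # never retrained on raw source data.
        loss = loss + F.cross_entropy(classifier(z_p[keep]), y_p[keep])

    # Align target embeddings with the internal distribution, weighted by lambda.
    loss = loss + lam * sliced_wasserstein(z_t, z_p)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

One reason the sliced Wasserstein distance is a popular choice for this kind of alignment is computational: each random projection reduces optimal transport to sorting two 1-D vectors, so the distance is cheap to evaluate inside a training loop and differentiable end to end.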