Domain Adaptation with Adversarial Training on Penultimate Activations
Authors: Tao Sun, Cheng Lu, Haibin Ling
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are conducted on popular UDA benchmarks under both standard setting and source-data free setting. The results validate that our method achieves the best scores against previous arts. |
| Researcher Affiliation | Collaboration | Tao Sun (1), Cheng Lu (2), Haibin Ling (1); (1) Stony Brook University, USA; (2) XPeng Motors, USA |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/tsun/APA. |
| Open Datasets | Yes | Datasets. Office-Home (OH) has 65 classes from four domains: Artistic (A), Clip Art (C), Product (P), and Real-world (R). We use both the original version and the RS-UT (Reverse-unbalanced Source and Unbalanced Target) version (Tan, Peng, and Saenko 2020) that is manually created to have a large label shift. VisDA-2017 (Peng et al. 2017) is a synthetic-to-real dataset of 12 objects. DomainNet (Peng et al. 2019) (DN) is a large UDA benchmark. We use the 40-class version (Tan, Peng, and Saenko 2020) from four domains: Clipart (C), Painting (P), Real (R), Sketch (S). |
| Dataset Splits | No | The paper describes the use of source and target domains and discusses training parameters, but it does not explicitly state specific train/validation/test dataset splits (e.g., percentages or sample counts) needed for reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper states 'We implement our methods with PyTorch.' but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | For all tasks, we use batch size 16, β = 0.1, τ = 0.75, ϵ = 30 for APA_u, ϵ = 1.0 for APA_n, with the only exception of VisDA, where we use β = 0.04 for APA_n instead. |
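
For convenience, the hyperparameters quoted in the Experiment Setup row are collected below into a single configuration dictionary. This is only a minimal sketch: the key names (`beta`, `tau`, `eps_apa_u`, `eps_apa_n`) are illustrative stand-ins for the symbols β, τ, ϵ and are not taken from the released code at https://github.com/tsun/APA.

```python
# Hypothetical configuration mirroring the values quoted above.
# Key names are assumptions; the authors' repository may use different names.
APA_CONFIG = {
    "batch_size": 16,
    "beta": 0.1,        # loss weight beta (0.04 for APA_n on VisDA, per the quoted setup)
    "tau": 0.75,        # threshold tau
    "eps_apa_u": 30.0,  # perturbation magnitude epsilon for the APA_u variant
    "eps_apa_n": 1.0,   # perturbation magnitude epsilon for the APA_n variant
}
```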
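Since the paper provides no pseudocode or algorithm block (see the Pseudocode row), the sketch below illustrates, in PyTorch, what adversarial training on penultimate activations could look like. It is a hedged, VAT-style reading under stated assumptions: the KL consistency objective, the mapping of the normalized/unnormalized spaces to APA_n/APA_u, and all function names are assumptions rather than the authors' method; consult https://github.com/tsun/APA for the actual implementation.

```python
import torch
import torch.nn.functional as F

def apa_perturbation(features, classifier, eps, normalized=False, xi=1e-6, n_iters=1):
    """Sketch: find a perturbation of the penultimate activations that maximizes
    a KL consistency loss (VAT-style), then scale it to magnitude eps.
    normalized=True is assumed to correspond to APA_n (eps = 1.0),
    normalized=False to APA_u (eps = 30)."""
    z = features.detach()
    if normalized:
        z = F.normalize(z, dim=1)  # perturb the l2-normalized activations (assumption)
    with torch.no_grad():
        p_clean = F.softmax(classifier(z), dim=1)

    # Random initial direction, refined by gradient ascent on the KL divergence.
    d = torch.randn_like(z)
    d = F.normalize(d.flatten(1), dim=1).view_as(z)
    for _ in range(n_iters):
        d.requires_grad_(True)
        p_adv = F.log_softmax(classifier(z + xi * d), dim=1)
        loss = F.kl_div(p_adv, p_clean, reduction="batchmean")
        grad = torch.autograd.grad(loss, d)[0]
        d = F.normalize(grad.flatten(1), dim=1).view_as(z).detach()
    return eps * d


def apa_loss(features, classifier, eps, normalized=False):
    """Consistency loss between predictions on clean and adversarially
    perturbed penultimate activations (assumed objective)."""
    delta = apa_perturbation(features, classifier, eps, normalized)
    z = F.normalize(features, dim=1) if normalized else features
    p_clean = F.softmax(classifier(z), dim=1).detach()
    p_adv = F.log_softmax(classifier(z + delta), dim=1)
    return F.kl_div(p_adv, p_clean, reduction="batchmean")
```

In use, such a term would typically be added to the main objective with the weight β from the configuration above, e.g. `total_loss = task_loss + APA_CONFIG["beta"] * apa_loss(features, classifier, APA_CONFIG["eps_apa_n"], normalized=True)`; this weighting scheme is likewise an assumption, not a detail confirmed by the quoted text.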