Reconfigurability-Aware Selection for Contrastive Active Domain Adaptation

Authors: Zeyu Zhang, Chun Shen, Shuai Lü, Shaojie Zhang

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results on multiple benchmarks demonstrate the superiority of RASC.
Researcher Affiliation | Academia | 1) Key Laboratory of Symbolic Computation and Knowledge Engineering (Jilin University), Ministry of Education, China; 2) College of Computer Science and Technology, Jilin University, China; 3) College of Software, Jilin University, China
Pseudocode | No | The paper describes the methodology in prose and mathematical equations but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/zeyuz22/RASC.
Open Datasets | Yes | Datasets. To evaluate the superiority of RASC, we conduct experiments on four benchmark datasets: Office-31 [Saenko et al., 2010] is a mainstream DA dataset. It contains 3 domains: Amazon (A), Webcam (W), DSLR (D) and 31 categories. Office-Home [Venkateswara et al., 2017] is a more challenging dataset. It contains 4 domains: Artistic (A), Clipart (C), Product (P), Real-World (R) and 65 categories. VisDA [Peng et al., 2017] is a large-scale dataset containing 150k synthetic images and 55k real images. It contains two domains: Synthetic (S), Real (R) and 12 categories. MiniDomainNet [Zhou et al., 2021] is a subset of DomainNet [Peng et al., 2019], consisting of four domains: Clipart (C), Painting (P), Real (R), Sketch (S) and 126 categories.
Dataset Splits | No | The paper mentions 'labeling budget B to 5% and conduct 5 rounds of selection' and discusses training with 'source domain Ds' and 'unlabeled target domain Dut', but it does not specify explicit training, validation, and test splits (e.g., percentages or counts for each).
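The quoted selection schedule does pin down one concrete quantity: how many target samples get labeled per round. Assuming the total 5% budget is split evenly across the 5 selection rounds (an assumption on our part; the paper states only the total budget and the round count), the per-round count for a target set of size N is:

```python
def per_round_budget(n_target, budget_frac=0.05, rounds=5):
    """Target samples labeled per active-learning round,
    assuming the total budget is divided evenly over rounds
    (assumption; the paper only gives the totals)."""
    total_labeled = int(n_target * budget_frac)
    return total_labeled // rounds

# e.g. with 10,000 unlabeled target samples:
print(per_round_budget(10_000))  # → 100 samples per round, 500 in total
```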
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as CPU, GPU models, or memory.
Software Dependencies | No | The paper states, 'All experiments are implemented using the Pytorch platform.' However, it does not specify version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | For training, we use the SGD optimizer with a learning rate of 0.01, momentum of 0.9, and weight decay of 5e-4. We set the prototype update trade-off parameter λ to 0.9, the temperature τ in the similarity function to 0.1, and the trade-off parameters α and β to 0.5 and 0.7, respectively. The batch size is 36 for VisDA and 32 for the other datasets. For sample selection, we set the labeling budget B to 5% and conduct 5 rounds of selection.
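The reported optimizer settings can be sketched as a single SGD-with-momentum update step. This is a minimal pure-Python illustration of how the stated hyperparameters (lr = 0.01, momentum = 0.9, weight decay = 5e-4) interact, following PyTorch's SGD convention of folding weight decay into the gradient; the paper itself presumably just passes these values to `torch.optim.SGD`:

```python
# Hyperparameters as reported in the paper's setup
LR = 0.01            # learning rate
MOMENTUM = 0.9       # momentum coefficient
WEIGHT_DECAY = 5e-4  # L2 weight decay

def sgd_step(param, grad, velocity):
    """One SGD update with momentum and weight decay,
    following PyTorch's convention:
        g <- g + wd * p;  v <- m * v + g;  p <- p - lr * v
    """
    grad = grad + WEIGHT_DECAY * param
    velocity = MOMENTUM * velocity + grad
    param = param - LR * velocity
    return param, velocity

# Single scalar parameter, one step with gradient 0.5:
p, v = 1.0, 0.0
p, v = sgd_step(p, grad=0.5, velocity=v)
print(p)  # → 0.994995
```

With weight decay folded in, the effective gradient is 0.5 + 5e-4 · 1.0 = 0.5005, so the parameter moves from 1.0 to 1.0 − 0.01 · 0.5005 = 0.994995.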