Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Target Semantics Clustering via Text Representations for Robust Universal Domain Adaptation

Authors: Weinan He, Zilei Wang, Yixin Zhang

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Experimentally, we evaluate the universality of UniDA algorithms under four category shift scenarios. Extensive experiments on four benchmarks demonstrate the effectiveness and robustness of our method, which has achieved state-of-the-art performance.
Researcher Affiliation Academia 1. University of Science and Technology of China, Hefei, China 2. Institute of Artificial Intelligence, Hefei Comprehensive National Science Center
Pseudocode No The paper mentions summarizing a process in the Appendix for clarity and rigor, but no explicit pseudocode or algorithm block is present in the main text provided.
Open Source Code Yes Code: https://github.com/Sapphire-356/TASC
Open Datasets Yes Our method will be validated on four popular datasets in Domain Adaptation, i.e., Office (Saenko et al. 2010), Office-Home (Venkateswara et al.), VisDA (Peng et al. 2018), and DomainNet (Peng et al. 2019).
Dataset Splits Yes Detailed classes split in these scenarios are summarized in Appendix, which is the same as dataset split in (Qu et al. 2023).
Hardware Specification No The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments.
Software Dependencies No The paper mentions using a pre-trained CLIP model with ViT-B/16 and Transformer, and LoRA, but does not provide specific version numbers for these or any other software libraries or frameworks.
Experiment Setup Yes LoRA (Hu et al. 2021) is used in all transformer blocks in both image and text encoders with rank = 8. We adopt the same learning rate scheduler η = η0 · (1 + 10p)^(−0.75) as (Long et al.; Liang, Hu, and Feng), where p is the training progress changing from 0 to 1 and η0 = 0.0001. For the hyper-parameters, we empirically set the λdiv to 0.6,...τ is set to 0.02. In the discrete optimization step of TASC, nc = 300, γent = 0.3, and Nouter = 20. K0 is set to 100 for Office, Office-Home, and VisDA, but 400 for DomainNet.
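The learning rate scheduler quoted above can be written as a short Python helper. This is a minimal sketch of the formula η = η0 · (1 + 10p)^(−0.75) only; the function name and signature are illustrative assumptions, not code from the paper's repository.

```python
def lr_schedule(eta0: float, p: float) -> float:
    """Annealed learning rate eta = eta0 * (1 + 10*p)^(-0.75).

    eta0: initial learning rate (the paper uses 0.0001).
    p:    training progress, increasing from 0 to 1.
    """
    return eta0 * (1.0 + 10.0 * p) ** (-0.75)


# At the start of training (p = 0) the rate equals eta0,
# and it decays monotonically as training progresses.
```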