Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Protecting Model Adaptation from Trojans in the Unlabeled Data

Authors: Lijun Sheng, Jian Liang, Ran He, Zilei Wang, Tieniu Tan

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Experiments across commonly used benchmarks and adaptation methods demonstrate the effectiveness of DIFFADAPT.
Researcher Affiliation Academia (1) University of Science and Technology of China; (2) NLPR & MAIS, Institute of Automation, Chinese Academy of Sciences; (3) University of Chinese Academy of Sciences; (4) Nanjing University
Pseudocode No The paper describes the defense method DIFFADAPT through textual descriptions and a framework illustration in Fig. 2, but it does not include a structured pseudocode or algorithm block.
Open Source Code Yes Code: https://github.com/TomSheng21/DiffAdapt
Open Datasets Yes We evaluate our framework on three commonly used model adaptation benchmarks from image classification tasks. Office (Saenko et al. 2010) is a classic model adaptation dataset... Office-Home (Venkateswara et al. 2017) is a popular dataset... DomainNet (Peng et al. 2019) is a large-scale, challenging benchmark...
Dataset Splits Yes In our experiments, we divide 80% of the target domain samples as the unlabeled training set for adaptation and the remaining 20% as the test set for metric calculation.
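The 80/20 target-domain split described above can be sketched as follows; the function name and random-seed handling are illustrative assumptions, not taken from the paper's code:

```python
import numpy as np

def split_target_domain(num_samples: int, train_frac: float = 0.8, seed: int = 0):
    """Shuffle target-domain sample indices and split them into an unlabeled
    adaptation set (80%) and a held-out test set (20%)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_samples)
    cut = int(train_frac * num_samples)
    return idx[:cut], idx[cut:]

# e.g. a target domain with 1000 samples -> 800 for adaptation, 200 for testing
train_idx, test_idx = split_target_domain(1000)
```

Fixing the seed keeps the split reproducible across runs, which matters when the same test set must be reused for metric calculation.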
Hardware Specification No The paper mentions using
Software Dependencies No The paper mentions using
Experiment Setup Yes For all experiments, we simply set the noise pixels to be sampled from a uniform distribution [-0.25, 0.25]. DIFFADAPT is a plug-and-play defense method for existing model adaptation algorithms, so no additional hyperparameters are introduced. Other details are consistent with the official settings of the adaptation algorithms. [...] For the non-optimization-based trigger, we use the Hello Kitty trigger in Blended (Chen et al. 2017) directly. The optimization-based trigger is implemented by GAP (Poursaeed et al. 2018) with an ℓ∞-norm bound of 0.5 in a 120 × 120 patch for Office and a 100 × 100 patch for the others.
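The per-pixel noise sampling in the setup above can be sketched like this; the function name, image shape, and seed are illustrative assumptions, only the U(-0.25, 0.25) range comes from the quoted setup:

```python
import numpy as np

def sample_uniform_noise(shape, low=-0.25, high=0.25, seed=0):
    """Draw per-pixel noise from a uniform distribution on [low, high],
    matching the [-0.25, 0.25] range stated in the experiment setup."""
    rng = np.random.default_rng(seed)
    return rng.uniform(low, high, size=shape)

# e.g. noise for a 3-channel 224x224 image in [0, 1] pixel scale
noise = sample_uniform_noise((3, 224, 224))
```

Because no extra hyperparameters are introduced beyond this range, the noise step stays plug-and-play with the host adaptation algorithm's official settings.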