Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Semantics-Adaptive Activation Intervention for LLMs via Dynamic Steering Vectors

Authors: Weixuan Wang, Jingyuan Yang, Wei Peng

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To validate the effectiveness of SADI, we conduct extensive experiments using four diverse model backbones: LLAMA2-7B-CHAT, BLOOMZ-7B, MISTRAL-7B, FALCON-7B-INSTRUCT across eleven widely used benchmarks. The experiments involve a comprehensive range of tasks, from multiple-choice tasks (COPA, StoryCloze, NLI, MMLU, SST2, SST5, BoolQ, Winogrande) to open-ended generation tasks (TriviaQA, ToxiGen, and TruthfulQA). Our experimental results reveal that SADI significantly outperforms existing activation intervention methods."
Researcher Affiliation | Collaboration | ¹School of Informatics, University of Edinburgh; ²Huawei Technologies Co., Ltd.; ³School of Engineering, RMIT University
Pseudocode | Yes | Algorithm 1: SADI: Semantics-Adaptive Dynamic Intervention
Open Source Code | Yes | https://github.com/weixuan-wang123/SADI
Open Datasets | Yes | "For the multiple-choice tasks, we use datasets: COPA (Gordon et al., 2012), StoryCloze (Mostafazadeh et al., 2016), NLI (Bowman et al., 2015), MMLU (Hendrycks et al., 2021), SST2 (Socher et al., 2013), SST5 (Socher et al., 2013), BoolQ (Clark et al., 2019), and Winogrande (Sakaguchi et al., 2020)... For the open-ended generation tasks, we apply SADI on TriviaQA (Joshi et al., 2017), TruthfulQA (Lin et al., 2022), ToxiGen (Hartvigsen et al., 2022) datasets."
Dataset Splits | Yes | "Contrastive Pairs Construction: For multiple-choice tasks, we generate positive prompts by concatenating questions with correct answers and generate negative prompts using a randomly chosen incorrect answer. ... The number of data used for identifying key elements and testing for the 11 tasks (e.g., COPA: 500 for identification, 500 for the test set)."
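The contrastive-pairs construction quoted above can be sketched as a mean-difference steering direction over paired activations. This is a minimal illustration, not the paper's exact identification procedure; the function name and the mean-difference estimator are assumptions for illustration only.

```python
import numpy as np

def contrastive_steering_vector(pos_activations, neg_activations):
    """Mean-difference steering direction from contrastive prompts.

    pos_activations / neg_activations: arrays of shape (n_pairs, hidden_dim)
    holding hidden states for positive prompts (question + correct answer)
    and negative prompts (question + randomly chosen incorrect answer).
    Hypothetical sketch; SADI's actual identification step may differ.
    """
    pos = np.asarray(pos_activations, dtype=float)
    neg = np.asarray(neg_activations, dtype=float)
    # Average over pairs, then take the difference of the two means.
    return pos.mean(axis=0) - neg.mean(axis=0)
```

A vector computed this way points from "incorrect-answer" activations toward "correct-answer" activations, which is the usual rationale for contrastive steering directions.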
Hardware Specification | Yes | "Specifically, we employ the AdamW optimizer with a learning rate of 2×10⁻⁶ and a batch size of 4, conducting the fine-tuning across three epochs on four NVIDIA A100 GPUs (80GB)."
Software Dependencies | No | The paper mentions using the "AdamW optimizer" and pre-trained LLMs but does not provide specific version numbers for programming languages (e.g., Python), libraries (e.g., PyTorch, TensorFlow), or other software components that would be necessary for reproduction.
Experiment Setup | Yes | "Specifically, we employ the AdamW optimizer with a learning rate of 2×10⁻⁶ and a batch size of 4, conducting the fine-tuning across three epochs on four NVIDIA A100 GPUs (80GB). ... Hyperparameters K and δ: Our method introduces two key hyperparameters: K ∈ ℕ₊, specifying the number of top elements targeted during the intervention, and δ ∈ ℝ₊, controlling the strength of the intervention."
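To make the roles of K and δ concrete, the following sketch applies a top-K masked, δ-scaled steering update to a single hidden state: K selects which elements of the steering direction are intervened on, and δ scales the intervention strength. The function name and update rule are hypothetical; the paper's semantics-adaptive update may differ in its exact form.

```python
import numpy as np

def sadi_style_intervention(hidden, steering_vec, k, delta):
    """Top-K masked, delta-scaled steering update on one activation vector.

    hidden:       (d,) activation vector at the intervened layer
    steering_vec: (d,) contrastive steering direction
    k:            number of largest-magnitude steering elements kept (K ∈ ℕ₊)
    delta:        scalar intervention strength (δ ∈ ℝ₊)
    Hypothetical sketch of how the two hyperparameters interact; not the
    paper's exact algorithm.
    """
    steering_vec = np.asarray(steering_vec, dtype=float)
    mask = np.zeros_like(steering_vec)
    # Keep only the K elements of the steering vector with largest magnitude.
    mask[np.argsort(np.abs(steering_vec))[-k:]] = 1.0
    return np.asarray(hidden, dtype=float) + delta * mask * steering_vec
```

With this formulation, a grid search over K (sparsity of the intervention) and δ (its magnitude) matches the tuning the quoted setup describes.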