Unveiling the Pitfalls of Knowledge Editing for Large Language Models

Authors: Zhoubo Li, Ningyu Zhang, Yunzhi Yao, Mengru Wang, Xi Chen, Huajun Chen

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To achieve this, we introduce new benchmark datasets and propose innovative evaluation metrics. Our results underline two pivotal concerns: (1) Knowledge Conflict: Editing groups of facts that logically clash can magnify the inherent inconsistencies in LLMs, a facet neglected by previous methods. (2) Knowledge Distortion: Altering parameters with the aim of editing factual knowledge can irrevocably warp the innate knowledge structure of LLMs. Experimental results vividly demonstrate that knowledge editing might inadvertently cast a shadow of unintended consequences on LLMs, which warrants attention and effort in future work.
Researcher Affiliation | Collaboration | Zhoubo Li (1,2), Ningyu Zhang (1,2), Yunzhi Yao (1,2), Mengru Wang (1,2), Xi Chen (4), Huajun Chen (1,2,3). (1) College of Computer Science and Technology, Zhejiang University; (2) ZJU-Ant Group Joint Research Center for Knowledge Graphs, Zhejiang University; (3) ZJU-Hangzhou Global Scientific and Technological Innovation Center, Zhejiang University; (4) Tencent.
Pseudocode | No | No pseudocode or algorithm blocks are present in the paper.
Open Source Code | Yes | Code and data are available at https://github.com/zjunlp/PitfallsKnowledgeEditing.
Open Datasets | Yes | We construct our CONFLICTEDIT dataset from Wikidata (Vrandečić & Krötzsch, 2014); it consists of two types of edits, namely REVERSE EDIT and COMPOSITE EDIT, as defined in Equation 2 and Equation 4. ... We construct the ROUNDEDIT dataset using data sourced from Wikidata, enriched by some 1-to-n relations mined from it. (A hedged data-shape sketch for these edit pairs appears as the first sketch below the table.)
Dataset Splits | No | For training MEND, samples with locality facts are essential. Prioritizing training efficacy, we opt not to utilize the NQ dataset, as done by ZsRE. Instead, our sampling is anchored in COUNTERFACT (Meng et al., 2022).
Hardware Specification | No | We construct CONFLICTEDIT by sampling thousands of REVERSE EDIT and COMPOSITE EDIT instances, respectively. In the COMPOSITE EDIT setup, we respectively utilize GPT2-XL (1.5B) (Radford et al., 2019) and GPT-J (6B) (Wang & Komatsuzaki, 2021) to confirm the correctness of the tied fact k_f before experiments. (A hedged fact-check sketch appears as the second sketch below the table.)
Software Dependencies | No | For basic Fine-Tuning (FT), we follow Meng et al. (2022) to re-implement their study, which uses Adam (Kingma & Ba, 2014) with early stopping to minimize −log P_G′[o* | p], changing only mlp_proj weights at selected layer 1 in GPT2-XL and layer 21 in GPT-J. ... We directly apply the code and MLP weights provided by the original paper and keep the default settings for hyper-parameters.
Experiment Setup | Yes | For basic Fine-Tuning (FT), we follow Meng et al. (2022) to re-implement their study, which uses Adam (Kingma & Ba, 2014) with early stopping to minimize −log P_G′[o* | p], changing only mlp_proj weights at selected layer 1 in GPT2-XL and layer 21 in GPT-J. For both models, all hyper-parameters follow default settings. To ensure fairness in the experiments, we always use the unconstrained fine-tuning approach. (A hedged fine-tuning sketch appears as the third sketch below the table.)
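
Sketch 1 (CONFLICTEDIT pairs, Open Datasets row). A minimal sketch of one plausible record layout for a REVERSE EDIT pair, assuming a simple (subject, relation, object) triple encoding. The field names, example values, and the is_clashing_pair helper are illustrative assumptions, not the authors' released format; the exact construction follows Equations 2 and 4 of the paper.

    # Hedged sketch: a plausible record layout for one CONFLICTEDIT pair.
    # Field names, example values, and the pairing rule are assumptions.
    from dataclasses import dataclass

    @dataclass
    class Edit:
        subject: str
        relation: str
        old_object: str
        new_object: str

    # A REVERSE EDIT pair: the second edit rewrites the inverse-relation fact
    # that the first edit just made true, so applying both asks the model to
    # hold logically clashing knowledge.
    reverse_pair = (
        Edit("Hamlet", "author", "Shakespeare", "Dickens"),
        Edit("Dickens", "notable work", "Hamlet", "Macbeth"),
    )

    def is_clashing_pair(e1: Edit, e2: Edit) -> bool:
        """True when e2 retracts the inverse of the fact e1 asserted:
        e1 makes (s, r, o*) true while e2 edits (o*, r_inv, s) away."""
        return e2.subject == e1.new_object and e2.old_object == e1.subject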
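
Sketch 2 (pre-experiment fact check, Hardware Specification row). The excerpt says GPT2-XL and GPT-J are queried to confirm the tied fact k_f before editing. A minimal sketch of one way to run such a check with Hugging Face transformers, assuming a fact counts as known when the expected object is the model's greedy continuation; the greedy criterion, the prompt, and the model_knows helper are assumptions, not the paper's exact test.

    # Hedged sketch: check that a model already "knows" a fact before editing.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    def model_knows(model, tokenizer, prompt: str, expected_object: str) -> bool:
        """Return True when the expected object is the model's greedy
        continuation of the prompt (an assumed, crude notion of 'knows')."""
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            out = model.generate(**inputs, max_new_tokens=5, do_sample=False)
        continuation = tokenizer.decode(out[0, inputs["input_ids"].shape[1]:])
        return continuation.strip().startswith(expected_object)

    tok = AutoTokenizer.from_pretrained("gpt2-xl")  # the GPT-J check would swap the model id
    lm = AutoModelForCausalLM.from_pretrained("gpt2-xl")
    print(model_knows(lm, tok, "The Eiffel Tower is located in the city of", "Paris"))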
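
Sketch 3 (FT baseline, Software Dependencies and Experiment Setup rows). The quoted setup uses Adam with early stopping to minimize −log P_G′[o* | p], updating only mlp_proj weights at one layer. A minimal sketch under the assumption that transformer.h.1.mlp.c_proj in Hugging Face GPT-2 XL corresponds to the "mlp_proj weights at selected layer 1"; the learning rate, step budget, and stopping threshold are assumptions, not the paper's values.

    # Hedged sketch of the unconstrained FT baseline described in the excerpts.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2-xl")
    model = AutoModelForCausalLM.from_pretrained("gpt2-xl")

    # Freeze everything except the MLP projection at layer 1 (assumed to be
    # the "mlp_proj weights at selected layer 1" of the excerpt).
    for name, p in model.named_parameters():
        p.requires_grad = "transformer.h.1.mlp.c_proj" in name

    prompt, target = "The Eiffel Tower is located in the city of", " Rome"  # toy edit
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    target_ids = tok(target, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, target_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100  # score only the target tokens

    opt = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=5e-4  # lr is an assumption
    )
    for step in range(50):
        loss = model(input_ids, labels=labels).loss  # -log P(o* | p) over target tokens
        opt.zero_grad()
        loss.backward()
        opt.step()
        if loss.item() < 1e-2:  # crude early stopping; threshold is an assumption
            break

"Unconstrained" here means no norm constraint is imposed on the weight updates, matching the quoted "unconstrained fine-tuning approach".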