Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

BalancEdit: Dynamically Balancing the Generality-Locality Trade-off in Multi-modal Model Editing

Authors: Dongliang Guo, Mengxuan Hu, Zihan Guan, Thomas Hartvigsen, Sheng Li

ICML 2025

Reproducibility Variable: Result (with supporting LLM Response)

Research Type: Experimental
LLM Response: "Our comprehensive results confirm the effectiveness of BalancEdit, demonstrating minimal trade-offs while maintaining robust editing capabilities. Our code and dataset are available at https://github.com/donglgcn/BalancEdit/tree/MMOKVQA."

Researcher Affiliation: Academia
LLM Response: "1University of Virginia, Charlottesville, USA. Correspondence to: Sheng Li <EMAIL>."

Pseudocode: No
LLM Response: "The paper describes the method using mathematical formulations and descriptive text, but it does not include a clearly labeled pseudocode or algorithm block."

Open Source Code: Yes
LLM Response: "Our code and dataset are available at https://github.com/donglgcn/BalancEdit/tree/MMOKVQA."

Open Datasets: Yes
LLM Response: "Our code and dataset are available at https://github.com/donglgcn/BalancEdit/tree/MMOKVQA. ... 1) MMEDIT (Cheng et al., 2023), the first multi-modal model editing dataset ... 2) We introduce a new dataset, OKEDIT, based on the OKVQA dataset (Marino et al., 2019)"

Dataset Splits: Yes
LLM Response: "Table 2. Statistics comparison between MMEDIT and our OKEDIT. MMEDIT: Train 6036, Test 2093. OKEDIT: Train 9009, Test 5046."

Hardware Specification: Yes
LLM Response: "We trained all methods using a variety of GPUs, including 24GB NVIDIA RTX A5000s, 40GB NVIDIA A100s, and 80GB NVIDIA A100s. Timing experiments are reported from experiments performed on an NVIDIA RTX A100 GPU."

Software Dependencies: No
LLM Response: "The paper mentions using the Adam optimizer (Diederik, 2014) but does not provide specific version numbers for software libraries or frameworks like Python, PyTorch, or TensorFlow."

Experiment Setup: Yes
LLM Response: "In our comparisons of Finetuning, MEND and GRACE, we explore learning rates of 1.0, 1e-1, 1e-2, 1e-3, 1e-4, and 1e-5. We observe that Finetuning, Memory, and MEND perform best with 1e-2. ... The batch size is consistently 1. ... We select α using a small held-out set of only 5 unrelated samples. ... α=0.2 for MiniGPT-4 for all evaluations."
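The learning-rate sweep quoted above can be sketched as a simple grid enumeration. This is a minimal illustrative sketch, not the authors' code: the config-dict structure and the `build_sweep` helper are assumptions, while the method names, learning rates, batch size, and optimizer come from the quoted setup.

```python
from itertools import product

# Learning rates explored in the paper's comparisons (quoted above).
LEARNING_RATES = [1.0, 1e-1, 1e-2, 1e-3, 1e-4, 1e-5]
BATCH_SIZE = 1  # the paper reports a batch size of 1 throughout

# Methods compared in the quoted sweep.
METHODS = ["Finetuning", "MEND", "GRACE"]

def build_sweep(methods=METHODS, lrs=LEARNING_RATES):
    """Enumerate one hypothetical config dict per (method, learning rate) pair."""
    return [
        {"method": m, "lr": lr, "batch_size": BATCH_SIZE, "optimizer": "Adam"}
        for m, lr in product(methods, lrs)
    ]

if __name__ == "__main__":
    sweep = build_sweep()
    print(len(sweep))  # 3 methods x 6 learning rates = 18 configs
    # The paper reports 1e-2 as the best-performing rate for
    # Finetuning, Memory, and MEND.
    print([c["method"] for c in sweep if c["lr"] == 1e-2])
```

Each returned dict would then be handed to whatever training loop the reproduction uses; only the grid itself is specified by the paper.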