Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

BLADE: Enhancing Black-Box Large Language Models with Small Domain-Specific Models

Authors: Haitao Li, Qingyao Ai, Jia Chen, Qian Dong, Zhijing Wu, Yiqun Liu

AAAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental In our experiments, we verify the effectiveness of BLADE on diverse LLMs and datasets across different domains. This shows the potential of BLADE as an effective and cost-efficient solution in adapting general LLMs for vertical domains. We conduct extensive experiments on two widely used datasets in the legal and medical domains. Table 1 presents the results from the baselines and BLADE on the JEC-QA dataset. Table 2 shows the performance of BLADE on the medical domain dataset MLEC-QA.
Researcher Affiliation Collaboration 1 Department of Computer Science and Technology, Tsinghua University, Beijing, China; 2 Institute for Internet Judiciary, Tsinghua University, Beijing, China; 3 Xiaohongshu Inc; 4 School of Computer Science and Technology, Beijing Institute of Technology
Pseudocode No The paper includes figures illustrating workflows (Figure 1, Figure 2, Figure 3) and mathematical formulations, but no explicit pseudocode or algorithm blocks with structured steps labeled as such.
Open Source Code Yes Code: https://github.com/CSHaitao/BLADE
Open Datasets Yes JEC-QA (Zhong et al. 2020) is the largest Chinese multiple-choice dataset in the legal domain. MLEC-QA (Li, Zhong, and Chen 2021) is a multiple-choice biomedical QA dataset.
Dataset Splits Yes JEC-QA (Zhong et al. 2020) is the largest Chinese multiple-choice dataset in the legal domain. The legal questions in JEC-QA require high-level reasoning ability and are divided into two types: Knowledge-Driven Questions (KD-questions) and Case-Analysis Questions (CA-questions). There are 26,365 questions in JEC-QA, 5,289 of which comprise the test set.
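As a quick sanity check on the split sizes quoted above, the implied train/test breakdown can be computed directly (a hypothetical helper, not from the paper):

```python
# Sanity-check the JEC-QA split sizes quoted in the excerpt above.
TOTAL_QUESTIONS = 26_365  # total questions in JEC-QA
TEST_QUESTIONS = 5_289    # size of the test set

train_questions = TOTAL_QUESTIONS - TEST_QUESTIONS
test_fraction = TEST_QUESTIONS / TOTAL_QUESTIONS

print(train_questions)          # remaining (non-test) questions
print(round(test_fraction, 3))  # test set share of the corpus
```

This gives 21,076 non-test questions, with the test set covering roughly 20% of the corpus.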
Hardware Specification No The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or other machine specifications) used for running the experiments.
Software Dependencies No The paper mentions using BLOOMZ as the small LM, but does not list specific software libraries or frameworks with their version numbers (e.g., Python, PyTorch, CUDA versions) that are critical for reproducibility.
Experiment Setup No The paper describes methodologies such as Bayesian Optimization and CMA-ES. It states that BLOOMZ-1b7 was used as the small LM and that some experiments used the 'same training parameters'. However, it does not provide concrete hyperparameter values (e.g., learning rate, batch size, number of epochs, specific optimizer settings, or convergence thresholds) in the main text.
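For readers unfamiliar with the class of black-box optimizers the paper cites (Bayesian Optimization, CMA-ES), a toy (1+1) evolution strategy on a quadratic objective illustrates the basic loop. This is purely illustrative: the objective, step size, and iteration budget below are made up and are not the paper's configuration or an actual CMA-ES implementation.

```python
import random

def one_plus_one_es(objective, x0, sigma=0.5, iterations=200, seed=0):
    """Minimal (1+1) evolution strategy: mutate, keep the better point.
    CMA-ES (as cited in the paper) additionally adapts a full covariance
    matrix; this toy version only illustrates the black-box loop."""
    rng = random.Random(seed)
    best_x, best_f = list(x0), objective(x0)
    for _ in range(iterations):
        candidate = [xi + rng.gauss(0, sigma) for xi in best_x]
        f = objective(candidate)
        if f < best_f:  # greedy selection: keep the improvement
            best_x, best_f = candidate, f
    return best_x, best_f

# Toy objective: sphere function, minimized at the origin.
sphere = lambda x: sum(xi * xi for xi in x)
x_best, f_best = one_plus_one_es(sphere, [2.0, -1.5])
```

The relevance to reproducibility is that such optimizers are sensitive to exactly the settings the paper omits (step size, budget, seed), so the same method description can yield different results across runs.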