Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
An Empirical Analysis of Uncertainty in Large Language Model Evaluations
Authors: Qiujie Xie, Qingqiu Li, Zhuohao Yu, Yuejie Zhang, Yue Zhang, Linyi Yang
ICLR 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this paper, we conduct extensive experiments involving 9 widely used LLM evaluators across 2 different evaluation settings to investigate the uncertainty in model-based LLM evaluations. |
| Researcher Affiliation | Collaboration | 1Zhejiang University 2School of Engineering, Westlake University 3School of Computer Science, Shanghai Key Lab of Intelligent Information Processing, Shanghai Collaborative Innovation Center of Intelligent Visual Computing, Fudan University 4Peking University 5Westlake Institute for Advanced Study 6University College London 7Huawei Noah's Ark Lab |
| Pseudocode | No | The paper describes methods and processes in natural language and refers to prompting strategies and fine-tuning. It does not contain explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code and data are released at: https://github.com/hasakiXie123/LLM-Evaluator-Uncertainty. |
| Open Datasets | Yes | on the MTBench (Zheng et al., 2023) and PandaLM (Wang et al., 2024b) test sets... using instruction instances collected from the Alpaca 52K dataset (Taori et al., 2023). |
| Dataset Splits | Yes | Ultimately, we obtain a fine-tuning set containing 694 high-quality instances and an OOD test set with 220 diverse instances. |
| Hardware Specification | Yes | The model is fine-tuned for 6 epochs on 2 NVIDIA A100-SXM4-80GB GPUs. |
| Software Dependencies | No | The paper mentions using the AdamW optimizer and various LLM models, but does not provide specific version numbers for software libraries or dependencies used for implementation. |
| Experiment Setup | Yes | During the fine-tuning phase of ConfiLM, we use the AdamW (Loshchilov, 2017) optimizer with a learning rate of 5e-5 and a cosine learning rate scheduler. The model is fine-tuned for 6 epochs on 2 NVIDIA A100-SXM4-80GB GPUs. |
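The hyperparameters quoted in the Experiment Setup row can be collected into a minimal config sketch. This is a hedged reconstruction for reference only; the field names are illustrative and are not taken from the paper's released code:

```python
# Fine-tuning hyperparameters as reported in the Experiment Setup row.
# Field names are illustrative, not the paper's actual configuration keys.
finetune_config = {
    "optimizer": "AdamW",        # Loshchilov, 2017
    "learning_rate": 5e-5,
    "lr_scheduler": "cosine",
    "num_epochs": 6,
    "hardware": "2x NVIDIA A100-SXM4-80GB",
}

print(finetune_config["learning_rate"])
```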