LLMEval: A Preliminary Study on How to Evaluate Large Language Models

Authors: Yue Zhang, Ming Zhang, Haipeng Yuan, Shichun Liu, Yongyao Shi, Tao Gui, Qi Zhang, Xuanjing Huang

AAAI 2024

Reproducibility Variable Result LLM Response
Research Type Experimental In this paper, we analyze evaluation methods by comparing various criteria with both manual and automatic evaluation, utilizing onsite annotators, crowd-sourced annotators, public annotators, and GPT-4, with different scoring methods and ranking systems (an illustrative ranking sketch follows this table). We propose a new dataset, LLMEval, and conduct evaluations on 20 LLMs. A total of 2,186 individuals participated, leading to the generation of 243,337 manual annotations and 57,511 automatic evaluation results. We perform comparisons and analyses of different settings and draw 10 conclusions that can provide some insights for evaluating LLMs in the future.
Researcher Affiliation Academia 1 School of Computer Science, Fudan University, Shanghai, China; 2 Institute of Modern Languages and Linguistics, Fudan University, Shanghai, China; 3 Shanghai Advanced Institute of Finance, Shanghai Jiao Tong University, Shanghai, China
Pseudocode No The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code No The paper states that the dataset and the evaluation results are publicly available at https://github.com/llmeval, but it does not release source code for the evaluation pipeline itself.
Open Datasets Yes We propose a new dataset, LLMEval and conduct evaluations on 20 LLMs... The dataset and the results are publicly available at https://github.com/llmeval.
Dataset Splits No The paper does not provide specific dataset split information (exact percentages, sample counts, or detailed splitting methodology) for training, validation, or testing their own models. It describes creating datasets for evaluation and internal consistency checks, but not traditional splits.
Hardware Specification No The paper mentions the resources consumed by the evaluation but does not provide hardware specifications such as GPU/CPU models, memory, or compute infrastructure.
Software Dependencies No The paper mentions using GPT-4 for automatic evaluation but does not list concrete software dependencies or version numbers.
Experiment Setup No The paper does not contain specific experimental setup details, such as concrete hyperparameter values or training configurations for models developed or tuned by the authors. The paper focuses on evaluating existing LLMs.
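The ranking systems mentioned in the Research Type row can be illustrated with a minimal sketch. The Python snippet below computes an Elo-style ranking from pairwise comparison records; the function name, record layout, K-factor, and example judgments are assumptions made for illustration and are not taken from the LLMEval paper or its released data.

```python
# Illustrative sketch only: a minimal Elo-style ranking built from hypothetical
# pairwise judgments. All names and parameters here are assumptions, not the
# LLMEval paper's actual implementation or data format.
from collections import defaultdict

def elo_rankings(pairwise_results, k=16.0, base=1000.0):
    """pairwise_results: iterable of (model_a, model_b, outcome), where
    outcome is 1.0 if model_a wins, 0.0 if model_b wins, 0.5 for a tie."""
    ratings = defaultdict(lambda: base)
    for model_a, model_b, outcome in pairwise_results:
        # Expected score of model_a under the standard Elo logistic formula.
        expected_a = 1.0 / (1.0 + 10 ** ((ratings[model_b] - ratings[model_a]) / 400.0))
        # Update both ratings toward the observed outcome.
        ratings[model_a] += k * (outcome - expected_a)
        ratings[model_b] += k * ((1.0 - outcome) - (1.0 - expected_a))
    return sorted(ratings.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    # Hypothetical annotations: each tuple is one pairwise judgment.
    judgments = [
        ("model_x", "model_y", 1.0),
        ("model_y", "model_z", 0.5),
        ("model_x", "model_z", 1.0),
    ]
    for model, rating in elo_rankings(judgments):
        print(f"{model}: {rating:.1f}")
```

Note that Elo updates are order-dependent, so rankings derived from pairwise comparisons can shift depending on the sequence in which annotations arrive; a simpler points-based scoring (a fixed number of points per win) avoids this but discards margin-of-victory information.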