LLM Evaluators Recognize and Favor Their Own Generations

Authors: Arjun Panickssery, Samuel Bowman, Shi Feng

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our main findings are as follows:" and "We base our experiments on 2,000 randomly sampled news articles from two datasets: XSUM (Narayan et al., 2018) and CNN/Daily Mail (Nallapati et al., 2016)"
Researcher Affiliation | Collaboration | "Arjun Panickssery (MATS), Samuel R. Bowman (New York University, Anthropic PBC), Shi Feng (George Washington University); arjun.panickssery@gmail.com"
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "Our code for evaluating GPT-4, GPT-3.5, and Llama 2, as well as for fine-tuning Llama 2, is available at https://bit.ly/llm_self_recognition."
Open Datasets | Yes | "We base our experiments on 2,000 randomly sampled news articles from two datasets: XSUM (Narayan et al., 2018) and CNN/Daily Mail (Nallapati et al., 2016)"
Dataset Splits | No | The paper mentions "500 training articles" and states that "The remaining 500 articles and associated summaries are used for evaluation", but it does not explicitly define a separate validation split or its purpose. (A hedged loading-and-split sketch follows this table.)
Hardware Specification | No | Section 3.1 mentions that "The Llama models are quantized to 8 bits and fine-tuned for one epoch", but the paper does not specify the hardware (e.g., GPU/CPU models, memory) used for these experiments.
Software Dependencies | No | The paper names the specific LLMs used ("Llama-2-7b-chat", "GPT-3.5", "GPT-4") and "Adam optimization", but does not provide version numbers for the underlying software frameworks or libraries (e.g., Python, PyTorch, CUDA).
Experiment Setup | Yes | "The evaluators are trained to predict the final token, representing the correct choice among two options, using supervised learning with cross-entropy loss. ... The Llama models are quantized to 8 bits and fine-tuned for one epoch using Adam optimization and a learning rate of 5.0 × 10⁻⁵." (A minimal fine-tuning sketch follows this table.)
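
For context on the data pipeline referenced in the Open Datasets and Dataset Splits rows, the following is a minimal sketch of how the 2,000 sampled articles and the 500-article training subset could be reproduced. The Hugging Face dataset identifiers ("EdinburghNLP/xsum" and "cnn_dailymail" with config "3.0.0"), the even split across the two corpora, and the random seed are assumptions; the paper does not publish its loading code in the text.

```python
# Hedged sketch of the article sampling and train/evaluation split described in
# the paper. Dataset identifiers, the even split across corpora, and the seed
# are assumptions, not details confirmed by the paper.
import random
from datasets import load_dataset

random.seed(0)  # seed is an assumption; the paper only says "randomly sampled"

xsum = load_dataset("EdinburghNLP/xsum", split="train")
cnn_dm = load_dataset("cnn_dailymail", "3.0.0", split="train")

# 2,000 articles total, per the paper; drawing 1,000 from each corpus is an assumption.
articles = (random.sample(list(xsum["document"]), 1000)
            + random.sample(list(cnn_dm["article"]), 1000))
random.shuffle(articles)

# 500 articles for fine-tuning; the paper states the remaining 500 articles and
# their summaries are used for evaluation.
train_articles = articles[:500]
eval_articles = articles[500:1000]
```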
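The Experiment Setup row can likewise be read as a concrete recipe. Below is a minimal sketch, assuming a Hugging Face transformers workflow with Llama-2-7b-chat in which only the final answer token contributes to the cross-entropy loss, with Adam at a learning rate of 5.0 × 10⁻⁵ for one epoch as reported. The prompt format and the toy training pair are assumptions, and the 8-bit quantization step is omitted here because training int8 weights directly typically requires a parameter-efficient adapter such as LoRA, a detail the paper does not state.

```python
# Hedged sketch of the fine-tuning recipe in Section 3.1: supervise only the
# final token (the "1"/"2" choice) with cross-entropy, Adam at lr 5e-5, one epoch.
# Prompt format and data handling are assumptions; 8-bit quantization is omitted
# (see the lead-in above).
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
)

def build_example(prompt: str, answer: str) -> dict:
    """Tokenize prompt + answer and mask everything except the final token."""
    ids = tokenizer(prompt + answer, return_tensors="pt").input_ids[0]
    labels = torch.full_like(ids, -100)  # -100 is ignored by the CE loss
    labels[-1] = ids[-1]                 # train on the final (choice) token only
    return {"input_ids": ids, "labels": labels}

# Hypothetical toy pair; the real pairs are built from the 500 training articles.
train_pairs = [
    ("Summary 1: ...\nSummary 2: ...\nWhich summary did you write? Answer 1 or 2: ", "1"),
]
loader = DataLoader([build_example(p, a) for p, a in train_pairs],
                    batch_size=1, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=5.0e-5)  # lr from the paper
model.train()
for batch in loader:  # one epoch, as reported
    out = model(input_ids=batch["input_ids"].to(model.device),
                labels=batch["labels"].to(model.device))
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```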