Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Law of the Weakest Link: Cross Capabilities of Large Language Models

Authors: Ming Zhong, Aston Zhang, Xuewei Wang, Rui Hou, Wenhan Xiong, Chenguang Zhu, Zhengxing Chen, Liang Tan, Chloe Bi, Mike Lewis, Sravya Popuri, Sharan Narang, Melanie Kambadur, Dhruv Mahajan, Sergey Edunov, Jiawei Han, Laurens van der Maaten

ICLR 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our findings reveal that current LLMs consistently exhibit the Law of the Weakest Link, where cross-capability performance is significantly constrained by the weakest component. Across 58 cross-capability scores from 17 models, 38 scores are lower than all individual capabilities, while 20 fall between the strong and weak capabilities but closer to the weaker one.
Researcher Affiliation | Collaboration | (1) Llama Team, AI @ Meta; (2) University of Illinois Urbana-Champaign
Pseudocode | Yes | To explore the impact of altering individual capabilities without significantly affecting others, we propose a principle-based method that iteratively refines the system prompt to selectively boost specific capabilities of LLMs, based on the responses and evaluations on CROSSEVAL. This approach allows for controlled investigation into cross-capability performance dynamics. Our method iteratively refines the prompt through operations such as adding, replacing, revising, or keeping principles. After 100 iterations, GPT-4o generates a tailored principle-based prompt to guide the LLM in prioritizing key performance aspects like format adherence, problem-solving strategies, or error avoidance. Detailed prompts used are available in Table 41 in Appendix F.1.
Open Source Code | Yes | The code, benchmarks, and evaluations are available on our project website.
Open Datasets | Yes | Building on these definitions, we introduce CROSSEVAL, a benchmark comprising 1,400 human-annotated prompts, with 100 prompts for each individual and cross capability. [...] The code, benchmarks, and evaluations are available on our project website.
Dataset Splits | Yes | This process ensures the difficulty distribution follows the standards used in Llama 3's human evaluations, with 10% easy, 30% medium, and 60% hard prompts (Llama Team, 2024).
Hardware Specification | No | No specific hardware (GPU/CPU models, memory) used for running the experiments is explicitly mentioned in the paper. It refers to specific LLM models (e.g., the Llama 3.1 405B FP8 version) but not the underlying physical hardware.
Software Dependencies | No | The paper mentions various LLM models (e.g., GPT-4o mini, Llama 3.1 405B, Claude 3.5 Sonnet) that were either evaluated or used as evaluators. However, it does not provide specific software dependencies, such as programming languages, libraries, or frameworks with version numbers, used to conduct the experiments.
Experiment Setup | Yes | For consistency, we use the GPT-4o-05-13 model as the evaluator, with temperature set to 0 and seed set to 42 to ensure deterministic scoring. Each model's responses are generated using their default decoding parameters to achieve optimal performance.
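The Law of the Weakest Link categorization quoted in the Research Type row (38 cross-capability scores below both individual capabilities, 20 between the two but closer to the weaker) can be illustrated with a small sketch. This is a hypothetical helper for exposition, not the authors' code; the function name and category labels are assumptions.

```python
def classify_cross_score(strong: float, weak: float, cross: float) -> str:
    """Place a cross-capability score relative to its two component
    capability scores, mirroring the categories described in the paper.

    Hypothetical helper, not the authors' implementation.
    """
    lo, hi = min(strong, weak), max(strong, weak)
    if cross < lo:
        return "below both capabilities"
    if cross > hi:
        return "above both capabilities"
    # Between the two component scores: nearer the weaker or the stronger?
    if cross - lo <= hi - cross:
        return "between, closer to weaker"
    return "between, closer to stronger"

# Illustrative values only: a cross score of 3.1 against individual
# capability scores of 4.0 and 3.4 falls below both.
print(classify_cross_score(4.0, 3.4, 3.1))  # → below both capabilities
```

Applying such a check to the 58 cross-capability scores would reproduce the reported tallies, with most cross scores landing in the "below both" or "closer to weaker" buckets.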