Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
ActionReasoningBench: Reasoning about Actions with and without Ramification Constraints
Authors: Divij Handa, Pavel Dolin, Shrinidhi Kumbhar, Tran Cao Son, Chitta Baral
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We introduce a new diagnostic benchmark, ActionReasoningBench, which encompasses 8 domains and includes questions for up to 19 action sequences. This benchmark rigorously evaluates LLMs across six key RAC dimensions... LLMs demonstrate average accuracy rates of 73.55%... Our evaluation of state-of-the-art LLMs, including both open-source and commercial models, reveals challenges across all RAC dimensions... |
| Researcher Affiliation | Academia | Divij Handa¹, Pavel Dolin¹, Shrinidhi Kumbhar¹, Tran Cao Son², Chitta Baral¹ — ¹Arizona State University, ²New Mexico State University |
| Pseudocode | No | The paper describes methods and processes like the 'question generation pipeline' (Figure 1) and 'fine-tuning procedure' in paragraph text, but does not present any formal pseudocode or algorithm blocks with numbered steps or explicit 'Algorithm' labels. |
| Open Source Code | No | The paper evaluates existing open-source models like Llama-3.1-8B-Instruct and Llama-3.1-70B-Instruct, but does not provide a statement or link for the open-sourcing of the benchmark creation code or the experimental setup code developed by the authors. |
| Open Datasets | No | We introduce a new diagnostic benchmark, ActionReasoningBench... The benchmark was divided into two parts: one for training and the other for testing the LLMs, ensuring a balanced representation of question categories across the 8 domains. However, the paper does not provide explicit access information (link, DOI, repository) for the ActionReasoningBench dataset itself. |
| Dataset Splits | Yes | The benchmark was divided into two parts: one for training and the other for testing the LLMs... Table 2 provides an overview of the distribution of questions and their corresponding categories across both the training and testing sets. The test set contains 3,498 questions, including 2,195 binary and 1,303 free-answer questions. |
| Hardware Specification | Yes | All experiments were executed using 8 H100 GPUs. |
| Software Dependencies | No | The paper mentions using the 'AdamW optimizer' and fine-tuning the 'Llama-3.1-8B model', but does not provide specific version numbers for software libraries, programming languages (e.g., Python), or frameworks (e.g., PyTorch, TensorFlow). |
| Experiment Setup | Yes | We fine-tuned Llama-3.1-8B separately for binary (true/false) and free-answer questions, using 6 epochs for the former and 18 epochs for the latter. The AdamW optimizer was used, with a batch size of 4 and gradient accumulation steps set to 8 for both of the training setups. |
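The reported hyperparameters imply an effective batch size of 32 (batch size 4 × 8 gradient-accumulation steps) per optimizer step. A minimal sketch capturing the described setup; the `FinetuneConfig` class and all variable names are illustrative assumptions, not from the paper, and only the numeric values are taken from the quoted text:

```python
# Hypothetical sketch of the fine-tuning setup quoted in "Experiment Setup".
# Only the numeric values (epochs, batch size, accumulation steps) come from
# the paper; the structure here is an assumption for illustration.
from dataclasses import dataclass

@dataclass
class FinetuneConfig:
    model: str
    epochs: int
    batch_size: int = 4        # per-step batch size (reported)
    grad_accum_steps: int = 8  # gradient accumulation steps (reported)
    optimizer: str = "AdamW"   # reported optimizer

    @property
    def effective_batch_size(self) -> int:
        # Samples contributing to each optimizer update.
        return self.batch_size * self.grad_accum_steps

# Two separate runs, as described: binary vs. free-answer questions.
binary_cfg = FinetuneConfig(model="Llama-3.1-8B", epochs=6)
free_answer_cfg = FinetuneConfig(model="Llama-3.1-8B", epochs=18)

print(binary_cfg.effective_batch_size)  # 32
```

Both runs share the same optimizer and batching; only the epoch count differs between the two question formats.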