Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Automated Conjecturing II: Chomp and Reasoned Game Play
Authors: Alexander Bradford, J. Kain Day, Laura Hutchinson, Bryan Kaperick, Craig E. Larson, Matthew Mills, David Muncy, Nico Van Cleemput
JAIR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We demonstrate the use of a program that generates conjectures about positions of the combinatorial game Chomp: explanations of why certain moves are bad. These could be used in the design of a Chomp-playing program that gives reasons for its moves. We prove one of these Chomp conjectures, demonstrating that our conjecturing program can produce genuine Chomp knowledge. We include a proof of one of these generated statements as a demonstration of the potential high quality of the program's output. |
| Researcher Affiliation | Academia | Department of Mathematics and Applied Mathematics Virginia Commonwealth University Richmond, VA 23284, USA; Department of Applied Mathematics, Computer Science and Statistics Ghent University 9000 Ghent, Belgium |
| Pseudocode | No | The paper describes steps for generating conjectures in numbered-list format (1. Produce a stream of inequalities..., 2. Initialize a collection..., 3. Generate conjectures..., 4. Remove insignificant conjectures.), but these are descriptive text, not structured pseudocode or algorithm blocks. There are Python code snippets, but they are examples, not a full algorithm. |
| Open Source Code | Yes | Our program is open-source, and operates in Sage (a free and growing mathematical computing environment, similar to Maple, Matlab and Mathematica). The program, examples, and set-up instructions are available at: http://nvcleemp.github.io/conjecturing/ |
| Open Datasets | No | The paper mentions using 'a small database of both N-positions and P-positions' and 'Conjecturing was never given more than 60 examples total to use in any run of conjecture generation.' These are examples or a small database created by the authors for their work, not a publicly available dataset with concrete access information (link, DOI, specific citation for access). |
| Dataset Splits | No | The paper uses a small number of 'example game positions' and 'stored objects' for its conjecturing program, but does not describe any formal training/test/validation dataset splits typically associated with machine learning experiments. |
| Hardware Specification | Yes | Our C-language expression generator can generate more than 100 million expressions per second, depending on the complexity of the expressions, on a standard 2018 laptop (with 8 GB RAM and a 2.6 GHz core). |
| Software Dependencies | No | The paper mentions 'Sage', 'Python', and a 'C-language expression generator', but does not provide specific version numbers for any of these software components or libraries. |
| Experiment Setup | No | The paper describes the general approach and methods used by the conjecturing program, including the Dalmatian heuristic and invariant generation. However, it does not provide specific experimental setup details such as hyperparameters, training configurations, or system-level settings typically found in empirical studies. |
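For readers unfamiliar with the Dalmatian heuristic mentioned in the Experiment Setup row, the sketch below illustrates its core acceptance test: a candidate upper-bound conjecture is kept only if it holds on every stored example and is strictly tighter than all previously accepted conjectures on at least one example. This is a minimal, hypothetical Python sketch; the function and variable names are illustrative assumptions, not the authors' Sage implementation.

```python
# Hypothetical sketch of the Dalmatian significance test: accept a candidate
# upper-bound conjecture only if (1) it is true on every stored example and
# (2) it gives a strictly tighter bound than all accepted conjectures for at
# least one example. All names here are illustrative.

def dalmatian_accept(candidate, accepted, examples, invariant):
    """Decide whether `candidate` (an upper bound on `invariant`) is kept."""
    # Truth test: the bound must hold on every stored example.
    if any(invariant(x) > candidate(x) for x in examples):
        return False
    # Significance test: strictly tighter than every accepted bound somewhere.
    return any(
        candidate(x) < min((g(x) for g in accepted), default=float("inf"))
        for x in examples
    )

# Toy usage: examples are integers and the "invariant" is the identity map.
examples = [1, 2, 3]
accepted = [lambda x: x + 2]  # a previously accepted bound
print(dalmatian_accept(lambda x: x + 1, accepted, examples, lambda x: x))  # True: tighter everywhere
print(dalmatian_accept(lambda x: x + 3, accepted, examples, lambda x: x))  # False: never tighter
```

In the actual system the examples would be Chomp positions and the invariants game-theoretic quantities computed on them, but the accept/reject logic follows this shape.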