A SAT-based Resolution of Lam’s Problem
Authors: Curtis Bright, Kevin K. H. Cheung, Brett Stevens, Ilias Kotsireas, Vijay Ganesh. Pages 3669–3676.
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this paper, we resolve Lam's problem by translating the problem into Boolean logic and using satisfiability (SAT) solvers to produce nonexistence certificates that can be verified by a third party. Our work uncovered consistency issues in both previous searches, highlighting the difficulty of relying on special-purpose search code for nonexistence results. |
| Researcher Affiliation | Academia | Curtis Bright (University of Windsor, Carleton University), Kevin K. H. Cheung (Carleton University), Brett Stevens (Carleton University), Ilias Kotsireas (Wilfrid Laurier University), Vijay Ganesh (University of Waterloo) |
| Pseudocode | No | The paper describes the SAT encoding process and various algorithms used for symmetry removal and block selection in prose, but it does not include any clearly labeled pseudocode or algorithm blocks with structured steps formatted like code. |
| Open Source Code | Yes | Our SAT instances are generated by Python scripts that are freely available as a part of the Math Check project. The code, along with Bash scripts to generate and check the certificates, is available from uwaterloo.ca/mathcheck. |
| Open Datasets | No | The paper does not use a traditional dataset in the context of train/validation/test splits. It focuses on generating and solving SAT instances derived from a mathematical problem (Lam's problem) and verifying non-existence, rather than training models on pre-existing data. Therefore, no public dataset access information is relevant or provided. |
| Dataset Splits | No | The paper does not utilize or describe traditional dataset splits (e.g., train, validation, test) because its research methodology involves generating and solving SAT instances for a mathematical proof, not training or validating a machine learning model on a dataset. |
| Hardware Specification | Yes | The computations were performed on a cluster of Intel E5 cores at 2.1 GHz running Linux and using at most 4GB of memory. |
| Software Dependencies | No | The paper mentions several software tools, including MapleSAT, GRATgen, CaDiCaL, march_cu, Traces, and DRAT-trim, but it does not specify concrete version numbers for these components; it only provides the year of their associated publications. |
| Experiment Setup | Yes | The cutoff bound was experimentally chosen by randomly selecting up to several hundred instances from each case and determining a bound that minimizes the sum of the cubing and conquering times. |
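The review notes that the paper's SAT instances are generated by Python scripts. As a rough illustration of what such an encoding step involves (not the actual MathCheck code), the sketch below emits an exactly-one constraint, a building block common to incidence-style SAT encodings, in standard DIMACS CNF form; the function names are hypothetical.

```python
from itertools import combinations

def exactly_one(vars):
    """CNF clauses (lists of signed ints) forcing exactly one of `vars` true:
    one at-least-one clause plus pairwise at-most-one clauses."""
    clauses = [list(vars)]
    clauses += [[-a, -b] for a, b in combinations(vars, 2)]
    return clauses

def to_dimacs(clauses, num_vars):
    """Render clauses in DIMACS CNF: a 'p cnf' header, then 0-terminated clauses."""
    lines = [f"p cnf {num_vars} {len(clauses)}"]
    lines += [" ".join(map(str, c)) + " 0" for c in clauses]
    return "\n".join(lines)

print(to_dimacs(exactly_one([1, 2, 3]), 3))
```

A real encoding of Lam's problem would impose many such constraints on the entries of a candidate incidence matrix; this fragment only shows the output format a SAT solver consumes.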
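The experiment-setup row describes choosing a cube-and-conquer cutoff bound that minimizes the sum of cubing and conquering times over sampled instances. A minimal sketch of that selection rule, with entirely hypothetical timing data and function name:

```python
def best_cutoff(timings):
    """Pick the cutoff whose measured (cubing_time, conquer_time) pair,
    gathered on sampled instances, has the smallest total."""
    return min(timings, key=lambda c: sum(timings[c]))

# Hypothetical measurements: cutoff -> (cubing seconds, conquering seconds).
sample = {10: (5.0, 120.0), 20: (40.0, 30.0), 30: (200.0, 10.0)}
print(best_cutoff(sample))
```

This mirrors the trade-off the paper describes: deeper cubing costs more up front but leaves easier subproblems to conquer.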