Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Invariant Graph Propagation in Constraint-Based Local Search
Authors: Frej Knutar Lewander, Pierre Flener, Justin Pearson
JAIR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Measuring the throughput (number of probes per second) of the algorithms for various invariant graphs, validating our recommendations (Section 4). We developed a CBLS solver, called Atlantis, that can propagate an invariant graph in both input-to-output and output-to-input styles. For the output-to-input style, the solver supports the total, ad-hoc, and prepared marking strategies. We ran our experiments on a desktop computer with an ASUS PRIME Z590-P motherboard, a 3.5 GHz Intel Core i9 11900K processor, and four 16 GB 3200 MT/s DDR4 memory modules, running Ubuntu 22.04.4 LTS with GCC (the GNU Compiler Collection) 11. The results are shown in Figure 9. |
| Researcher Affiliation | Academia | FREJ KNUTAR LEWANDER, PIERRE FLENER, and JUSTIN PEARSON, Uppsala University, Sweden. Authors' Contact Information: ... Uppsala University, Department of Information Technology, Uppsala, Sweden. |
| Pseudocode | Yes | Algorithm 1: The propagation of an invariant graph in output-to-input style. Algorithm 2: The propagation of an invariant graph in input-to-output style. |
| Open Source Code | Yes | The source code of Atlantis is publicly available at https://github.com/astra-uu-se/atlantis/ |
| Open Datasets | No | The difficulty and realism of the problem instances are thus unimportant, so we generated random parameter values instead of retrieving instances from existing repositories. |
| Dataset Splits | No | For each invariant graph model, we generated 9 instances, of sizes 16, 32, 64, 96, 128, 196, 256, 512, and 1024. The paper does not mention any training/test/validation splits, as it focuses on performance measurement rather than machine learning model evaluation. |
| Hardware Specification | Yes | We ran our experiments on a desktop computer with an ASUS PRIME Z590-P motherboard, a 3.5 GHz Intel Core i9 11900K processor, and four 16 GB 3200 MT/s DDR4 memory modules, running Ubuntu 22.04.4 LTS with GCC (the GNU Compiler Collection) 11. |
| Software Dependencies | Yes | running Ubuntu 22.04.4 LTS with GCC (the GNU Compiler Collection) 11. |
| Experiment Setup | Yes | For each invariant graph model, we generated 9 instances, of sizes 16, 32, 64, 96, 128, 196, 256, 512, and 1024. The throughput is measured for each instance when the input-to-output propagation style (denoted input-to-output ), output-to-input propagation style with ad-hoc marking (denoted output-to-input ad-hoc ), output-to-input propagation style with prepared marking (denoted output-to-input prepared ), and output-to-input propagation style with total marking (denoted output-to-input total ) are used. |
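To make the two propagation styles named in the table concrete, here is a minimal, hedged sketch of an invariant graph in Python. This is not Atlantis code and all names are illustrative: input-to-output propagation sweeps every invariant forward in topological order after an input change, while output-to-input propagation pulls values backwards on demand from a queried output. The marking strategies (total, ad-hoc, prepared) are not modeled here.

```python
# Toy invariant graph. Variables hold values; each invariant is a pure
# function from input variables to one output variable. All class and
# method names are hypothetical, chosen only for this illustration.

class InvariantGraph:
    def __init__(self):
        self.values = {}      # variable name -> current value
        self.invariants = []  # (inputs, output, fn), in topological order
        self.defined_by = {}  # output variable -> index into self.invariants

    def add_variable(self, name, value):
        self.values[name] = value

    def add_invariant(self, inputs, output, fn):
        self.defined_by[output] = len(self.invariants)
        self.invariants.append((inputs, output, fn))
        self.values[output] = fn(*(self.values[v] for v in inputs))

    def probe_input_to_output(self, changes):
        # Input-to-output style: commit the changed inputs, then sweep
        # all invariants forward in topological order so every defined
        # variable is fresh.
        self.values.update(changes)
        for inputs, output, fn in self.invariants:
            self.values[output] = fn(*(self.values[v] for v in inputs))
        return dict(self.values)

    def query_output_to_input(self, var, changes):
        # Output-to-input style: recompute only what 'var' transitively
        # depends on, pulling input values backwards on demand and
        # leaving the committed state untouched.
        if var in changes:
            return changes[var]
        if var not in self.defined_by:
            return self.values[var]
        inputs, _, fn = self.invariants[self.defined_by[var]]
        return fn(*(self.query_output_to_input(v, changes) for v in inputs))

# Example graph: total = a + b; violation = |total - 10|
g = InvariantGraph()
g.add_variable("a", 3)
g.add_variable("b", 4)
g.add_invariant(["a", "b"], "total", lambda a, b: a + b)
g.add_invariant(["total"], "violation", lambda t: abs(t - 10))

after = g.probe_input_to_output({"a": 6})        # commit a := 6, sweep forward
print(after["violation"])                        # -> 0  (6 + 4 = 10)
print(g.query_output_to_input("violation", {"b": 7}))  # -> 3  (6 + 7 = 13)
```

In a CBLS solver the output-to-input query corresponds to probing a candidate move without committing it, which is why the paper measures probes per second; the forward sweep corresponds to recomputing the whole graph after a committed assignment.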