Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Treewidth-Aware Complexity in ASP: Not all Positive Cycles are Equally Hard
Authors: Jorge Fandinno, Markus Hecher | pp. 6312-6320
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we refine this recent result and show that consistency for ASP can be decided in exponential time in k·log(ι), where ι is a novel measure bounded by both the treewidth k and the size ℓ of the largest strongly-connected component of the positive dependency graph of the program. We provide a treewidth-aware reduction from ASP to SAT that adheres to the above limit. We present a treewidth-aware reduction, capable of increasing tightness at the cost of higher treewidth, which increases up to O(k·log(ι)). Further, we establish an algorithm for consistency, running in time 2^O(k·log(ι)) · poly(n). |
| Researcher Affiliation | Academia | Jorge Fandinno (1,2) and Markus Hecher (2,3); (1) University of Nebraska Omaha, USA; (2) University of Potsdam, Germany; (3) TU Wien, Austria; EMAIL, EMAIL |
| Pseudocode | Yes | The paper provides structured rules (e.g., 'Rules (1)-(12)', 'Rules (13)-(23)') that are formatted as step-by-step logical transformations and definitions, akin to an algorithm block, for example: '{x} for each x ∈ χ(t) (1); B⁺_r, B⁻_r, H_r for each r ∈ Π_t (2)' |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository for the described methodology. It only mentions 'Currently, we are performing practical analysis of our provided reductions.' |
| Open Datasets | No | The paper is theoretical and focuses on complexity analysis and reductions for logic programs; it does not refer to any datasets or provide information about their public availability. |
| Dataset Splits | No | The paper is theoretical and does not describe any experimental setups or data splits for training, validation, or testing. |
| Hardware Specification | No | The paper is theoretical and does not describe any hardware used for running experiments. |
| Software Dependencies | No | The paper is theoretical and does not provide specific software dependencies with version numbers for replication. |
| Experiment Setup | No | The paper is theoretical and does not describe any experimental setup details such as hyperparameters or system-level training settings. |
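The measure ι quoted above is bounded by the size of the largest strongly-connected component (SCC) of a program's positive dependency graph. As an illustration only (the paper itself is theoretical and ships no code), the sketch below computes that SCC size for a directed graph given as an edge list, using Kosaraju's two-pass algorithm; the atom names in the usage example are hypothetical.

```python
from collections import defaultdict

def largest_scc_size(edges):
    """Return the size of the largest strongly-connected component
    of the directed graph given as a list of (u, v) edges."""
    fwd, rev, nodes = defaultdict(list), defaultdict(list), set()
    for u, v in edges:
        fwd[u].append(v)
        rev[v].append(u)
        nodes.update((u, v))

    # Pass 1: iterative DFS on the forward graph, recording finish order.
    order, visited = [], set()
    for start in nodes:
        if start in visited:
            continue
        visited.add(start)
        stack = [(start, iter(fwd[start]))]
        while stack:
            node, it = stack[-1]
            for nxt in it:
                if nxt not in visited:
                    visited.add(nxt)
                    stack.append((nxt, iter(fwd[nxt])))
                    break
            else:
                order.append(node)
                stack.pop()

    # Pass 2: DFS on the reversed graph in reverse finish order;
    # each tree reached this way is one SCC.
    assigned, best = set(), 0
    for start in reversed(order):
        if start in assigned:
            continue
        assigned.add(start)
        size, stack = 0, [start]
        while stack:
            node = stack.pop()
            size += 1
            for nxt in rev[node]:
                if nxt not in assigned:
                    assigned.add(nxt)
                    stack.append(nxt)
        best = max(best, size)
    return best

# Hypothetical positive dependency graph: atoms a and b depend on each
# other (one SCC of size 2), atom c hangs off it (an SCC of size 1).
print(largest_scc_size([("a", "b"), ("b", "a"), ("a", "c")]))  # -> 2
```

For a tight program (an acyclic positive dependency graph) every SCC is a single atom, so the function returns 1.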