Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
On Super Strong ETH
Authors: Nikhil Vyas, Ryan Williams
JAIR 2021 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We give a randomized algorithm which refutes the Super-Strong ETH for the case of random k-SAT and planted k-SAT for any clause-to-variable ratio. For example, given any random k-SAT instance F with n variables and m clauses, our algorithm decides satisfiability for F in 2^(n(1 − Ω(log k)/k)) time, with high probability (over the choice of the formula and the randomness of the algorithm). It turns out that a well-known algorithm from the literature on SAT algorithms does the job: the PPZ algorithm of Paturi, Pudlák, and Zane (1999). The paper primarily focuses on algorithm design, theoretical analysis, complexity bounds, and proofs (e.g., Theorems 1–4 and Lemmas 6–11), without presenting empirical experimental results from data analysis. |
| Researcher Affiliation | Academia | Nikhil Vyas EMAIL Ryan Williams EMAIL CSAIL, MIT Cambridge, MA 02139 USA. Both authors are affiliated with MIT, a university, and have .edu email addresses. |
| Pseudocode | Yes | Algorithm 1 Algorithm for planted k-SAT — 1: procedure Simple-PPZ(F) 2: i ← 1 3: while i ≤ n do 4: if there is a unit clause C in the formula then 5: Assign the variable in C so that C is true 6: else if x_i is unassigned then 7: Assign x_i randomly. Set i ← i + 1 8: else 9: Set i ← i + 1 10: Output the assignment if it satisfies F. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code, nor does it provide links to code repositories or supplementary materials containing code. |
| Open Datasets | No | The paper focuses on 'random k-SAT' and 'planted k-SAT' instances, which are problem distributions generated according to specified rules (e.g., 'each of the m clauses is drawn uniformly and independently'). It does not use or provide access information for any pre-existing public datasets. |
| Dataset Splits | No | The paper deals with theoretically generated k-SAT instances rather than pre-existing datasets. Therefore, the concept of training/test/validation splits in the typical empirical sense is not applicable, and no such information is provided. |
| Hardware Specification | No | The paper is theoretical in nature, focusing on algorithmic complexity and proofs. It does not describe any empirical experiments or the specific hardware used to run them. |
| Software Dependencies | No | The paper describes theoretical algorithms (e.g., PPZ algorithm) and their complexity. It does not mention any specific software implementations or their version numbers that would constitute software dependencies for replication. |
| Experiment Setup | No | As a theoretical paper, it does not describe an experimental setup with hyperparameters, optimizer settings, or other system-level training configurations, as no empirical experiments are conducted. |
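The Simple-PPZ pseudocode quoted in the table can be sketched as a short Python function. This is an illustrative reconstruction, not the authors' code (the paper releases none): the clause encoding (DIMACS-style signed integers), the `simplify` helper, and the single-pass interface are assumptions made for the sketch. One pass may fail; in the paper's analysis the algorithm succeeds with high probability over repeated random passes.

```python
import random

def simple_ppz(clauses, n, rng=random):
    """One randomized pass of the Simple-PPZ heuristic (sketch).

    clauses: list of clauses, each a list of nonzero ints; literal v means
             x_|v| must be True if v > 0, False if v < 0 (DIMACS-style).
    n:       number of variables x_1 .. x_n.
    Returns a satisfying assignment (dict var -> bool) or None if this
    pass reaches a contradiction.
    """
    assignment = {}

    def simplify(cls):
        # Drop clauses already satisfied and literals already falsified.
        out = []
        for clause in cls:
            kept, sat = [], False
            for lit in clause:
                v = abs(lit)
                if v in assignment:
                    if assignment[v] == (lit > 0):
                        sat = True
                        break
                else:
                    kept.append(lit)
            if not sat:
                if not kept:
                    return None  # empty clause: contradiction
                out.append(kept)
        return out

    i = 1
    cls = simplify(clauses)
    while i <= n:
        if cls is None:
            return None
        unit = next((c for c in cls if len(c) == 1), None)
        if unit is not None:
            lit = unit[0]
            assignment[abs(lit)] = lit > 0      # satisfy the unit clause
        elif i not in assignment:
            assignment[i] = rng.random() < 0.5  # assign x_i randomly
            i += 1
        else:
            i += 1                              # x_i already set by a unit
        cls = simplify(clauses)
    return assignment if cls == [] else None
```

The key feature the paper exploits is the unit-clause branch: whenever a variable's value is forced, PPZ spends no random bit on it, which is what yields the savings over brute-force 2^n on random and planted instances.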