Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Designing Fast Absorbing Markov Chains
Authors: Stefano Ermon, Carla Gomes, Ashish Sabharwal, Bart Selman
AAAI 2014 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To evaluate the above approach, we generated 250 satisfiable instances from the random k-SAT ensemble (Selman, Mitchell, and Levesque 1996) for k = 3, 4. For each individual instance, we optimized the expected runtime (2) for a uniform initial distribution π_u as a function of the parameter θ ∈ [0, 1]^{m×m}. When we compare with the expected absorption time achieved by optimally setting the only parameter T in the transition probabilities defined as in (7), we get improvements ranging from 18% to 49% for 3-SAT instances, and from 18% to 81% for the harder 4-SAT instances (optimizing over MCs with 2^7 to 2^13 states). A scatter plot of the runtime improvement for several instances is reported in Figure 2 (left). |
| Researcher Affiliation | Collaboration | Stefano Ermon, Carla P. Gomes (Department of Computer Science, Cornell University, Ithaca, USA, EMAIL); Ashish Sabharwal (IBM Watson Research Center, Yorktown Heights, NY, USA, EMAIL); Bart Selman (Department of Computer Science, Cornell University, Ithaca, USA, EMAIL) |
| Pseudocode | No | The paper refers to an existing algorithm ('L-BFGS-B algorithm proposed by Byrd et al. (1995)') but does not provide its own pseudocode or algorithm block. |
| Open Source Code | No | The paper does not provide any statement or link regarding the availability of its source code. |
| Open Datasets | No | The paper states, 'To evaluate the above approach, we generated 250 satisfiable instances from the random k-SAT ensemble'. It does not refer to a publicly available dataset with concrete access information (link, DOI, formal citation). |
| Dataset Splits | No | The paper mentions generating instances and optimizing for them but does not explicitly detail training, validation, or test splits. The evaluation is done 'for each individual instance' or 'for an ensemble'. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions using 'L-BFGS-B algorithm proposed by Byrd et al. (1995)' but does not specify a version number for this or any other software dependency. |
| Experiment Setup | Yes | We focus on a parameterized local search scheme based on energy as the only feature (and only allowing moves of up to Hamming distance 1). Specifically, we have a parameter θ ∈ [0, 1]^{m×m}. The transition probabilities are given by (8) for i ≠ j, and p_ii = 1 − Σ_j p_ij. This corresponds to a sampling scheme where from state i (with energy E_i) we select uniformly at random a neighbor j at Hamming distance 1, and accept the transition to j with probability θ_{E_i, E_j}. |
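The setup described in the table (a parameterized acceptance matrix θ ∈ [0, 1]^{m×m}, Hamming-distance-1 moves, and expected absorption time minimized with L-BFGS-B) can be sketched as follows. This is a minimal illustrative toy, not the paper's k-SAT instances: the state space (n-bit strings), the energy function (number of ones, with the all-zeros string absorbing), and the box bounds are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize
from itertools import product

# Toy state space: n-bit strings; "energy" E(x) = number of ones;
# the all-zeros string plays the role of the absorbing (solution) state.
n = 4
states = list(product([0, 1], repeat=n))
idx = {s: i for i, s in enumerate(states)}
energy = [sum(s) for s in states]
absorbing = idx[(0,) * n]

def expected_absorption_time(theta):
    # theta is a flattened (n+1) x (n+1) matrix of acceptance
    # probabilities, indexed by the (current energy, proposed energy) pair.
    theta = np.asarray(theta).reshape(n + 1, n + 1)
    P = np.zeros((len(states), len(states)))
    for i, s in enumerate(states):
        if i == absorbing:
            continue
        for b in range(n):              # propose flipping bit b (uniform proposal)
            t = list(s)
            t[b] ^= 1
            j = idx[tuple(t)]
            P[i, j] = theta[energy[i], energy[j]] / n
        P[i, i] = 1.0 - P[i].sum()      # rejected proposals stay put
    # Drop the absorbing row/column to get the transient submatrix Q;
    # hitting times solve (I - Q) t = 1, averaged over a uniform start.
    keep = [i for i in range(len(states)) if i != absorbing]
    Q = P[np.ix_(keep, keep)]
    t = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
    return t.mean()

# Optimize the acceptance matrix with L-BFGS-B under box constraints,
# keeping each entry of theta in (0, 1] so absorption stays reachable.
theta0 = np.full((n + 1) ** 2, 0.5)
res = minimize(expected_absorption_time, theta0, method="L-BFGS-B",
               bounds=[(1e-6, 1.0)] * theta0.size)
```

The gradient here is taken by finite differences, which is fine at this scale; the objective is the mean hitting time of the absorbing state under a uniform initial distribution, mirroring the π_u objective quoted above.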