Solving Hard Subgraph Problems in Parallel

Author: Ciaran McCreesh

IJCAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our results suggest that it is because this strategy tends to branch on small colour classes first, which is similar to the widely used smallest-domain-first heuristic (and this contradicts the explanation given previously in the literature). This knowledge allowed us to design a better branching strategy, which explicitly considered the sizes of colour classes (a minimal sketch of this branching idea appears after the table). For subgraph isomorphism, we observed that many existing benchmark suites contain large numbers of very easy satisfiable instances. We have shown how to generate really hard random instances for subgraph isomorphism problems [Mc Creesh et al., 2016], and explained how this lets us design better variable and value ordering heuristics. Our results indicated that different instances would benefit from different strengths of filtering. We demonstrated that it is possible to use machine learning to select, on an instance-by-instance basis, a good choice from a portfolio of subgraph isomorphism algorithms [Kotthoff et al., 2016].
Researcher Affiliation | Academia | Ciaran Mc Creesh, University of Glasgow, Glasgow, Scotland, c.mccreesh.1@research.gla.ac.uk
Pseudocode | No | The paper describes algorithms and strategies verbally and conceptually but does not include any formal pseudocode blocks or clearly labeled algorithm listings.
Open Source Code | No | The paper does not contain any explicit statements about making its source code available, nor does it provide links to a code repository.
Open Datasets | No | The paper mentions 'many existing benchmark suites' and generating 'really hard random instances' but does not provide concrete access information (e.g., specific links, DOIs, or formal citations) for the datasets it used or generated.
Dataset Splits | No | The paper does not specify any training/validation/test dataset splits (e.g., percentages, sample counts, or specific split methods) that would be needed for reproduction.
Hardware Specification | No | The paper mentions 'Modern hardware' and 'Current CPUs have several (or dozens of) processing cores' but does not provide specific details on the hardware used for experiments (e.g., CPU models, GPU models, memory).
Software Dependencies | No | The paper describes various algorithmic components and techniques (e.g., 'bit-parallel filtering', 'all-different propagator', 'backjumping'), but it does not provide specific software names with version numbers for reproducibility (e.g., 'Python 3.8', 'CPLEX 12.4').
Experiment Setup | No | The paper describes various heuristics and strategies (e.g., 'variable and value ordering heuristics', 'inference strategies', 'backtracking search', 'thread-parallelism'), but it does not provide specific experimental setup details such as hyperparameter values, learning rates, batch sizes, or model initialization settings.
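The Research Type row above refers to a branching strategy that explicitly considers the sizes of colour classes, resembling the smallest-domain-first heuristic. The following is a minimal sketch of that idea under simple assumptions (a greedy colouring and a plain adjacency predicate); it is not the paper's implementation, and the names greedy_colour, branching_order, and adjacent are hypothetical, introduced only for illustration.

```python
# Hypothetical illustration only: not the paper's algorithm. All names here
# (greedy_colour, branching_order, adjacent) are assumptions for the sketch.

def greedy_colour(vertices, adjacent):
    """Greedily partition vertices into colour classes so that no two
    vertices in the same class are adjacent."""
    colour_classes = []
    for v in vertices:
        for cls in colour_classes:
            if not any(adjacent(v, u) for u in cls):
                cls.append(v)
                break
        else:
            colour_classes.append([v])  # open a new colour class for v
    return colour_classes


def branching_order(vertices, adjacent):
    """Order candidate vertices so that those in smaller colour classes are
    branched on first, mirroring the smallest-domain-first intuition."""
    classes = greedy_colour(vertices, adjacent)
    return [v for cls in sorted(classes, key=len) for v in cls]


if __name__ == "__main__":
    # Toy example: a 4-cycle with one chord.
    edges = {(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)}
    adjacent = lambda u, v: (u, v) in edges or (v, u) in edges
    print(branching_order([0, 1, 2, 3], adjacent))  # e.g. [0, 2, 1, 3]
```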