Improving Local Search Algorithms via Probabilistic Configuration Checking

Authors: Weilin Luo, Rongzhen Ye, Hai Wan, Shaowei Cai, Biqing Fang, Delong Zhang (pp. 10283-10290)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental results have confirmed that PC can improve existing local search algorithms in two classic COPs, i.e., MWDS and MWCP, by alleviating the cycling problem.
Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China; (2) Key Laboratory of Machine Intelligence and Advanced Computing (Sun Yat-sen University), Ministry of Education, China; (3) State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, China
Pseudocode | Yes | Algorithm 1: CC2FS(G, cutoff). Input: a weighted graph G = (V, E, W), the cutoff time. Output: a minimum dominating set of G. 1: S := Init Greedy Construction() and S* := S; ... Algorithm 2: LSCC(G, cutoff). Input: a weighted graph G = (V, E, W), the cutoff time, the search depth L. Output: a maximum weight clique of G. 1: C := ∅; ... Algorithm 3: Select Conf(v, N1(v), N2(v), N>2(v), P). Input: a vertex v and N1(v), N2(v), N>2(v), and P = {p1, p2, p3}, a set of probabilities. Output: the configuration Conf. 1: Q1 := randomly choose |N1(v)| · p1 vertices from N1(v); ... (a hedged C++ sketch of this probabilistic selection follows the table)
Open Source Code | No | The paper states that the algorithms are implemented but does not provide any link or explicit statement about making the source code open or publicly available.
Open Datasets | Yes | We carry out experiments to evaluate the performance of the algorithms using PC for MWDS and MWCP on two standard benchmarks, i.e., DIMACS (37 instances) and BHOSLIB (41 instances). The DIMACS benchmark is from the Second DIMACS Implementation Challenge (Johnson and Trick 1996)... We also adopt T1 (530 instances), T2 (530 instances), and UDG (120 instances) from (Jovanovic, Tuba, and Simian 2010) for the MWDS problem.
Dataset Splits | No | The paper mentions selecting a 'training set' but does not provide specific details on how the datasets are split into training, validation, or test sets (e.g., percentages, counts, or k-fold cross-validation).
Hardware Specification | Yes | All algorithms are implemented in C++ and compiled by g++ with the -O3 option. All experiments are run on a Linux computer with an Intel i7-9800x processor at 3.6 GHz and 64 GB RAM.
Software Dependencies | No | The paper mentions 'g++ with -O3 option' for compilation but does not provide specific version numbers for g++ or any other software dependencies crucial for replication.
Experiment Setup | Yes | The cutoff time for all instances is 1000 s, except that instances with fewer than 500 vertices in the T1, T2, and UDG benchmarks use a cutoff of 50 s. (A hedged sketch of enforcing this cutoff follows the table.)
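
To make the probabilistic selection in Algorithm 3 concrete, the C++ sketch below samples a p_i fraction of each neighborhood level of v and merges the samples into the configuration. It is a minimal illustration under the assumption that vertices are integer ids; the names sample_fraction and select_conf are placeholders, not taken from the authors' code.

```cpp
// Minimal sketch of the probabilistic configuration selection in Algorithm 3:
// a p_i fraction of each neighborhood level of v is sampled uniformly at
// random and the samples are merged into the configuration Conf.
// All names are illustrative; they do not come from the paper's code.
#include <algorithm>
#include <array>
#include <cstddef>
#include <random>
#include <vector>

using Vertex = int;

// Randomly pick floor(|pool| * p) vertices from pool without replacement.
std::vector<Vertex> sample_fraction(const std::vector<Vertex>& pool, double p,
                                    std::mt19937& rng) {
    std::vector<Vertex> picked = pool;
    std::shuffle(picked.begin(), picked.end(), rng);
    picked.resize(static_cast<std::size_t>(picked.size() * p));
    return picked;
}

// N1, N2, Ngt2 stand for N1(v), N2(v), N>2(v); p = {p1, p2, p3}.
std::vector<Vertex> select_conf(const std::vector<Vertex>& N1,
                                const std::vector<Vertex>& N2,
                                const std::vector<Vertex>& Ngt2,
                                const std::array<double, 3>& p,
                                std::mt19937& rng) {
    std::vector<Vertex> conf = sample_fraction(N1, p[0], rng);
    for (Vertex u : sample_fraction(N2, p[1], rng)) conf.push_back(u);
    for (Vertex u : sample_fraction(Ngt2, p[2], rng)) conf.push_back(u);
    return conf;
}
```

In this sketch, setting p1 = p2 = p3 = 1 keeps the full neighborhoods, i.e., no configuration information is dropped; smaller values discard more of it at random.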
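
As a small illustration of how the reported per-instance cutoff could be enforced in a C++ driver, the sketch below bounds a local search loop by wall-clock time: 1000 s by default, 50 s for T1/T2/UDG instances with fewer than 500 vertices. The function run_with_cutoff and its step hook are assumptions for illustration, not the authors' implementation.

```cpp
// Illustrative cutoff handling for the reported setup: 1000 s per instance,
// except 50 s for T1/T2/UDG instances with fewer than 500 vertices.
// run_with_cutoff and its 'step' hook are assumed names, not the paper's code.
#include <chrono>
#include <functional>
#include <string>

double cutoff_seconds(const std::string& benchmark, int num_vertices) {
    const bool small_t_udg =
        (benchmark == "T1" || benchmark == "T2" || benchmark == "UDG") &&
        num_vertices < 500;
    return small_t_udg ? 50.0 : 1000.0;
}

// 'step' performs one iteration of the local search (e.g., one pass of
// CC2FS or LSCC) and updates the best solution found so far.
void run_with_cutoff(const std::string& benchmark, int num_vertices,
                     const std::function<void()>& step) {
    using Clock = std::chrono::steady_clock;
    const auto start = Clock::now();
    const double cutoff = cutoff_seconds(benchmark, num_vertices);

    while (std::chrono::duration<double>(Clock::now() - start).count() < cutoff) {
        step();
    }
}
```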