Two-goal Local Search and Inference Rules for Minimum Dominating Set
Authors: Shaowei Cai, Wenying Hou, Yiyuan Wang, Chuan Luo, Qingwei Lin
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | FastDS is evaluated on 4 standard benchmarks and 3 massive-graph benchmarks. FastDS obtains the best performance on almost all benchmarks and obtains better solutions than state-of-the-art algorithms on massive graphs. |
| Researcher Affiliation | Collaboration | 1State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, China 2School of Computer Science and Technology, University of Chinese Academy of Sciences, China 3School of Software and Microelectronics, Peking University, China 4School of Computer Science and Information Technology, Northeast Normal University, China 5Microsoft Research, China |
| Pseudocode | Yes | Algorithm 1: A New Local Search Framework for MinDS; Algorithm 2: FastDS |
| Open Source Code | No | The codes of CC2FS, FastMWDS and RLSo were kindly provided by their authors, while the code of ScBppw is available online (https://github.com/Fan-Yi/). |
| Open Datasets | Yes | UDG: a widely used standard benchmark for MinDS [Hedar and Ismail, 2010; Potluri and Singh, 2011; Potluri and Singh, 2013; Chalupa, 2018], containing 120 instances generated by a topology generator (http://www.michelemastrogiovanni.net/software/download/) ; T1: 520 instances, each with two different weight functions [Jovanovic et al., 2010] ; DIMACS: from the Second DIMACS Implementation Challenge, 1992-1993 (ftp://dimacs.rutgers.edu/pub/challenges) ; BHOSLIB: generated from the RB model [Xu et al., 2007] near the phase transition ; SNAP: from the Stanford Large Network Dataset Collection (http://snap.stanford.edu/data) ; DIMACS10: from the 10th DIMACS Implementation Challenge, 2010 (https://www.cc.gatech.edu/dimacs10/) ; Network Repository: the Network Data Repository [Rossi and Ahmed, 2015], which collects massive graphs from various areas. |
| Dataset Splits | No | The paper does not specify distinct training, validation, and test dataset splits. It evaluates algorithms on entire benchmark instances. |
| Hardware Specification | Yes | All experiments are run on a server with an Intel Xeon E5-2640 v4 @ 2.40GHz and 128GB RAM under CentOS 7.5. |
| Software Dependencies | No | FastDS is implemented in C++ and compiled by g++ with the -O3 option. This states the compiler and its optimization flag but not a specific version of g++ or any other software dependency. |
| Experiment Setup | Yes | All experiments are run on a server with an Intel Xeon E5-2640 v4 @ 2.40GHz and 128GB RAM under CentOS 7.5. For each instance, all algorithms are executed 10 times with random seeds 1, 2, 3, ..., 10. The time limit of each run is 1000 seconds. The parameter α is set to 0.6. The first vertex u1 is selected from D according to a sampling heuristic named Best from Multiple Selections (BMS) [Cai, 2015], which randomly samples t vertices in D and picks the one with the maximum score to remove (t is suggested to be near 50 [Cai, 2015] and is set to 45 in this work). |
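The BMS heuristic quoted above is simple enough to sketch: sample t candidates uniformly at random from the candidate set D and keep the one with the maximum score. The sketch below is a hedged illustration, not the authors' implementation; the function name `bms_pick` and the callable `score` are assumptions for exposition, and t defaults to 45 as in the paper's setup.

```python
import random

def bms_pick(D, score, t=45):
    """Best from Multiple Selections (BMS) [Cai, 2015], sketched.

    Samples t vertices uniformly at random (with replacement) from the
    candidate set D and returns the sampled vertex with the maximum
    score. Runs in O(t) instead of scanning all of D.
    """
    pool = list(D)
    # Sample with replacement; for small D this may repeat vertices.
    candidates = random.choices(pool, k=min(t, len(pool)))
    return max(candidates, key=score)
```

For example, `bms_pick(range(1000), score=lambda v: -v, t=45)` would return the smallest of 45 randomly sampled vertices, trading solution quality for speed relative to a full scan.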