An Exact Algorithm for Maximum k-Plexes in Massive Graphs
Authors: Jian Gao, Jiejiang Chen, Minghao Yin, Rong Chen, Yiyuan Wang
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform intensive experiments to evaluate our algorithm, and show that the proposed strategies are effective and our algorithm outperforms state-of-the-art algorithms, especially for real-world massive graphs. |
| Researcher Affiliation | Academia | Jian Gao¹˒², Jiejiang Chen², Minghao Yin², Rong Chen¹ and Yiyuan Wang² (1: College of Information Science and Technology, Dalian Maritime University, China; 2: School of Computer Science and Information Technology, Northeast Normal University, China) |
| Pseudocode | Yes | Algorithm 1: function vertex-reduction(G, LB, k)... Algorithm 6: The Branch-and-Bound Algorithm (BnB) |
| Open Source Code | No | The paper states the algorithms were implemented in C++ but does not provide any link or explicit statement about making the source code available. |
| Open Datasets | Yes | Intensive experiments are performed on real-world graphs from the Network Data Repository online [Rossi and Ahmed, 2015] and DIMACS benchmarks. ... available at: http://lcs.ios.ac.cn/~caisw/Resource/realworld%20graphs.tar.gz |
| Dataset Splits | No | The paper discusses evaluating on benchmark datasets like DIMACS and Network Data Repository graphs, but does not provide specific training, validation, or test splits for these datasets. |
| Hardware Specification | Yes | Benchmarks were solved in parallel on a workstation with an Intel Xeon E5-1650v4 (3.6 GHz) CPU and 16 GB RAM, running Ubuntu Linux 16.04, with a cutoff time of 10000s. |
| Software Dependencies | Yes | Our algorithms were implemented in C++ language and compiled by g++ version 4.7 with -O3 option. |
| Experiment Setup | No | The paper describes the algorithm's components and the overall experimental environment (e.g., the 10000s cutoff time), but does not specify granular experimental setup details such as algorithm parameter settings or per-benchmark configuration. |
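For context on the structure the paper searches for: a vertex set S is a k-plex when every vertex in S is adjacent to all but at most k of the others (counting itself), i.e., each vertex has at least |S| − k neighbors inside S. A minimal Python sketch of this check, with an illustrative adjacency-set representation (the function name and example graph are our own, not the paper's code):

```python
# A set S is a k-plex in G if every v in S has at least |S| - k neighbors in S.
def is_k_plex(adj, S, k):
    """adj: dict mapping vertex -> set of neighbors; S: candidate vertex set."""
    S = set(S)
    return all(len(adj[v] & S) >= len(S) - k for v in S)

# Small example graph: K4 minus the edge {1, 3}.
adj = {
    0: {1, 2, 3},
    1: {0, 2},
    2: {0, 1, 3},
    3: {0, 2},
}
print(is_k_plex(adj, {0, 1, 2}, 1))      # True: a triangle is a clique (1-plex)
print(is_k_plex(adj, {0, 1, 2, 3}, 1))   # False: vertices 1 and 3 miss a neighbor
print(is_k_plex(adj, {0, 1, 2, 3}, 2))   # True: each vertex misses at most one other
```

With k = 1 this reduces to the clique test, which is why k-plexes are described as a clique relaxation.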
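The pseudocode row cites a vertex-reduction(G, LB, k) routine. A common degree-based peeling rule consistent with that signature (a sketch under our own assumptions, not the paper's exact algorithm): any vertex of a k-plex of size s has at least s − k neighbors, so a vertex with fewer than LB + 1 − k neighbors cannot belong to a k-plex larger than the lower bound LB and can be removed, possibly triggering further removals.

```python
from collections import deque

def vertex_reduction(adj, LB, k):
    """Iteratively remove vertices with degree < LB + 1 - k.
    adj: dict mapping vertex -> set of neighbors; returns a reduced copy."""
    adj = {v: set(ns) for v, ns in adj.items()}
    threshold = LB + 1 - k
    queue = deque(v for v in adj if len(adj[v]) < threshold)
    removed = set()
    while queue:
        v = queue.popleft()
        if v in removed:
            continue
        removed.add(v)
        for u in adj[v]:
            adj[u].discard(v)
            # A neighbor's degree just dropped; re-check it against the threshold.
            if u not in removed and len(adj[u]) < threshold:
                queue.append(u)
        del adj[v]
    return adj

# A path 0-1-2-3-4 with LB = 2 and k = 1 (clique case): the endpoints have
# degree 1 < 2 and are peeled, which cascades until the graph is empty --
# correctly reflecting that a path contains no clique of size 3.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(sorted(vertex_reduction(path, 2, 1)))  # []
```

Such preprocessing is what makes branch-and-bound viable on the massive sparse graphs the experiments target: most vertices are eliminated before branching begins.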