Learning Persistent Community Structures in Dynamic Networks via Topological Data Analysis
Authors: Dexu Kong, Anping Zhang, Yang Li
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, experiments on both synthetic and real-world datasets are conducted to evaluate the performance and effectiveness of our algorithm. We compare the performance of our algorithm with state-of-the-art algorithms. |
| Researcher Affiliation | Academia | Dexu Kong, Anping Zhang, Yang Li * Shenzhen Key Laboratory of Ubiquitous Data Enabling, Shenzhen International Graduate School, Tsinghua University kdx21@mails.tsinghua.edu.cn, zap21@mails.tsinghua.edu.cn, yangli@sz.tsinghua.edu.cn |
| Pseudocode | Yes | Algorithm 1: Optimizing clustering module in MFC. Algorithm 2: Complete Training Process. |
| Open Source Code | Yes | Code and data are available at the public git repository: https://github.com/kundtx/MFC-TopoReg. |
| Open Datasets | Yes | We collected and processed four labeled dynamic network datasets without node features, including Enron, Highschool (Crawford and Milenković 2018), DBLP, Cora (Hou et al. 2020). |
| Dataset Splits | No | The paper describes training processes on snapshots and mentions calculating various metrics, but it does not provide explicit percentages, counts, or other specific details for training, validation, and test dataset splits. |
| Hardware Specification | No | The paper mentions 'Limited by GPU memory, we only report DynAE results' but does not provide specific hardware details such as exact GPU or CPU models, processor types, or memory amounts used for running experiments. |
| Software Dependencies | No | The paper mentions various algorithms and models (e.g., GCN, GAT, DEC, DAEGC, MFC, k-means) and discusses their implementation within a deep learning framework, but it does not provide specific version numbers for software dependencies or libraries (e.g., Python, PyTorch, TensorFlow, scikit-learn). |
| Experiment Setup | Yes | The experiments in this paper use the following settings: the node embedding dimension is 30 and the learning rate is 0.001. Each backbone method uses one GNN layer (GCN, GAT). The other baseline models use their authors' default settings. Training is divided into two stages, with α = 10, β = 0 when training the backbone model and α = 1, β = 10 when training the TopoReg, 500 epochs each to ensure convergence. All weights in neural networks are initialized with Glorot initialization (Glorot and Bengio 2010). |
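
The two-stage schedule in the Experiment Setup row can be illustrated with a short sketch. This is a minimal reconstruction under stated assumptions, not the authors' implementation: `OneLayerGCN`, `clustering_loss`, and `topo_reg_loss` are hypothetical placeholders, and only the hyperparameter values (embedding dimension 30, learning rate 0.001, one GNN layer, the α/β schedule, 500 epochs per stage, Glorot initialization) come from the quoted setup.

```python
# Hedged sketch of the two-stage training schedule described above.
# Only the hyperparameter values are taken from the paper; the model
# and loss terms are placeholders, not the authors' MFC-TopoReg code.
import torch
import torch.nn as nn

EMBED_DIM = 30          # node embedding dimension (from the paper)
LR = 1e-3               # learning rate (from the paper)
EPOCHS_PER_STAGE = 500  # 500 epochs per stage "to ensure convergence"


class OneLayerGCN(nn.Module):
    """Hypothetical one-layer GCN backbone with Glorot initialization."""

    def __init__(self, in_dim: int, out_dim: int = EMBED_DIM):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_dim, out_dim))
        nn.init.xavier_uniform_(self.weight)  # Glorot initialization

    def forward(self, adj_norm: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # adj_norm: normalized adjacency (N, N); x: node features (N, in_dim)
        return torch.relu(adj_norm @ x @ self.weight)


def train_two_stage(model, clustering_loss, topo_reg_loss, adj_norm, x):
    """Stage 1: alpha=10, beta=0 (train the backbone only).
    Stage 2: alpha=1, beta=10 (emphasize the topological regularizer)."""
    opt = torch.optim.Adam(model.parameters(), lr=LR)
    for alpha, beta in [(10.0, 0.0), (1.0, 10.0)]:
        for _ in range(EPOCHS_PER_STAGE):
            opt.zero_grad()
            z = model(adj_norm, x)
            # Weighted combination; with beta=0 the topo term contributes
            # no gradient, matching the backbone-only first stage.
            loss = alpha * clustering_loss(z) + beta * topo_reg_loss(z)
            loss.backward()
            opt.step()
```

A hypothetical usage with placeholder loss terms, just to show the call shape:

```python
N, F = 100, 16
adj_norm = torch.eye(N)        # stand-in for a normalized adjacency matrix
x = torch.randn(N, F)          # stand-in node features
model = OneLayerGCN(F)
train_two_stage(model,
                clustering_loss=lambda z: z.var(dim=0).mean(),  # placeholder
                topo_reg_loss=lambda z: z.norm(dim=1).mean(),   # placeholder
                adj_norm=adj_norm, x=x)
```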