Link Prediction with Persistent Homology: An Interactive View
Authors: Zuoyu Yan, Tengfei Ma, Liangcai Gao, Zhi Tang, Chao Chen
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method on synthetic and real-world benchmarks. We compare with different SOTA link prediction baselines. Furthermore, we evaluate the efficiency of the proposed faster algorithm for extended persistent homology. |
| Researcher Affiliation | Collaboration | 1Wangxuan Institute of Computer Technology, Peking University, Beijing, China 2T. J. Watson Research Center, IBM, New York, USA 3Department of Biomedical Informatics, Stony Brook University, New York, USA. |
| Pseudocode | Yes | Algorithm 1 Matrix Reduction; Algorithm 2 A Faster Algorithm for Extended Persistence Diagram |
| Open Source Code | Yes | Source code is available at https://github.com/pkuyzy/TLC-GNN. |
| Open Datasets | Yes | We use a variety of datasets: (a) PubMed (Sen et al., 2008) is a standard benchmark describing a citation network. (b) Photo and Computers (Shchur et al., 2018) are graphs related to Amazon shopping records. (c) PPI networks are protein-protein interaction networks (Zitnik & Leskovec, 2017). |
| Dataset Splits | Yes | For all settings, we randomly split edges into 85/5/10% for training, validation, and test sets. |
| Hardware Specification | No | No specific hardware details such as GPU/CPU models, memory, or cloud instance types used for running experiments were mentioned in the paper. |
| Software Dependencies | No | The paper mentions implementations in Python and Cython, and the use of the Dionysus package, but does not provide specific version numbers for these software components. For example, it states "Both implementations are written in python" without specifying the Python version (e.g., Python 3.x). |
| Experiment Setup | No | The paper describes general aspects of the experimental setup, such as the use of an L-layer GCN and minimizing cross-entropy loss with negative sampling. It also mentions filter function definitions (hop-distance, Ollivier-Ricci curvature) and k-hop neighborhood choices (k=1 or k=2). However, concrete numerical hyperparameter values (e.g., learning rate, batch size, number of epochs, specific optimizer settings like Adam parameters) are not explicitly stated in the provided text. |
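The Pseudocode row above refers to the paper's Algorithm 1 (Matrix Reduction), which is the standard column-reduction algorithm for computing persistence pairs from a boundary matrix. The sketch below illustrates that classic algorithm over Z/2 coefficients, not the paper's faster variant for extended persistence; the function names and the set-of-row-indices column encoding are our own choices for illustration.

```python
def low(col):
    """Largest row index carrying a nonzero entry, or None if the column is zero."""
    return max(col) if col else None

def reduce_boundary_matrix(columns):
    """Standard persistence matrix reduction over Z/2 (a generic sketch,
    not the paper's faster algorithm for extended persistence).

    `columns[j]` is the set of row indices with a 1 in column j of the
    boundary matrix, with simplices ordered by filtration value.
    Returns the reduced columns and the (birth, death) simplex pairs.
    """
    cols = [set(c) for c in columns]
    lows = {}    # low row index -> index of the column that claimed it
    pairs = []
    for j in range(len(cols)):
        # While another column to the left shares our lowest nonzero row,
        # add that column to this one (symmetric difference = mod-2 sum).
        while cols[j] and low(cols[j]) in lows:
            cols[j] ^= cols[lows[low(cols[j])]]
        if cols[j]:
            b = low(cols[j])
            lows[b] = j
            pairs.append((b, j))  # simplex b (birth) paired with simplex j (death)
    return cols, pairs
```

For a filtered triangle boundary (vertices 0-2, then edges (0,1), (1,2), (0,2)), `reduce_boundary_matrix([set(), set(), set(), {0, 1}, {1, 2}, {0, 2}])` pairs two edges with vertices and leaves the last edge column zero, i.e. an essential 1-cycle that, in extended persistence, would be paired during the descending phase of the filtration.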