TopoSRL: Topology preserving self-supervised Simplicial Representation Learning
Authors: Hiren Madhu, Sundeep Prabhakar Chepuri
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results demonstrate the superior performance of TopoSRL compared to state-of-the-art graph SSL techniques and supervised simplicial neural models across various datasets, corroborating the efficacy of TopoSRL in processing simplicial complex data in a self-supervised setting. We conduct experiments demonstrating our proposed method on downstream tasks such as node classification, simplicial closure, graph classification, and trajectory prediction. We also highlight the effectiveness of TopoSRL in learning expressive representations for simplicial complexes with the proposed augmentation technique as opposed to the random augmentation technique. Experiments show that without any complex architectures or expensive augmentation techniques, our method outperforms existing state-of-the-art graph representation learning methods while being competitive with supervised simplicial representation learning methods. |
| Researcher Affiliation | Academia | Hiren Madhu and Sundeep Prabhakar Chepuri, Indian Institute of Science, {hirenmadhu, spchepuri}@iisc.ac.in |
| Pseudocode | Yes | Algorithm 1 Simplicial complex augmentation ... Algorithm 2 TopoSRL |
| Open Source Code | Yes | The code is available at https://github.com/HirenMadhu/TopoSRL. |
| Open Datasets | Yes | We perform node classification on the following publicly available datasets4, namely, contact-primary-school, contact-high-school, senate-bills. We use classification accuracy as the performance metric. ... 4Dataset details are presented in the supplementary material. Datasets are available at https://www.cs.cornell.edu/~arb/data/. ... We evaluate TopoSRL's performance in graph classification task on PROTEINS, NCI1, REDDIT-B, REDDIT-M and IMDB-B datasets from the TUDatasets [23] repository. |
| Dataset Splits | Yes | We first split the data across time on these temporal datasets and then train the encoder on the first 80% of the data. The last 20% is used for inference. ... Use only a partially labeled (e.g., 20% train and 80% test, 40% train and 60% test, etc.) data to train a logistic regression classifier for node classification task in the contact-high-school dataset. |
| Hardware Specification | No | The paper does not specify the hardware (e.g., CPU or GPU models) used to run the experiments; the setup is only implied indirectly through reported performance. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | No | Details about the experimental setup, hyperparameters, and results with other SNN encoder models are available in the supplementary material. |
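The Dataset Splits row describes a chronological split on the temporal datasets: the encoder is trained on the first 80% of the data and the last 20% is held out for inference. A minimal sketch of that protocol, assuming timestamped samples (the function name and data layout here are illustrative, not from the paper's released code):

```python
def temporal_split(samples, train_frac=0.8):
    """Split (timestamp, payload) samples chronologically.

    The first `train_frac` of the timeline trains the encoder;
    the remaining samples are reserved for inference, so no future
    interaction leaks into training.
    """
    ordered = sorted(samples, key=lambda s: s[0])
    cut = int(len(ordered) * train_frac)
    return ordered[:cut], ordered[cut:]


# Illustrative usage: ten timestamped events split 80/20 by time.
events = [(t, f"event-{t}") for t in range(10)]
train, test = temporal_split(events)
```

The paper's partially-labeled evaluation (e.g., 20% train / 80% test) then fits a logistic regression classifier on the frozen embeddings of the training portion; any off-the-shelf linear classifier would fill that role in a reimplementation.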