From Latent Graph to Latent Topology Inference: Differentiable Cell Complex Module

Authors: Claudio Battiloro, Indro Spinelli, Lev Telyatnikov, Michael M. Bronstein, Simone Scardapane, Paolo Di Lorenzo

ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our model is tested on several homophilic and heterophilic graph datasets and it is shown to outperform other state-of-the-art techniques, offering significant improvements especially in cases where an input graph is not provided." "In this Section, we evaluate the effectiveness of the proposed framework on several heterophilic and homophilic graph benchmarks."
Researcher Affiliation | Academia | "1 Sapienza University of Rome, 2 Harvard University, 3 Oxford University."
Pseudocode | No | The paper does not contain explicit pseudocode or algorithm blocks.
Open Source Code | Yes | "We provide all the code, data splits, and virtual environment needed to replicate the experiments at the following anonymized repository: https://github.com/spindro/differentiable_cell-complex_module."
Open Datasets | Yes | "We follow the same core experimental setup of (de Ocáriz Borde et al., 2023) on transductive classification tasks; in particular, we first focus on standard graph datasets such as Cora, CiteSeer (Yang et al., 2016), PubMed, Physics and CS (Shchur et al., 2019), which have high homophily levels ranging from 0.74 to 0.93. We then test our method on several challenging heterophilic datasets, Texas, Wisconsin, Squirrel, and Chameleon (Rozemberczki et al., 2021), which have low homophily levels ranging between 0.11 and 0.23." (A dataset-loading sketch follows the table.)
Dataset Splits | No | The paper states it follows "the same core experimental setup of (de Ocáriz Borde et al., 2023) on transductive classification tasks" and reports "Test accuracy in % avg.ed over 10 splits", but it does not state the train/validation/test percentages or counts in its own text. (A split-generation sketch follows the table.)
Hardware Specification | Yes | "Our experiments were performed using a single NVIDIA RTX A6000 with 48 GB of GDDR6 memory."
Software Dependencies | No | The paper mentions providing a "virtual environment" for reproducibility via the repository link, but it does not list specific software dependencies with version numbers (e.g., "Python 3.x, PyTorch 1.x") in the paper's text.
Experiment Setup | Yes | "We include all the details about our experimental setting, including the choice of hyperparameters and the specifications of our machine, in Appendix F." "We maintained a constant configuration for the number of layers, hidden dimensions, activation functions, K_max (4), (pseudo-)similarity functions (minus the Euclidean distances among embeddings), dropout rates (0.5), and learning rates (0.01) across all datasets. The architecture details are shown in Tables 13 and 14. We conducted training for a total of 200 epochs for the homophilic datasets, with the exception of the Physics dataset, which underwent 100 epochs like the heterophilic datasets." (A configuration sketch follows the table.)
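
As a companion to the Open Datasets row: a minimal sketch of how the benchmarks named in the paper can be loaded with PyTorch Geometric. The loader classes are standard PyG APIs; their use here is an assumption based on common practice, not taken from the authors' repository.

```python
# Hypothetical loading of the benchmarks named in the paper via
# PyTorch Geometric; the authors' repository may organize data differently.
from torch_geometric.datasets import (
    Planetoid,         # Cora, CiteSeer, PubMed (Yang et al., 2016)
    Coauthor,          # CS, Physics (Shchur et al., 2019)
    WebKB,             # Texas, Wisconsin
    WikipediaNetwork,  # Chameleon, Squirrel (Rozemberczki et al., 2021)
)

root = "data"  # local cache directory (arbitrary choice)

homophilic = [
    Planetoid(root, name)[0] for name in ("Cora", "CiteSeer", "PubMed")
] + [Coauthor(root, name)[0] for name in ("CS", "Physics")]

heterophilic = [
    WebKB(root, name)[0] for name in ("Texas", "Wisconsin")
] + [WikipediaNetwork(root, name)[0] for name in ("chameleon", "squirrel")]
```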
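The Dataset Splits row notes that accuracy is averaged over 10 splits without the proportions being stated in the paper. Below is a hedged sketch of one common way such random transductive splits are generated; the 60/20/20 fractions are illustrative placeholders, since the paper defers the actual protocol to de Ocáriz Borde et al. (2023).

```python
import torch

def random_split_masks(num_nodes: int, train_frac: float = 0.6,
                       val_frac: float = 0.2, seed: int = 0):
    """Build boolean train/val/test node masks from a random permutation.

    The fractions are illustrative, not taken from the paper.
    """
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=g)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)

    train_mask = torch.zeros(num_nodes, dtype=torch.bool)
    val_mask = torch.zeros(num_nodes, dtype=torch.bool)
    test_mask = torch.zeros(num_nodes, dtype=torch.bool)
    train_mask[perm[:n_train]] = True
    val_mask[perm[n_train:n_train + n_val]] = True
    test_mask[perm[n_train + n_val:]] = True
    return train_mask, val_mask, test_mask

# Ten splits (e.g., seeds 0..9), matching the "avg.ed over 10 splits" protocol.
masks = [random_split_masks(2708, seed=s) for s in range(10)]  # 2708 = Cora nodes
```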
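To make the Experiment Setup row concrete: a sketch collecting the hyperparameters the paper reports as shared across datasets, plus the stated epoch schedule. The dictionary keys and the helper function are illustrative names, not the authors' code.

```python
# Shared configuration reported in the paper (Appendix F); the surrounding
# structure is an assumption for illustration only.
SHARED_CONFIG = {
    "k_max": 4,           # K_max, constant across all datasets
    "dropout": 0.5,
    "learning_rate": 0.01,
    # (pseudo-)similarity: minus the Euclidean distance among embeddings
}

HETEROPHILIC = {"Texas", "Wisconsin", "Squirrel", "Chameleon"}

def num_epochs(dataset: str) -> int:
    """200 epochs for homophilic datasets, 100 for heterophilic ones and
    for Physics (the one homophilic exception the paper mentions)."""
    if dataset in HETEROPHILIC or dataset == "Physics":
        return 100
    return 200
```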