Simplicial Representation Learning with Neural $k$-Forms

Authors: Kelly Maggs, Celia Hacker, Bastian Rieck

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "5 EXPERIMENTS AND EXAMPLES: This section presents examples and use cases of neural k-forms in classification tasks, highlighting their interpretability and computational efficiency." ... "Table 1: Results (mean accuracy and standard deviation of a 5-fold cross-validation) on small graph benchmark datasets that exhibit geometrical node features."
Researcher Affiliation | Collaboration | Kelly Maggs (1), Celia Hacker (2), Bastian Rieck (3,4); (1) École Polytechnique Fédérale de Lausanne (EPFL), (2) Max Planck Institute for Mathematics in the Sciences, (3) AIDOS Lab, Institute of AI for Health, Helmholtz Munich, (4) Technical University of Munich (TUM)
Pseudocode | Yes | "Algorithm 1: Generate Integration Matrix"
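The evidence names Algorithm 1 but the table does not reproduce its steps. For the k = 1 case, the integral of a learned 1-form over an edge embedded as a straight segment can be approximated by a Riemann sum over h sample points. The sketch below is a minimal, hedged reconstruction under those assumptions; `node_coords`, `edges`, `forms`, and all shapes are illustrative names, not the authors' implementation.

```python
import torch

def integration_matrix(node_coords, edges, forms, h=5):
    """Approximate the integrals of m neural 1-forms over every edge.

    node_coords: (num_nodes, n) node features, read as coordinates in R^n
    edges:       (num_edges, 2) index pairs (u, v)
    forms:       callable mapping (batch, n) -> (batch, m, n), the m forms' components
    h:           number of discretisation steps for the Riemann sum
    """
    src = node_coords[edges[:, 0]]                        # (E, n)
    dst = node_coords[edges[:, 1]]                        # (E, n)
    direction = dst - src                                 # gamma'(t) is constant on a segment
    t = (torch.arange(h) + 0.5) / h                       # midpoint-rule sample positions
    points = src.unsqueeze(1) + t.view(1, h, 1) * direction.unsqueeze(1)  # (E, h, n)
    omega = forms(points.reshape(-1, points.shape[-1]))   # (E*h, m, n)
    omega = omega.reshape(edges.shape[0], h, *omega.shape[1:])            # (E, h, m, n)
    # Riemann sum of <omega(gamma(t)), gamma'(t)> dt over the h samples
    return torch.einsum('ehmn,en->em', omega, direction) / h             # (E, m)
```

Each row of the result corresponds to an edge and each column to a learned form, matching the "integration matrix" naming; higher-order k-forms would replace the segment parametrisation with integration over k-simplices.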
Open Source Code | Yes | "We created a proof-of-concept implementation using PyTorch Geometric (Fey & Lenssen, 2019) and PyTorch Lightning (Falcon & The PyTorch Lightning team, 2019) and make it publicly available under https://github.com/aidos-lab/neural-k-forms."
Open Datasets | Yes | "Table 1: Results (mean accuracy and standard deviation of a 5-fold cross-validation) on small graph benchmark datasets that exhibit geometrical node features. Parameter numbers are approximate because the number of classes differ." ... "TU dataset (Morris et al., 2020)" ... "MoleculeNet database (Wu et al., 2018)"
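Both dataset sources quoted above ship with PyTorch Geometric, which the implementation already depends on. A minimal loading sketch; the root path and the `use_node_attr` flag (which retains the continuous geometrical node attributes) are illustrative choices, not taken from the paper:

```python
from torch_geometric.datasets import TUDataset

# Any of the benchmark names from Table 1 works here; 'BZR' is one example.
dataset = TUDataset(root='data/TUDataset', name='BZR', use_node_attr=True)
print(dataset.num_classes, dataset[0])
```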
Dataset Splits | Yes | "Table 1: Results (mean accuracy and standard deviation of a 5-fold cross-validation) on small graph benchmark datasets that exhibit geometrical node features."
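The paper reports the mean and standard deviation of a 5-fold cross-validation but does not quote its split code. A generic sketch of that protocol using scikit-learn; `train_and_eval` and the fixed seed are placeholders, not the authors' procedure:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def five_fold_accuracy(labels, train_and_eval, seed=0):
    """Mean/std test accuracy over stratified 5-fold cross-validation.

    train_and_eval(train_idx, test_idx) -> accuracy on the held-out fold;
    it wraps model construction, training, and evaluation.
    """
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    accs = [train_and_eval(tr, te)
            for tr, te in skf.split(np.zeros(len(labels)), labels)]
    return float(np.mean(accs)), float(np.std(accs))
```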
Hardware Specification | No | The paper does not provide specific hardware details for its experiments; it mentions a "Linux cluster" but no concrete models or configurations.
Software Dependencies | No | The paper mentions "PyTorch Geometric (Fey & Lenssen, 2019)" and "PyTorch Lightning (Falcon & The PyTorch Lightning team, 2019)" but does not specify version numbers.
Experiment Setup | Yes | "For the small graph benchmark datasets (AIDS, BZR, COX2, DHFR, Letter-low, Letter-med, Letter-high), we use a learning rate of 1e-3, a batch size of 16, a hidden dimension of 16, and h = 5 discretisation steps for all k-forms. ... We train all models in the same framework, allocating at most 100 epochs for the training. We also add early stopping based on the validation loss with a patience of 40 epochs. Moreover, we use a learning rate scheduler to reduce the learning rate upon a plateau."
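The quoted setup maps directly onto standard PyTorch Lightning components. A hedged sketch; the module name, the choice of Adam, and the "val_loss" metric key are assumptions, while the learning rate, epoch budget, patience, and plateau scheduler follow the quoted text:

```python
import torch
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping

class KFormClassifier(pl.LightningModule):      # hypothetical module name
    def __init__(self, model):
        super().__init__()
        self.model = model

    def configure_optimizers(self):
        # Optimiser type is an assumption; lr = 1e-3 is quoted in the paper.
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        # "Reduce the learning rate upon a plateau", as stated in the setup.
        scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)
        return {
            "optimizer": optimizer,
            "lr_scheduler": {"scheduler": scheduler, "monitor": "val_loss"},
        }

trainer = pl.Trainer(
    max_epochs=100,                               # "at most 100 epochs"
    callbacks=[EarlyStopping(monitor="val_loss",  # early stopping on validation loss
                             patience=40)],       # "patience of 40 epochs"
)
# The batch size of 16 would be set on the DataLoader side,
# e.g. DataLoader(dataset, batch_size=16, shuffle=True).
```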