Learning From Simplicial Data Based on Random Walks and 1D Convolutions

Authors: Florian Frantzen, Michael T Schaub

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically evaluate SCRaWl on real-world datasets and show that it outperforms other simplicial neural networks. From Section 5 (Experimental Results): We evaluate SCRaWl on a variety of datasets and compare it to other simplicial neural networks.
Researcher Affiliation | Academia | Florian Frantzen, Department of Computer Science, RWTH Aachen University, Germany (florian.frantzen@cs.rwth-aachen.de); Michael T. Schaub, Department of Computer Science, RWTH Aachen University, Germany (schaub@cs.rwth-aachen.de).
Pseudocode | No | The paper describes the SCRaWl architecture and its steps using descriptive text and figures, but it does not include formal pseudocode or algorithm blocks. (A hedged illustrative sketch of the walk-and-convolution idea is given after this table.)
Open Source Code | Yes | Source code and datasets are available at https://git.rwth-aachen.de/netsci/scrawl.
Open Datasets | Yes | Following Ebli et al. (2020), we use SCRaWl to impute missing citation counts for a subset of the Semantic Scholar co-authorship network. In a second set of experiments, we perform vertex classification on the primary-school and high-school social contact datasets (Stehlé et al., 2011; Chodrow et al., 2021).
Dataset Splits | No | The paper mentions training, validation loss, and validation accuracy, but it does not explicitly state the proportions or counts of the training, validation, and test splits.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions PyTorch and TopoX as software used for the implementation, but it does not specify their version numbers, which are required for reproducibility.
Experiment Setup | Yes | For all experiments, we use the Adam optimizer with an initial learning rate of 10⁻³. The learning rate is reduced by a factor of 0.5 if the validation loss does not improve for 10 epochs. Training is stopped once the learning rate drops below 10⁻⁶. ... The walk length ℓ is the primary hyperparameter ... For the Semantic Scholar dataset, we choose a walk length of 5 ... On other datasets, the model is trained with a walk length of 50. ... Each module is configured with a local window size of s = 4, a kernel size of d_kern = 8, a hidden feature size of d = 32, and a mean pooling operation. ... For the social contact datasets, we use 4 layers ... Each SCRaWl module is again configured identically with a local window size of s = 8, a kernel size of d_kern = 8, a hidden feature size of d = 128, and a mean pooling operation.
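The Experiment Setup row quotes the complete optimizer and learning-rate schedule. Below is a minimal sketch of that training configuration in plain PyTorch, assuming hypothetical placeholders model, train_loader, val_loader, and compute_loss; the authors' released code at the repository linked above remains the authoritative reference.

```python
import torch

def train(model, train_loader, val_loader, compute_loss, max_epochs=1000):
    # Adam with the initial learning rate of 10^-3 quoted in the setup.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Halve the learning rate when the validation loss stalls for 10 epochs.
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="min", factor=0.5, patience=10
    )

    for epoch in range(max_epochs):
        model.train()
        for batch in train_loader:
            optimizer.zero_grad()
            loss = compute_loss(model, batch)
            loss.backward()
            optimizer.step()

        model.eval()
        with torch.no_grad():
            val_loss = sum(compute_loss(model, b).item() for b in val_loader)
        scheduler.step(val_loss)

        # Stop once the learning rate has decayed below 10^-6.
        if optimizer.param_groups[0]["lr"] < 1e-6:
            break
```

Walk length, window size, kernel size, hidden feature size, and pooling are model hyperparameters rather than training-loop settings, so they are not repeated in this sketch.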
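Since the paper provides no formal pseudocode, the following is a purely illustrative, hedged sketch of the walk-plus-1D-convolution idea suggested by the title and by the quoted hyperparameters (kernel size d_kern, hidden feature size d, mean pooling). It is not the authors' SCRaWl implementation: the random-walk sampling on the simplicial complex and the local window of size s are omitted, and all class and argument names are hypothetical.

```python
import torch
import torch.nn as nn

class WalkConvModule(nn.Module):
    # Hypothetical module: a 1D CNN over features read off along random
    # walks, mean-pooled back onto the simplices that each walk visits.

    def __init__(self, in_dim, hidden_dim=32, kernel_size=8):
        super().__init__()
        # The convolution slides over the walk (sequence) dimension.
        self.conv = nn.Conv1d(in_dim, hidden_dim, kernel_size)
        self.act = nn.ReLU()

    def forward(self, walk_feats, walk_index, num_simplices):
        # walk_feats: (num_walks, walk_length, in_dim), features along each walk
        # walk_index: (num_walks, walk_length), id of the simplex at each step
        x = self.act(self.conv(walk_feats.transpose(1, 2)))  # (W, hidden, L')
        x = x.transpose(1, 2)                                 # (W, L', hidden)

        # Mean-pool walk-step embeddings back onto the visited simplices.
        out = torch.zeros(num_simplices, x.size(-1), device=x.device)
        count = torch.zeros(num_simplices, 1, device=x.device)
        idx = walk_index[:, : x.size(1)].reshape(-1)
        out.index_add_(0, idx, x.reshape(-1, x.size(-1)))
        count.index_add_(0, idx, torch.ones(idx.size(0), 1, device=x.device))
        return out / count.clamp(min=1)
```

A faithful implementation would additionally encode the local structural information within the window of size s and the simplicial structure itself; consult the linked repository for the actual model.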