SaNN: Simple Yet Powerful Simplicial-aware Neural Networks
Authors: Sravanthi Gurugubelli, Sundeep Prabhakar Chepuri
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate via numerical experiments that despite being computationally economical, the proposed model achieves state-of-the-art performance in predicting trajectories, simplicial closures, and classifying graphs. |
| Researcher Affiliation | Academia | Sravanthi Gurugubelli & Sundeep Prabhakar Chepuri, Indian Institute of Science, Bangalore, Karnataka, India {sravanthig,spchepuri}@iisc.ac.in |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. The model's operations are described textually and illustrated with diagrams (Figure 1, Figure 2). |
| Open Source Code | No | The paper does not include an explicit statement about releasing source code or a direct link to a code repository for the methodology described. |
| Open Datasets | Yes | We evaluate the trajectory prediction ability of SaNN on four datasets, namely, Ocean (Roddenberry et al., 2021), Synthetic (Roddenberry et al., 2021), Planar (Cordonnier & Loukas, 2018), and Mesh (Cordonnier & Loukas, 2018). ... We evaluate on the following TUDatasets (Morris et al., 2020): Proteins, NCI1, IMDB-B, IMDB-M, Reddit-B and Reddit-M. |
| Dataset Splits | Yes | Details about the experimental setup, datasets, attributes, hyperparameters, evaluation metrics, training, validation, and test splits for the three tasks are provided in Appendix H. ... We use 5-fold cross-validation for evaluating the performance of each of the deep models considered including SaNN. ... We perform a 60%/20%/20% split of the data for training, validation, and testing. ... We use the predefined data splits that are used in Errica et al. (2020) for performing 10-fold cross-validation. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. It only generally refers to training processes without mentioning the underlying hardware. |
| Software Dependencies | No | The paper mentions using a 'tanh activation function' but does not specify any software libraries (e.g., PyTorch, TensorFlow) or their version numbers, which are crucial for reproducibility. |
| Experiment Setup | Yes | Details about the experimental setup, datasets, attributes, hyperparameters, evaluation metrics, training, validation, and test splits for the three tasks are provided in Appendix H. ... Table 5: Hyperparameters of SaNN set for trajectory prediction experiments. ... Table 7: Hyperparameters of SaNN set for simplicial closure prediction experiments. ... Table 9: Hyperparameters of SaNN set for graph classification experiments. |
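The 60%/20%/20% train/validation/test split quoted above can be sketched as follows. This is a minimal illustration, not the authors' code: the paper does not specify the shuffling scheme or random seed, so both are assumptions here.

```python
import random

def split_60_20_20(indices, seed=0):
    """Shuffle sample indices and split into 60% train, 20% val, 20% test.

    The fixed seed and uniform shuffle are assumptions for illustration;
    the paper only states the 60%/20%/20% proportions.
    """
    rng = random.Random(seed)
    idx = list(indices)
    rng.shuffle(idx)
    n = len(idx)
    n_train = int(0.6 * n)
    n_val = int(0.2 * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_60_20_20(range(100))
# The three index sets are disjoint and together cover all 100 samples.
```

For the graph classification task the paper instead reuses the predefined 10-fold splits of Errica et al. (2020), so no ad-hoc splitting like the above is needed there.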