Constant Curvature Graph Convolutional Networks
Authors: Gregor Bachmann, Gary Bécigneul, Octavian Ganea
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we outperform Euclidean GCNs in the tasks of node classification and distortion minimization for symbolic data exhibiting non-Euclidean behavior, according to their discrete curvature. |
| Researcher Affiliation | Academia | 1Department of Computer Science, ETH Zürich; 2Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology. |
| Pseudocode | No | The paper does not contain any sections or figures explicitly labeled as 'Pseudocode' or 'Algorithm'. |
| Open Source Code | No | The paper does not contain an explicit statement about the release of their source code or a link to a code repository. |
| Open Datasets | Yes | We consider the popular node classification datasets Citeseer (Sen et al., 2008), Cora-ML (McCallum et al., 2000) and Pubmed (Namata et al., 2012). (A loading sketch follows the table.) |
| Dataset Splits | Yes | We use early stopping: we first train for 2000 epochs, then we check every 200 epochs for improvement in the validation cross entropy loss; if that is not observed, we stop. (This schedule is sketched in code below the table.) |
| Hardware Specification | No | The paper does not explicitly describe the hardware used for running its experiments, such as specific GPU or CPU models. |
| Software Dependencies | No | The paper mentions using the 'ADAM optimizer' and 'ReLU' as activation functions, but it does not specify software components with version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | All Euclidean parameters are updated using the ADAM optimizer with learning rate 0.01. Curvatures are learned using gradient descent and learning rate of 0.0001. All models are trained for 10000 epochs and we report the minimal achieved distortion. (See the training sketch below the table.) |
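
The three benchmarks named in the Open Datasets row are standard citation graphs. The paper does not state its data pipeline, but as a loading sketch they are all available through `torch_geometric`: Citeseer and Pubmed ship with the `Planetoid` collection, while Cora-ML is in `CitationFull`.

```python
# Loading sketch only; torch_geometric as the data backend is an assumption,
# not something the paper specifies.
from torch_geometric.datasets import CitationFull, Planetoid

citeseer = Planetoid(root="data/Planetoid", name="CiteSeer")
pubmed = Planetoid(root="data/Planetoid", name="PubMed")
cora_ml = CitationFull(root="data/CitationFull", name="Cora_ML")

for ds in (citeseer, pubmed, cora_ml):
    data = ds[0]  # each dataset holds a single citation graph
    print(ds.name, data.num_nodes, data.num_edges, ds.num_classes)
```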
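The Dataset Splits and Experiment Setup rows together describe the optimization schedule: Adam at learning rate 0.01 for the Euclidean parameters, plain gradient descent at 0.0001 for the curvatures, and (for node classification) early stopping on the validation cross-entropy after a 2000-epoch warmup. A minimal sketch under those settings, assuming a PyG-style model and data object; the `model.euclidean_parameters()` and `model.curvatures()` helpers are hypothetical, not part of the paper:

```python
import torch
import torch.nn.functional as F


def train(model, data, max_epochs=10000, warmup=2000, check_every=200):
    """Two-optimizer loop matching the quoted schedule. The parameter-group
    helpers on `model` are assumptions made for this sketch."""
    opt_eucl = torch.optim.Adam(model.euclidean_parameters(), lr=0.01)
    # Plain gradient descent (no momentum) for the curvature scalars.
    opt_curv = torch.optim.SGD(model.curvatures(), lr=0.0001)

    best_val = float("inf")
    for epoch in range(1, max_epochs + 1):
        model.train()
        opt_eucl.zero_grad()
        opt_curv.zero_grad()
        out = model(data.x, data.edge_index)
        loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
        loss.backward()
        opt_eucl.step()
        opt_curv.step()

        # Early stopping as quoted: train 2000 epochs first, then check the
        # validation cross-entropy every 200 epochs and stop on no improvement.
        if epoch >= warmup and epoch % check_every == 0:
            model.eval()
            with torch.no_grad():
                val_out = model(data.x, data.edge_index)
                val_loss = F.cross_entropy(
                    val_out[data.val_mask], data.y[data.val_mask]
                ).item()
            if val_loss >= best_val:
                break
            best_val = val_loss
    return model
```

For the distortion-minimization experiments, the same two optimizers would presumably run for the full 10000 epochs without the stopping rule, with the minimal achieved distortion reported as in the Experiment Setup row.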