Ultrahyperbolic Neural Networks
Authors: Marc Law
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We now evaluate our approach on different classification tasks on graphs. We first show that our optimization framework introduced in Section 3.4 learns meaningful representations on a toy hierarchical graph with cycles. We then apply our framework in standard classification tasks. |
| Researcher Affiliation | Industry | Marc T. Law; This project was entirely funded by NVIDIA corporation while I was working from home during the COVID-19 pandemic. |
| Pseudocode | No | The paper describes procedures and methods but does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, nor does it state that code is released. |
| Open Datasets | Yes | We test our approach on Zachary's karate club dataset [33]. ... We now evaluate the generalization performance of our GCN in the semi-supervised node classification task on three citation network datasets: Citeseer, Cora and Pubmed [26]. ... We also evaluate our approach on commonly used graph kernel benchmark datasets [12] whose statistics are reported in Table 4. (A loading sketch for the citation datasets follows the table.) |
| Dataset Splits | Yes | During training, all the nodes and edges are preserved, but only 20 nodes per class are labeled, and 500 nodes are used for validation in total, the rest for test. ... The evaluation is done via 10-fold cross validation. (A split-construction sketch follows the table.) |
| Hardware Specification | Yes | We report in Table 1 the training times of our PyTorch [22] implementation to train 25,000 iterations on a machine equipped with a 6-core Intel i7-7800X CPU and NVIDIA GeForce RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions its 'PyTorch [22] implementation' but does not specify version numbers for PyTorch or other ancillary software components. |
| Experiment Setup | Yes | Our MLP ϕ_θ : X → H^p contains three hidden layers of 104 hidden units each, with standard ReLU as nonlinear activation function. ... τ = 10⁻² is a fixed temperature value... We follow the experimental protocol of Appendix A of [18] and learn a GCN with 2 hidden layers. ... trained GCNs whose dimensionality of each layer is d = 4... same number of GNN layers, optimizers, learning rate, and manifold dimensionality d reported in Table 5. (An MLP sketch follows the table.) |
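
The citation benchmarks quoted in the Open Datasets row (Citeseer, Cora, Pubmed) are widely mirrored. The paper does not say which loader it uses; the sketch below is an assumption, using PyTorch Geometric's `Planetoid` class as one common way to fetch them:

```python
# Hedged sketch: the paper does not specify a data loader. PyTorch
# Geometric's Planetoid class is one common way to obtain these benchmarks.
from torch_geometric.datasets import Planetoid

for name in ("Cora", "Citeseer", "Pubmed"):
    dataset = Planetoid(root="data", name=name)  # downloads on first use
    data = dataset[0]                            # one graph per dataset
    print(name, data.num_nodes, data.num_edges, dataset.num_classes)
```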
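
The split quoted in the Dataset Splits row (20 labeled nodes per class, 500 validation nodes, the rest for test) matches the standard Planetoid protocol. A minimal sketch of how such masks can be constructed, with all function and variable names hypothetical:

```python
# Hypothetical sketch of the semi-supervised split described above.
# Note: the published Planetoid splits use fixed node indices; picking
# the first nodes of each class here is only illustrative.
import torch

def make_planetoid_style_split(labels: torch.Tensor, num_classes: int,
                               train_per_class: int = 20, num_val: int = 500):
    num_nodes = labels.size(0)
    train_mask = torch.zeros(num_nodes, dtype=torch.bool)
    for c in range(num_classes):
        # Label `train_per_class` nodes of class c for training.
        idx = (labels == c).nonzero(as_tuple=True)[0][:train_per_class]
        train_mask[idx] = True
    # Validation and test nodes come from the remaining unlabeled pool.
    rest = (~train_mask).nonzero(as_tuple=True)[0]
    val_mask = torch.zeros(num_nodes, dtype=torch.bool)
    val_mask[rest[:num_val]] = True
    test_mask = ~(train_mask | val_mask)
    return train_mask, val_mask, test_mask
```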
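
For the Experiment Setup row, a minimal sketch of the quoted MLP architecture: three hidden layers of 104 units with ReLU activations. The input and output dimensions are placeholders, and the subsequent mapping onto the ultrahyperbolic manifold H^p is not shown:

```python
# Minimal sketch of the MLP quoted above; in_dim and out_dim are
# placeholders, and the final projection onto H^p is omitted.
import torch.nn as nn

def make_mlp(in_dim: int, out_dim: int, hidden: int = 104) -> nn.Sequential:
    # Three hidden layers of 104 units each, ReLU nonlinearities.
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )
```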