Neural Message Passing for Multi-Relational Ordered and Recursive Hypergraphs
Authors: Naganand Yadati
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate the effectiveness of the proposed instances of G-MPNN and MPNN-R. The code is available. |
| Researcher Affiliation | Academia | Naganand Yadati, naganand@iisc.ac.in, Department of Computer Science and Automation, Indian Institute of Science, Bangalore, Karnataka, 560012 |
| Pseudocode | No | The paper provides mathematical equations and descriptions of functions, but it does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available: https://github.com/naganandy/G-MPNN-R |
| Open Datasets | Yes | We evaluate MPNN-R on the task of semi-supervised classification of documents in academic network datasets. The input is a 1-recursive hypergraph with documents as vertices, words as features (bag-of-words), authors as depth-1 hyperedges, and references in documents as depth-0 hyperedges (a minimal data-structure sketch follows the table). The task is multi-class classification of documents given the input recursive hypergraph and a small fraction of labelled documents in the dataset (we call this fraction the label rate; see the label rate and other details in the dataset statistics table in the appendix). Datasets: Cora, DBLP, ACM, arXiv. |
| Dataset Splits | No | The paper mentions 'train-test splits' and discusses a 'label rate' with details in the appendix, but it does not explicitly define a separate validation split or its proportion/size. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper states that 'All models are implemented in PyTorch [58] using the Adam optimiser [42]', but it does not specify library versions or a full dependency list. |
| Experiment Setup | Yes | We use a 2-layer MPNN-R of Equation 5 with ReLU as the non-linear activation function. We use 1024-dimensional hidden embeddings with c-dimensional output embeddings, where c is the number of classes as shown in the dataset statistics table in the appendix. We set the hyperedge-dependent vertex weights to one for all vertices, i.e. I_{ue} = 1 if u ∈ e. Please see the appendix for a comparison with different hyperedge-dependent vertex weights. We use the popular (symmetrically-normalised) mean aggregator to aggregate messages from the neighbourhood; we found that the sum and max aggregators perform comparably to the mean aggregator. Please see the appendix for detailed ablation studies. We train our MPNN-R with the cross-entropy loss function on the labelled vertices, following the standard practice of prior works [43, 20, 78]. All models are implemented in PyTorch [58] using the Adam optimiser [42]. A hedged training sketch follows the table. |
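For concreteness, below is a minimal sketch of the 1-recursive hypergraph input described in the Open Datasets row: documents as vertices with bag-of-words features, citations as depth-0 hyperedges over vertices, and authorship as depth-1 hyperedges over depth-0 hyperedges. All names here (`RecursiveHypergraph`, `depth0`, `depth1`) are illustrative assumptions, not taken from the paper or its released code.

```python
from dataclasses import dataclass, field

@dataclass
class RecursiveHypergraph:
    """Hypothetical container for a 1-recursive hypergraph.

    Vertices are documents; depth-0 hyperedges group vertices
    (e.g. citation/reference groups); depth-1 hyperedges group
    depth-0 hyperedges (e.g. authorship)."""
    num_vertices: int
    features: list                                # bag-of-words vector per document
    labels: dict                                  # vertex id -> class, known only for a small label rate
    depth0: list = field(default_factory=list)    # each entry: a set of vertex ids
    depth1: list = field(default_factory=list)    # each entry: a set of depth-0 hyperedge ids

# Toy example: 3 documents, one citation hyperedge over all of them,
# and one author (depth-1 hyperedge) covering that citation hyperedge.
hg = RecursiveHypergraph(
    num_vertices=3,
    features=[[1, 0, 1], [0, 1, 1], [1, 1, 0]],   # toy bag-of-words
    labels={0: 0},                                # label rate: 1 of 3 documents labelled
    depth0=[{0, 1, 2}],                           # depth-0 hyperedge (references)
    depth1=[{0}],                                 # depth-1 hyperedge (authorship)
)
```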
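And a minimal PyTorch sketch of the training configuration reported in the Experiment Setup row: 2 layers, ReLU, 1024-dimensional hidden embeddings, a symmetrically-normalised mean aggregator, cross-entropy loss on the labelled vertices, and the Adam optimiser. The paper's Equation 5 is not reproduced in this report, so a generic normalised aggregation matrix `a_norm` stands in for the hypergraph-specific message-passing operator; the learning rate and epoch count are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLayerMPNN(nn.Module):
    """2-layer message-passing net mirroring the reported setup:
    ReLU activation, 1024-dim hidden embeddings, c-dim output."""
    def __init__(self, in_dim: int, num_classes: int, hidden: int = 1024):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden)
        self.lin2 = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor, a_norm: torch.Tensor) -> torch.Tensor:
        # a_norm: symmetrically-normalised aggregation matrix
        # (the "mean" aggregator above); how it is derived from the
        # recursive hypergraph is model-specific and omitted here.
        h = F.relu(self.lin1(a_norm @ x))
        return self.lin2(a_norm @ h)        # logits, one per class

def train(model, x, a_norm, labels, train_mask, epochs: int = 200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr is an assumption
    for _ in range(epochs):
        opt.zero_grad()
        logits = model(x, a_norm)
        # cross-entropy on the labelled vertices only, per the paper
        loss = F.cross_entropy(logits[train_mask], labels[train_mask])
        loss.backward()
        opt.step()
    return model
```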