GMNN: Graph Markov Neural Networks
Authors: Meng Qu, Yoshua Bengio, Jian Tang
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate the performance of GMNN on three tasks, including object classification, unsupervised node representation learning, and link classification. |
| Researcher Affiliation | Academia | 1 Montréal Institute for Learning Algorithms (MILA), 2 University of Montréal, 3 Canadian Institute for Advanced Research (CIFAR), 4 HEC Montréal. |
| Pseudocode | Yes | Algorithm 1 Optimization Algorithm |
| Open Source Code | No | The paper does not provide any statement or link regarding the availability of open-source code for the described methodology. |
| Open Datasets | Yes | For object classification, we follow existing studies (Yang et al., 2016; Kipf & Welling, 2017; Veličković et al., 2018) and use three benchmark datasets from Sen et al. (2008) for evaluation, including Cora, Citeseer, Pubmed. ... For link classification, we construct two datasets from the Bitcoin Alpha and the Bitcoin OTC datasets (Kumar et al., 2016; 2018) respectively. |
| Dataset Splits | Yes | In each dataset, 20 objects from each class are treated as labeled objects, and we use the same data partition as in Yang et al. (2016). Accuracy is used as the evaluation metric. ... Table 1 (dataset statistics) lists, e.g., Cora (OC / NRL): 2,708 nodes, 5,429 edges, 1,433 features, 7 classes, with 140 training / 500 validation / 1,000 test objects. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., CPU/GPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions optimizers (RMSProp, Adam), activation functions (ReLU, softmax), and regularization (Dropout) along with their references, but does not provide specific version numbers for any software libraries or frameworks used. |
| Experiment Setup | Yes | For GMNN, pφ and qθ are composed of two graph convolutional layers with 16 hidden units and the ReLU activation function (Nair & Hinton, 2010), followed by the softmax function, as suggested in Kipf & Welling (2017). Dropout (Srivastava et al., 2014) is applied to the network inputs with p = 0.5. We use the RMSProp optimizer (Tieleman & Hinton, 2012) during training, with the initial learning rate as 0.05 and weight decay as 0.0005. In each iteration, both networks are trained for 100 epochs. |
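
The reported setup (two 16-unit graph convolutional layers with ReLU and softmax, dropout of 0.5 on the inputs, RMSProp with learning rate 0.05 and weight decay 0.0005, 100 epochs per network per iteration) can be summarized in a minimal PyTorch sketch. This is not the authors' released code; the names `GCNLayer`, `TwoLayerGCN`, and `train_one_network`, and the dense normalized-adjacency formulation, are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' implementation) of the reported GMNN setup:
# two GCN layers, 16 hidden units, ReLU, softmax output, input dropout 0.5,
# RMSProp with lr = 0.05 and weight decay = 5e-4, 100 epochs per iteration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """One graph-convolutional layer: H' = A_hat @ H @ W (Kipf & Welling, 2017)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj_norm):
        # adj_norm: symmetrically normalized adjacency with self-loops (dense here for brevity)
        return adj_norm @ self.linear(x)


class TwoLayerGCN(nn.Module):
    """Architecture used for both networks (p_phi and q_theta) in the reported setup."""
    def __init__(self, in_dim, num_classes, hidden_dim=16, input_dropout=0.5):
        super().__init__()
        self.dropout = nn.Dropout(input_dropout)  # dropout applied to the network inputs
        self.gc1 = GCNLayer(in_dim, hidden_dim)
        self.gc2 = GCNLayer(hidden_dim, num_classes)

    def forward(self, x, adj_norm):
        h = F.relu(self.gc1(self.dropout(x), adj_norm))
        return F.log_softmax(self.gc2(h, adj_norm), dim=-1)


def train_one_network(model, x, adj_norm, labels, train_mask, epochs=100):
    """Trains one of the two networks for 100 epochs, as in each GMNN iteration."""
    opt = torch.optim.RMSprop(model.parameters(), lr=0.05, weight_decay=5e-4)
    for _ in range(epochs):
        opt.zero_grad()
        log_probs = model(x, adj_norm)
        loss = F.nll_loss(log_probs[train_mask], labels[train_mask])
        loss.backward()
        opt.step()
    return model
```

The dense adjacency matrix is used only to keep the sketch self-contained; at the scale of Cora, Citeseer, or Pubmed, a sparse adjacency (or a library such as PyTorch Geometric) would be the more practical choice.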