Subgraph Pattern Neural Networks for High-Order Graph Evolution Prediction
Authors: Changping Meng, S Chandra Mouli, Bruno Ribeiro, Jennifer Neville
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results show that our proposed method significantly outperforms other state-of-the-art methods designed for static and/or single node/link prediction tasks. |
| Researcher Affiliation | Academia | Changping Meng, S Chandra Mouli, Bruno Ribeiro, Jennifer Neville Department of Computer Science Purdue University, West Lafayette, IN {meng40, chandr}@purdue.edu, {ribeiro, neville}@cs.purdue.edu |
| Pseudocode | No | The paper describes the model architecture and steps in detail but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | Source code is available at https://github.com/PurdueMINDS/SPNN. |
| Open Datasets | Yes | We use two representative heterogeneous graph datasets with temporal information. DBLP (Sun et al. 2011b) contains scientific papers... Facebook is a sample of the Facebook users from one university. ... WordNet is a knowledge graph that groups words into synonyms... The WN18 dataset is a subset of WordNet... |
| Dataset Splits | Yes | 20% of the training examples are separated as validation for early stopping. (See the hold-out sketch after the table.) |
| Hardware Specification | Yes | The server is an Intel E5 2.60GHz CPU with 512 GB of memory. |
| Software Dependencies | No | The paper states 'We implement SPNN in Theano' but does not provide version numbers for Theano or any other software dependencies. |
| Experiment Setup | Yes | The loss function is the negative log likelihood plus L1 and L2 regularization penalties over the parameters, both with regularization penalty 0.001. We train SPNN using stochastic gradient descent over a maximum of 30000 epochs and learning rate 0.01. (See the training sketch after the table.) |
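
For concreteness, the hold-out in the Dataset Splits row can be sketched in a few lines. This is a minimal sketch assuming a NumPy workflow; the function name, the fixed seed, and the toy data are illustrative assumptions, not taken from the SPNN repository.

```python
import numpy as np

def train_val_split(X, y, val_frac=0.20, seed=0):
    """Reserve a fraction of the training examples as a validation set.

    Mirrors the paper's setup: 20% of the training examples are
    separated as validation for early stopping.
    """
    rng = np.random.default_rng(seed)      # fixed seed is an assumption
    idx = rng.permutation(len(X))          # shuffle example indices
    n_val = int(len(X) * val_frac)         # 20% held out
    val_idx, train_idx = idx[:n_val], idx[n_val:]
    return X[train_idx], y[train_idx], X[val_idx], y[val_idx]

# Hypothetical usage on toy data:
X = np.random.randn(1000, 16)
y = np.random.randint(0, 2, size=1000)
X_tr, y_tr, X_val, y_val = train_val_split(X, y)
```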
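
The Experiment Setup row fixes the objective (negative log-likelihood plus L1 and L2 penalties, both 0.001) and the optimizer (SGD, learning rate 0.01, at most 30000 epochs). The paper implements SPNN in Theano; since no version is reported, the sketch below uses plain NumPy with a stand-in logistic model rather than the actual SPNN architecture. Only the penalty weights, learning rate, and epoch budget come from the paper; the model, the early-stopping patience, and the per-example update scheme are assumptions.

```python
import numpy as np

L1_PENALTY = 1e-3   # from the paper
L2_PENALTY = 1e-3   # from the paper
LR = 0.01           # from the paper
MAX_EPOCHS = 30000  # from the paper

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, X, y):
    """Negative log-likelihood plus L1 and L2 penalties over the parameters."""
    p = sigmoid(X @ w)
    eps = 1e-12  # guard against log(0)
    nll = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    return nll + L1_PENALTY * np.abs(w).sum() + L2_PENALTY * (w ** 2).sum()

def sgd_train(X_tr, y_tr, X_val, y_val, patience=100, seed=0):
    """SGD with early stopping on the 20% validation split."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X_tr.shape[1])
    best_val, best_w, stale = np.inf, w.copy(), 0
    for epoch in range(MAX_EPOCHS):
        for i in rng.permutation(len(X_tr)):       # one SGD pass per epoch
            xi, yi = X_tr[i], y_tr[i]
            grad = (sigmoid(xi @ w) - yi) * xi     # per-example NLL gradient
            grad += L1_PENALTY * np.sign(w) + 2 * L2_PENALTY * w
            w -= LR * grad
        val = loss(w, X_val, y_val)                # early-stopping check
        if val < best_val:
            best_val, best_w, stale = val, w.copy(), 0
        else:
            stale += 1
            if stale >= patience:                  # patience value is an assumption
                break
    return best_w
```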