Graph Normalizing Flows

Authors: Jenny Liu, Aviral Kumar, Jimmy Ba, Jamie Kiros, Kevin Swersky

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | On supervised tasks, graph normalizing flows perform similarly to message passing neural networks, but at a significantly reduced memory footprint, allowing them to scale to larger graphs. In the unsupervised case, we combine graph normalizing flows with a novel graph auto-encoder to create a generative model of graph structures. Our model is permutation-invariant, generating entire graphs with a single feed-forward pass, and achieves competitive results with the state-of-the-art auto-regressive models, while being better suited to parallel computing architectures. We experiment on two types of tasks. Transductive learning tasks consist of semi-supervised document classification in citation networks (Cora and Pubmed datasets)... Inductive learning tasks consist of PPI (Protein-Protein Interaction dataset) [31] and the QM9 molecule property prediction dataset [21].
Researcher Affiliation | Collaboration | Jenny Liu (University of Toronto, Vector Institute, jyliu@cs.toronto.edu); Aviral Kumar (UC Berkeley, aviralk@berkeley.edu); Jimmy Ba (University of Toronto, Vector Institute, jba@cs.toronto.edu); Jamie Kiros (Google Research, kiros@google.com); Kevin Swersky (Google Research, kswersky@google.com)
Pseudocode | No | The paper describes the GRevNets architecture and message passing steps using mathematical equations and textual descriptions (e.g., 'Figure 1 depicts the procedure in detail' and equations (3) and (4)), but it does not include explicitly labeled pseudocode or algorithm blocks. (A minimal sketch of the coupling step described by equations (3) and (4) is given after this table.)
Open Source Code | No | The paper mentions using scripts from the GraphRNN codebase [29] (https://github.com/JiaxuanYou/graph-generation) for baselines, but it does not provide a statement or link for open-source code of the graph normalizing flows methodology presented in this paper.
Open Datasets | Yes | Datasets/Tasks: We experiment on two types of tasks. Transductive learning tasks consist of semi-supervised document classification in citation networks (Cora and Pubmed datasets)... Inductive learning tasks consist of PPI (Protein-Protein Interaction dataset) [31] and the QM9 molecule property prediction dataset [21]. We compare our graph generation model on two datasets, COMMUNITY-SMALL and EGO-SMALL from GraphRNN [30].
Dataset Splits | No | '1% train' uses 1% of the data for training to replicate the settings in [18]. 80% of the data was used for training and the remainder for testing. The paper specifies training and testing splits for the different experiments, but it does not explicitly provide a separate validation split (percentage or count) for all experiments.
Hardware Specification | No | The paper mentions '12G GPU machines' in the context of the memory footprint analysis, but it does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running the experiments.
Software Dependencies | No | The paper mentions using the GraphRNN codebase, which is typically Python-based, but it does not specify exact versions of programming languages, libraries, or other software dependencies used for the authors' own implementation.
Experiment Setup | Yes | where C is a temperature hyperparameter, set to 10 in our experiments. In this case, we generate node features H using random Gaussian variables h_i ~ N(0, σ²I), where we use σ² = 0.3. The GNN consists of 10 MP steps... Our GNF consists of 10 MP steps... The number of MP steps is fixed to 4. The model was trained for 350k steps, as in [6]. (A sketch illustrating the quoted C and σ² values is given below.)
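
Because the paper presents the GRevNet update only through equations (3) and (4), here is a minimal NumPy sketch of one reversible message-passing (coupling) step, referenced from the Pseudocode row above. This is an illustration of the affine-coupling structure, not the authors' code: the message function mp and the weight matrices WF1, WF2, WG1, WG2 are toy stand-ins for the learned message-passing networks.

    import numpy as np

    def mp(H, A, W):
        # Toy message-passing function: aggregate neighbor features over the
        # adjacency A, then apply a linear map and a nonlinearity. Stands in
        # for the learned message functions of the paper.
        return np.tanh(A @ H @ W)

    def grevnet_step(H1, H2, A, params, inverse=False):
        # One reversible coupling step in the spirit of equations (3)-(4):
        # each half of the node features is updated by an affine transform
        # whose scale and shift are message-passing functions of the other half.
        WF1, WF2, WG1, WG2 = params
        if not inverse:
            H1 = H1 * np.exp(mp(H2, A, WF1)) + mp(H2, A, WF2)
            H2 = H2 * np.exp(mp(H1, A, WG1)) + mp(H1, A, WG2)
        else:
            # The inverse undoes the two half-steps in reverse order.
            H2 = (H2 - mp(H1, A, WG2)) * np.exp(-mp(H1, A, WG1))
            H1 = (H1 - mp(H2, A, WF2)) * np.exp(-mp(H2, A, WF1))
        return H1, H2

    # Tiny example: 4 nodes, 8-dim features split into two 4-dim halves.
    rng = np.random.default_rng(0)
    A = np.array([[0., 1., 1., 0.],
                  [1., 0., 1., 0.],
                  [1., 1., 0., 1.],
                  [0., 0., 1., 0.]])
    H1, H2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
    params = [rng.normal(scale=0.1, size=(4, 4)) for _ in range(4)]

    Z1, Z2 = grevnet_step(H1, H2, A, params)                 # forward
    R1, R2 = grevnet_step(Z1, Z2, A, params, inverse=True)   # exact inverse
    print(np.allclose(R1, H1), np.allclose(R2, H2))          # True True

The exact invertibility is what allows a GNF to avoid storing intermediate activations during training, which is the memory advantage claimed for the supervised experiments.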
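
The Experiment Setup row quotes two generation-side hyperparameters: the temperature C = 10 and Gaussian node features with σ² = 0.3. The sketch below, again hypothetical rather than the released code, shows how such values could fit together: random Gaussian node features and an edge decoder that squashes pairwise squared distances through a temperature-C sigmoid (the exact form of the decoder is an assumption here).

    import numpy as np

    def decode_edges(X, C=10.0):
        # Soft adjacency from pairwise distances: two nodes whose embeddings
        # are close get a high edge probability. The "distance minus 1"
        # threshold inside the sigmoid is an assumed decoder form; C is the
        # temperature hyperparameter quoted from the paper.
        sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        probs = 1.0 / (1.0 + np.exp(C * (sq_dists - 1.0)))
        np.fill_diagonal(probs, 0.0)  # no self-loops
        return probs

    rng = np.random.default_rng(0)
    n_nodes, dim = 6, 2
    sigma2 = 0.3  # variance of the random node features, as quoted above

    # Node features h_i ~ N(0, sigma^2 I). In the full auto-encoder these feed
    # the encoder, and the decoder acts on the resulting embeddings; here the
    # decoder is applied directly to keep the sketch self-contained.
    H = rng.normal(scale=np.sqrt(sigma2), size=(n_nodes, dim))

    edge_probs = decode_edges(H, C=10.0)
    adjacency = (edge_probs > 0.5).astype(int)
    print(adjacency)

A high temperature such as C = 10 pushes the edge probabilities close to 0 or 1, so thresholding at 0.5 yields a near-deterministic adjacency from the node embeddings.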