Neural Graph Generation from Graph Statistics

Authors: Kiarash Zahirnia, Yaochen Hu, Mark Coates, Oliver Schulte

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical evaluation on 8 datasets indicates that our deep GGM generates more realistic graphs than traditional non-neural GGMs when both are learned from graph statistics only. We also compare our deep GGM, trained on statistics only, with state-of-the-art deep GGMs trained on the entire adjacency matrix. The results show that graph statistics are often sufficient to build a competitive deep GGM that generates realistic graphs while protecting local privacy. (A sketch of such graph statistics follows the table.)
Researcher Affiliation | Collaboration | Kiarash Zahirnia¹, Yaochen Hu², Mark Coates³, Oliver Schulte¹ (¹Simon Fraser University, ²Huawei Noah's Ark Lab, ³McGill University)
Pseudocode | No | The paper describes its methods and models with equations and figures but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | The implementation and datasets are provided in the GenStat repository (https://github.com/kiarashza/GenStat.git) and explained in Appendix Section 7.9.
Open Datasets | Yes | We evaluate our model on 3 synthetic datasets: Lobster trees (Lobster), Grid [88], and Triangle-Grid [89], all of which consist of graphs with regular structures. We also evaluate on 5 real datasets: ogbg-molbbbp (ogbg-mol) [27], Protein [12], IMDb [84], PTC [73], and MUTAG [8]. The implementation and datasets are provided in the GenStat repository (https://github.com/kiarashza/GenStat.git) and explained in Appendix Section 7.9.
Dataset Splits | Yes | We randomly partitioned the datasets into train (70%), validation (10%), and test (20%) sets [6, 88, 89]. (A split sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details, such as GPU or CPU models, processor types, or memory amounts, used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiments.
Experiment Setup | No | The paper mentions general training aspects, e.g., 'all models are trained with one random weight initialization to keep the training time feasible', and refers to the appendix for more details ('See Appendix Section 7.2 for the neural network design and more details concerning implementation.'), but it does not provide concrete hyperparameter values or detailed system-level training configurations in the main text.
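
For context on the statistics-only training signal cited in the Research Type row, here is a minimal sketch of the kind of permutation-invariant graph statistics such a model could be trained on. The statistic set, function name, and the random-lobster example are illustrative assumptions, not the paper's exact specification.

import networkx as nx
import numpy as np

def summary_statistics(G: nx.Graph) -> dict:
    """Collect a few permutation-invariant summaries of G."""
    degrees = [d for _, d in G.degree()]
    return {
        "num_nodes": G.number_of_nodes(),
        "num_edges": G.number_of_edges(),
        "degree_histogram": np.bincount(degrees),  # count of nodes at each degree value
        "avg_clustering": nx.average_clustering(G),
        "triangle_count": sum(nx.triangles(G).values()) // 3,  # each triangle is counted at 3 nodes
    }

if __name__ == "__main__":
    # Small random lobster graph, echoing the paper's Lobster dataset.
    G = nx.random_lobster(10, 0.5, 0.5, seed=0)
    for name, value in summary_statistics(G).items():
        print(name, value)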
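
Similarly, a minimal sketch of the 70%/10%/20% random partition reported in the Dataset Splits row, assuming a simple seeded shuffle; the authors' actual splitting code in the GenStat repository may differ.

import random

def split_dataset(graphs, train=0.7, val=0.1, seed=0):
    """Shuffle indices with a fixed seed and cut into train/val/test sets."""
    idx = list(range(len(graphs)))
    random.Random(seed).shuffle(idx)
    n_train = int(train * len(graphs))
    n_val = int(val * len(graphs))
    train_set = [graphs[i] for i in idx[:n_train]]
    val_set = [graphs[i] for i in idx[n_train:n_train + n_val]]
    test_set = [graphs[i] for i in idx[n_train + n_val:]]  # remaining ~20%
    return train_set, val_set, test_set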