Evaluating Graph Generative Models with Contrastively Learned Features

Authors: Hamed Shirzad, Kaveh Hassani, Danica J. Sutherland

NeurIPS 2022

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate that Graph Substructure Networks (GSNs), which in a way combine both approaches, are better at distinguishing the distances between graph datasets. The code used for this project is available in: https://github.com/hamed1375/Self-Supervised-Models-for-GGM-Evaluation." (Section 5, "Experimental Results")
Researcher Affiliation | Collaboration | Hamed Shirzad (UBC, shirzad@cs.ubc.ca); Kaveh Hassani (Autodesk AI Lab, kaveh.hassani@autodesk.com); Danica J. Sutherland (UBC & Amii, dsuth@cs.ubc.ca)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "The code used for this project is available in: https://github.com/hamed1375/Self-Supervised-Models-for-GGM-Evaluation."
Open Datasets | Yes | "Datasets: Following [46], we use six diverse datasets (three synthetic and three real-world) that are frequently used in the literature, including: (1) Lobster [9]... (2) 2D Grid graphs [9, 25, 52]... (3) Proteins [11]... (4) 3-hop Ego networks [52] extracted from the CiteSeer network [39]... (5) Community [52] graphs generated by the Erdős–Rényi model [12], and (6) Zinc [22]..."
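The Community dataset above is built from Erdős–Rényi graphs. As a hedged illustration only (not the paper's exact generator or parameters), sampling a single G(n, p) adjacency matrix can be sketched with NumPy; `n`, `p`, and `seed` are illustrative parameters:

```python
import numpy as np

def erdos_renyi_adjacency(n, p, seed=0):
    """Sample the adjacency matrix of an Erdos-Renyi graph G(n, p): each of
    the n*(n-1)/2 possible undirected edges is included independently with
    probability p. Illustrative sketch; the paper's Community graphs use
    their own generation settings."""
    rng = np.random.default_rng(seed)
    draws = rng.random((n, n)) < p          # Bernoulli draw for every pair
    A = np.triu(draws, k=1).astype(int)     # keep strict upper triangle only
    return A + A.T                          # symmetrize: undirected, no self-loops

A = erdos_renyi_adjacency(20, 0.3)
```

The result is a symmetric 0/1 matrix with an empty diagonal, which is the usual input representation for graph-level perturbation experiments.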
Dataset Splits | No | The paper describes dataset usage (e.g., "We randomly sample 1000 graphs from Zinc dataset"), and mentions "a held-out set of the real data (half of the original dataset)" in the context of benchmarking generative models, but does not specify a general train/validation/test split for its own model training and evaluation.
Hardware Specification | No | "This research was enabled in part by support, computational resources, and services provided by the Canada CIFAR AI Chairs program, the Natural Sciences and Engineering Research Council of Canada, WestGrid, and the Digital Research Alliance of Canada."
Software Dependencies | No | The paper mentions software such as the GIN architecture, GraphCL, and InfoGraph, but does not provide specific version numbers for these or other dependencies.
Experiment Setup | Yes | "In all of the experiments we train the model on the full dataset in a self-supervised manner. Following [46], we take the dataset, make perturbations on it, and observe the trend in the measurements as the perturbation degree increases... We explicitly enforce the layers of our GIN to have a controlled Lipschitz constant... we fix the λ Lipschitz factor to 1.0 in the experiments."
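The setup enforces a controlled Lipschitz constant on the GIN layers with λ = 1.0. A minimal sketch of one common way to do this, rescaling each weight matrix by its spectral norm (the paper's exact mechanism is not specified in the excerpt above, so this is an assumption):

```python
import numpy as np

def project_lipschitz(W, lam=1.0):
    """Rescale a weight matrix so its spectral norm -- the layer's Lipschitz
    constant with respect to the 2-norm -- is at most lam. This is one
    standard projection for Lipschitz-controlled MLP layers; the paper's GIN
    layers may use a different enforcement scheme."""
    sigma = np.linalg.norm(W, 2)            # largest singular value of W
    return W if sigma <= lam else W * (lam / sigma)

# Apply the projection to a random layer, mimicking the lambda = 1.0 setting.
W = np.random.default_rng(0).normal(size=(64, 64))
W_proj = project_lipschitz(W, lam=1.0)
```

Applying such a projection after each optimizer step keeps every layer 1-Lipschitz, so the whole network's Lipschitz constant is bounded by the product of the per-layer constants.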