HiGen: Hierarchical Graph Generative Networks

Authors: Mahdi Karami

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical studies demonstrate the effectiveness and scalability of our proposed generative model, achieving state-of-the-art performance in terms of graph quality across various benchmark datasets. In our empirical studies, we compare the proposed hierarchical graph generative network against state-of-the-art autoregressive models: GRAN and GraphRNN; diffusion models: DiGress (Vignac et al., 2022), GDSS (Jo et al., 2022), GraphARM (Kong et al., 2023), and EDGE (Chen et al., 2023); and a GAN-based model: SPECTRE (Martinkus et al., 2022), on a range of synthetic and real datasets of various sizes.
Researcher Affiliation | Academia | Mahdi Karami, mahdi.karami@ualberta.ca
Pseudocode | Yes | Algorithm 1 (Training step of HiGen) and Algorithm 2 (Sampling from HiGen)
Open Source Code | Yes | Code available at https://github.com/Karami-m/HiGen_main
Open Datasets | Yes | Datasets: We used 5 different benchmark graph datasets: (1) the synthetic Stochastic Block Model (SBM) dataset...; (2) the Protein dataset, including 918 protein graphs... (Dobson & Doig, 2003); (3) the Enzyme dataset with 587 protein graphs... (Schomburg et al., 2004); (4) the Ego dataset... (Sen et al., 2008); and (5) the Point Cloud dataset... (Neumann et al., 2013).
Dataset Splits | Yes | An 80%-20% split was randomly created for training and testing, and 20% of the training data was used for validation purposes. (See the split sketch below the table.)
Hardware Specification | Yes | The experiments for the Enzyme and Stochastic Block Model datasets were conducted on a MacBook Air with an M2 processor and 16GB of RAM, while the remaining datasets were trained using an NVIDIA L4 Tensor Core GPU with 24GB of RAM as an accelerator. The sampling processes were carried out on a server machine equipped with a 32-core AMD Rome 7532 CPU and 128GB of RAM.
Software Dependencies | No | The paper mentions the use of the Adam optimizer but does not specify the software libraries used (e.g., PyTorch, TensorFlow) or their versions.
Experiment Setup | Yes | For training, the HiGen models used the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 5e-4 and default settings for β1 (0.9), β2 (0.999), and ϵ (1e-8). In our experiments, the GraphGPS models consisted of 8 layers... The hidden dimensions were set to 64 for the Protein, Ego, and Point Cloud datasets, and 128 for the Stochastic Block Model and Enzyme datasets. The number of mixtures was set to K=20. (See the optimizer sketch below the table.)
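
To make the Dataset Splits row concrete, here is a minimal sketch of an 80%/20% train/test split with 20% of the training portion held out for validation. The function name, seed value, and use of plain Python are illustrative assumptions; they are not taken from the paper or its released code.

```python
import random

def split_dataset(graphs, seed=1234):
    """Hypothetical helper: 80%/20% train/test split, then 20% of the
    training portion held out for validation (name and seed are assumptions)."""
    rng = random.Random(seed)
    graphs = list(graphs)
    rng.shuffle(graphs)

    # 20% of all graphs go to the test set.
    n_test = int(0.2 * len(graphs))
    test, train_full = graphs[:n_test], graphs[n_test:]

    # 20% of the remaining training graphs are used for validation.
    n_val = int(0.2 * len(train_full))
    val, train = train_full[:n_val], train_full[n_val:]
    return train, val, test

# Example usage with dummy identifiers standing in for graphs:
train, val, test = split_dataset(range(100))
print(len(train), len(val), len(test))  # 64 16 20
```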
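The Experiment Setup row reports the optimizer hyperparameters, but as noted under Software Dependencies the paper does not name a framework. The sketch below assumes PyTorch purely for illustration; the placeholder model is a stand-in, not the HiGen architecture.

```python
import torch

# Placeholder module standing in for a HiGen network (assumption for illustration).
model = torch.nn.Linear(64, 64)

# Adam settings reported in the paper: lr = 5e-4 with default beta and epsilon values.
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=5e-4,
    betas=(0.9, 0.999),
    eps=1e-8,
)
```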