Graph Generation with $K^2$-trees

Authors: Yunhui Jang, Dongwoo Kim, Sungsoo Ahn

ICLR 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Finally, we extensively evaluate our algorithm on four general and two molecular graph datasets to confirm its superiority for graph generation." |
| Researcher Affiliation | Academia | Yunhui Jang, Dongwoo Kim, Sungsoo Ahn; Pohang University of Science and Technology; {uni5510, dongwookim, sungsoo.ahn}@postech.ac.kr |
| Pseudocode | Yes | Appendix A "Construction of a K2 tree from the graph" (Algorithm 1: K2 tree construction); Appendix B "Constructing a graph from the K2 tree" (Algorithm 2: Graph G construction); Appendix C "Generalizing K2 tree to attributed graphs" (Algorithm 3: Featured K2 tree construction; Algorithm 4: Featured graph G construction) |
| Open Source Code | Yes | "All experimental code related to this paper is available at https://github.com/yunhuijang/HGGT." |
| Open Datasets | Yes | "To validate the effectiveness of our algorithm, we test our method on popular graph generation benchmarks across six graph datasets: Community, Enzymes (Schomburg et al., 2004), Grid, Planar, ZINC (Irwin et al., 2012), and QM9 (Ramakrishnan et al., 2014)." |
| Dataset Splits | Yes | "We used the same split with GDSS (Jo et al., 2022) for Community-small, Enzymes, and Grid datasets. Otherwise, we used the same split with SPECTRE (Luo et al., 2022) for the Planar dataset." |
| Hardware Specification | Yes | "We conduct all the experiments using a single RTX 3090 GPU." |
| Software Dependencies | No | The paper states, "We used PyTorch (Paszke et al., 2019) to implement HGGT..." but does not provide specific version numbers for PyTorch or any other software libraries, which is necessary for reproducibility. |
| Experiment Setup | Yes | "We fix k = 2 and perform the hyperparameter search to choose the best learning rate in {0.0001, 0.0002, 0.0005, 0.001} and the best dropout rate in {0, 0.1}. We select the model with the lowest average MMD of three graph statistics: degree, clustering coefficient, and orbit count. Finally, we provide the hyperparameters used in the experiment in Table 6. ... We fix k = 2 and perform the hyperparameter search to choose the best number of layers in {2, 3} and select the model with the best validity. In addition, we provide the hyperparameters used in the experiment in Table 6." |
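For intuition about the representation the paper builds on, here is a minimal sketch of K²-tree construction: the adjacency matrix is recursively split into k×k submatrices, an all-zero submatrix is pruned to a 0 leaf, and nonzero submatrices are expanded until single cells are reached. The function name `k2_tree` and the nested-list output format are illustrative choices, not the paper's Algorithm 1, and the sketch assumes the matrix side length is a power of k (real implementations pad with zeros).

```python
import numpy as np

def k2_tree(adj, k=2):
    """Recursively build a K^2-tree from a dense adjacency matrix.

    Returns a nested list: each internal node holds k*k children in
    row-major order; a leaf is 0 (all-zero submatrix) or 1 (an edge).
    Assumes adj is square with side length a power of k.
    """
    n = adj.shape[0]
    if n == 1:
        return int(adj[0, 0])
    if not adj.any():  # empty submatrix: prune the whole subtree to a 0 leaf
        return 0
    step = n // k
    return [k2_tree(adj[i * step:(i + 1) * step, j * step:(j + 1) * step], k)
            for i in range(k) for j in range(k)]

# 4x4 adjacency matrix of the undirected path graph 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
tree = k2_tree(A, k=2)
```

Because every all-zero quadrant collapses to a single 0 leaf, sparse graphs yield trees far smaller than the full n² adjacency matrix, which is what makes the representation attractive as a compact sequence target for generation.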