Editing Partially Observable Networks via Graph Diffusion Models

Authors: Puja Trivedi, Ryan A. Rossi, David Arbour, Tong Yu, Franck Dernoncourt, Sungchul Kim, Nedim Lipka, Namyong Park, Nesreen K. Ahmed, Danai Koutra

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we evaluate the ability of graph diffusion models to perform editing tasks and demonstrate the benefits of using SGDM. Specifically, we seek to answer the following research questions: (RQ1) Is there a benefit to SAMPLING on the expansion and denoising tasks? (RQ2) Is there a benefit to using GLOBAL-CONTEXT in our proposed graph editing tasks? (RQ3) How do different GDDM backbones affect the performance of SGDM? Below, we first describe our evaluation setup.
Researcher Affiliation | Collaboration | Puja Trivedi 1, Ryan A. Rossi 2, David Arbour 2, Tong Yu 2, Franck Dernoncourt 2, Sungchul Kim 2, Nedim Lipka 2, Namyong Park 3, Nesreen K. Ahmed 4, Danai Koutra 1; 1 CSE Dept., University of Michigan, Ann Arbor; 2 Adobe Research Inc.; 3 Carnegie Mellon University; 4 Intel AI Research. Correspondence to: Puja Trivedi <pujat@umich.edu>.
Pseudocode | Yes | Algorithm 1 (SGDM: Subgraph-based Diffusion ...), Algorithm 2 (Editing with SGDM ...), Algorithm 3 (Large Graph Generation with SGDM). An illustrative sketch of a subgraph-based editing loop follows the table.
Open Source Code | Yes | Our code can be accessed at https://github.com/pujacomputes/sgdm.
Open Datasets | Yes | For the editing tasks, we consider 3 large, single networks, BA-Shapes, PolBlogs, and CORA (Table 1), and corrupt them to create the incomplete, noisy observed graphs. ... BA-Shapes dataset (Ying et al., 2019). A sketch of one such corruption scheme follows the table.
Dataset Splits | No | The paper mentions 'train' and 'test' in the context of model training and evaluation but does not specify a separate 'validation' split or its size/percentage. It also mentions a 'standard split' in related work, but not for its own experimental setup.
Hardware Specification | Yes | We trained all models using Tesla T4s (16GB GPU Memory, 124GB RAM).
Software Dependencies | No | The paper mentions 'PyTorch Geometric (Fey & Lenssen, 2019)' and 'PyTorch' but does not specify their version numbers. It also refers to official code by name (e.g., 'DiGress', 'EDGE', 'GDSS') but without associated version numbers for reproducibility. A version-capture snippet addressing this gap follows the table.
Experiment Setup | Yes | We trained all models using Tesla T4s (16GB GPU Memory, 124GB RAM). To ensure fair comparison across methods and prevent overfitting to a corrupted graph, all models are trained for at most 24 hours or 5000 epochs, whichever came first. ... Hyper-parameters and architectures suggested by each method's authors are used. A sketch of this stopping rule follows the table.
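For the Pseudocode row: the paper's Algorithms 1-3 are not reproduced here, so the following is only a minimal sketch of the general shape a subgraph-based diffusion editing loop could take. The `model.denoise` call and the ego-network sampling policy are hypothetical placeholders, not SGDM's actual API.

```python
import random
import networkx as nx

def edit_with_subgraph_diffusion(graph, model, num_rounds=10, radius=2):
    """Hypothetical sketch of a subgraph-based diffusion editing loop.

    Repeatedly samples a local subgraph from the observed (corrupted)
    graph, denoises it with a pretrained graph diffusion backbone, and
    writes the denoised edges back into the working graph.
    """
    edited = graph.copy()
    for _ in range(num_rounds):
        # Placeholder sampling policy: an ego-network around a random node.
        center = random.choice(list(edited.nodes))
        sub = nx.ego_graph(edited, center, radius=radius)
        # Hypothetical call: denoise the sampled subgraph with the backbone.
        denoised = model.denoise(sub)
        # Replace the sampled edges with the denoised ones.
        edited.remove_edges_from(list(sub.edges))
        edited.add_edges_from(denoised.edges)
    return edited
```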
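For the Open Datasets row: the paper corrupts each clean network to obtain an incomplete, noisy observed graph, but the exact corruption protocol is not quoted above. The sketch below illustrates one plausible scheme, random edge deletions plus random spurious insertions; `drop_frac` and `add_frac` are assumed parameters, not values from the paper.

```python
import random
import networkx as nx

def corrupt_graph(graph, drop_frac=0.1, add_frac=0.1, seed=0):
    """Illustrative corruption: drop a fraction of true edges and add a
    matching number of spurious ones, yielding an incomplete, noisy
    observed graph. The paper's exact protocol may differ."""
    rng = random.Random(seed)
    observed = graph.copy()
    edges = list(observed.edges)
    # Delete a random fraction of the true edges (missing edges).
    observed.remove_edges_from(rng.sample(edges, int(drop_frac * len(edges))))
    # Insert spurious edges between random non-adjacent node pairs (noise).
    nodes = list(observed.nodes)
    added, target = 0, int(add_frac * len(edges))
    while added < target:
        u, v = rng.sample(nodes, 2)
        if not observed.has_edge(u, v):
            observed.add_edge(u, v)
            added += 1
    return observed
```

For example, `corrupt_graph(nx.karate_club_graph())` returns a corrupted copy of a toy network.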
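For the Software Dependencies row: since no version numbers are reported, a run script could at least record the environment it executes in alongside the results. A minimal capture, assuming only that `torch` and `torch_geometric` are installed:

```python
# Log the library versions of the current environment; the paper itself
# does not report these, so whatever prints here is environment-specific.
import torch
import torch_geometric

print("torch:", torch.__version__)
print("torch_geometric:", torch_geometric.__version__)
print("cuda available:", torch.cuda.is_available())
```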
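For the Experiment Setup row: the stated stopping rule is "at most 24 hours or 5000 epochs, whichever came first". A minimal sketch of that budget, where `train_one_epoch` is a hypothetical callable that runs a single training epoch:

```python
import time

MAX_SECONDS = 24 * 60 * 60  # 24-hour wall-clock cap
MAX_EPOCHS = 5000           # epoch cap; whichever limit is hit first stops training

def train_with_budget(train_one_epoch):
    """Run training until either the epoch or the wall-clock budget is spent."""
    start = time.monotonic()
    for epoch in range(MAX_EPOCHS):
        train_one_epoch(epoch)
        if time.monotonic() - start > MAX_SECONDS:
            break
```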