Generative Modelling of Structurally Constrained Graphs

Authors: Manuel Madeira, Clement Vignac, Dorina Thanou, Pascal Frossard

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | ConStruct demonstrates versatility across several structural and edge-deletion invariant constraints and achieves state-of-the-art performance for both synthetic benchmarks and attributed real-world datasets. For example, by incorporating planarity constraints in digital pathology graph datasets, the proposed method outperforms existing baselines, improving data validity by up to 71.1 percentage points.
Researcher Affiliation | Academia | Manuel Madeira (EPFL, Lausanne, Switzerland; manuel.madeira@epfl.ch); Clément Vignac (EPFL, Lausanne, Switzerland); Dorina Thanou (EPFL, Lausanne, Switzerland); Pascal Frossard (EPFL, Lausanne, Switzerland)
Pseudocode | Yes | Algorithm 1: Training Algorithm for Graph Discrete Diffusion Model; Algorithm 2: Projector; Algorithm 3: Sampling Algorithm for Constrained Graph Discrete Diffusion Model. (A sketch of the projector idea is given after the table.)
Open Source Code | Yes | Our code and data are available at https://github.com/manuelmlmadeira/ConStruct.
Open Datasets | Yes | We focus on three synthetic datasets with different structural properties: the planar dataset [54], composed of planar and connected graphs; the tree dataset [7], composed of connected graphs without cycles (tree graphs); and the lobster dataset [46]... We open-source both of them, representing to the best of our knowledge the first open-source digital pathology datasets specifically tailored for graph generation. (Validity checks for these structural properties are sketched after the table.)
Dataset Splits | Yes | We follow the splits originally proposed for each of the datasets: 80% of the graphs are used in the training set and the remaining 20% are allocated to the test set. We use 20% of the train set as the validation set. (A minimal split sketch is given after the table.)
Hardware Specification | Yes | All our experiments were run on a single Nvidia V100 32GB GPU.
Software Dependencies | No | The paper mentions the "AMSGrad [65] version of AdamW [49]" as the optimizer, but does not provide version numbers for other key software components such as Python, PyTorch, or specific libraries.
Experiment Setup | Yes | Regarding the optimizer, we used the AMSGrad [65] version of AdamW [49] with a learning rate of 0.0002 and weight decay of 1e-12 for all the experiments. ... λ is a hyperparameter that is tuned to balance both loss terms. (An optimizer configuration sketch is given after the table.)
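
The projector of Algorithm 2 is only named in the pseudocode row above. The snippet below is a minimal sketch of the general idea of constraint-preserving edge insertion, assuming a NetworkX graph and using planarity as the example constraint; the names constraint_holds and project_edge_insertions are illustrative and do not come from the paper's code.

    import networkx as nx

    def constraint_holds(graph: nx.Graph) -> bool:
        # Example edge-deletion invariant constraint: planarity.
        is_planar, _ = nx.check_planarity(graph)
        return is_planar

    def project_edge_insertions(graph: nx.Graph, candidate_edges) -> nx.Graph:
        # Tentatively insert each candidate edge and keep it only if the
        # constraint still holds, so every intermediate graph satisfies it.
        for u, v in candidate_edges:
            graph.add_edge(u, v)
            if not constraint_holds(graph):
                graph.remove_edge(u, v)  # reject an insertion that would violate the constraint
        return graph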
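
The structural properties of the three synthetic datasets (planar and connected, tree, lobster) can be verified with standard graph routines. A minimal sketch using NetworkX; the lobster check follows the usual definition (a tree that reduces to a path after removing leaves twice) and is illustrative rather than taken from the paper.

    import networkx as nx

    def is_valid_planar(g: nx.Graph) -> bool:
        # Planar dataset: planar and connected graphs.
        return nx.is_connected(g) and nx.check_planarity(g)[0]

    def is_valid_tree(g: nx.Graph) -> bool:
        # Tree dataset: connected graphs without cycles.
        return nx.is_tree(g)

    def is_valid_lobster(g: nx.Graph) -> bool:
        # Lobster dataset: trees that become a path after removing leaves twice.
        if not nx.is_tree(g):
            return False
        h = g.copy()
        for _ in range(2):
            leaves = [n for n in h.nodes if h.degree(n) == 1]
            h.remove_nodes_from(leaves)
        if h.number_of_nodes() <= 1:
            return True
        return nx.is_tree(h) and max(dict(h.degree()).values()) <= 2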
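
The split protocol in the dataset-splits row (80% train / 20% test, then 20% of the train set held out for validation) can be expressed as a simple deterministic shuffle. A minimal sketch; the split_dataset helper is illustrative, not the repository's code.

    import random

    def split_dataset(graphs, seed=0):
        # 80/20 train/test split, then 20% of the train portion as validation.
        graphs = list(graphs)
        random.Random(seed).shuffle(graphs)
        n_test = int(0.2 * len(graphs))
        test, train_full = graphs[:n_test], graphs[n_test:]
        n_val = int(0.2 * len(train_full))
        val, train = train_full[:n_val], train_full[n_val:]
        return train, val, test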
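
The optimizer reported in the experiment-setup row maps directly onto PyTorch's AdamW with the AMSGrad option enabled. A minimal sketch with the stated hyperparameters; the nn.Linear module is a placeholder standing in for the actual diffusion model.

    import torch
    import torch.nn as nn

    model = nn.Linear(8, 8)  # placeholder for the graph diffusion model

    # AMSGrad variant of AdamW with learning rate 0.0002 and weight decay 1e-12.
    optimizer = torch.optim.AdamW(
        model.parameters(),
        lr=2e-4,
        weight_decay=1e-12,
        amsgrad=True,
    )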