Diffusion Twigs with Loop Guidance for Conditional Graph Generation
Authors: Giangiacomo Mercatali, Yogesh Verma, Andre Freitas, Vikas Garg
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide extensive experiments to demonstrate strong performance gains of the proposed method over contemporary baselines in the context of conditional graph generation, underscoring the potential of Twigs in challenging generative tasks such as inverse molecular design and molecular optimization. Code is available at https://github.com/Aalto-QuML/Diffusion_twigs. §4 Experiments: We conduct a set of comprehensive experiments to demonstrate that Twigs improves over contemporary conditional generation methods. Benchmarks include: molecule generation conditioned over single (§4.1) and multiple (§4.2) properties on QM9, as well as molecule optimization on ZINC250K (§4.3), and network-graph generation conditioned on desired properties (§4.4). |
| Researcher Affiliation | Collaboration | Giangiacomo Mercatali HES-SO Genève University of Manchester giangiacomo.mercatali@hesge.ch Yogesh Verma Aalto University yogesh.verma@aalto.fi Andre Freitas Idiap Research Institute University of Manchester NBC, CRUK Manchester Institute andre.freitas@idiap.ch Vikas Garg YaiYai Ltd & Aalto University vgarg@csail.mit.edu |
| Pseudocode | Yes | Algorithm 1 Training Twigs ... Algorithm 2 Generating with Twigs |
| Open Source Code | Yes | Code is available at https://github.com/Aalto-QuML/Diffusion_twigs. |
| Open Datasets | Yes | sourced from the QM9 dataset [58] ... generate molecules from the ZINC250K dataset ... We follow the data processing delineated by Jo et al. [37] and provide results for the Community-small [60] and Enzymes datasets [62]. |
| Dataset Splits | No | To ensure consistency and comparability with the baselines, which include JODO [28], EDM [26], EEGSDE [3], GeoLDM [82], TEDMol [49], EquiFM [68], we adhere to the identical dataset preprocessing, training/test data partitions, and evaluation metrics outlined by Huang et al. [28]. |
| Hardware Specification | Yes | D.1 Computational resources: All experiments are performed with GPUs, NVIDIA A100 or V100. |
| Software Dependencies | No | The paper mentions 'We use Adam optimizers on all experiments.' but does not specify versions for the programming language or libraries (e.g., PyTorch, TensorFlow), making it insufficient for full software-dependency reproducibility. |
| Experiment Setup | No | For Sections 4.1 and 4.2 we follow the same hyperparameters from Huang et al. [28]. For Section 4.3 we follow the hyperparameters from Lee et al. [45]; for the MOOD baseline, we explore OOD coefficients between 0.01 and 0.09. For Section 4.4 we follow the hyperparameters from Jo et al. [37]. |
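
The Software Dependencies and Experiment Setup rows above confirm that Adam is used for all experiments and that the MOOD baseline sweeps OOD coefficients between 0.01 and 0.09, but no library or language versions are reported. The snippet below is a minimal reproduction sketch of that setup which also logs the missing environment information; the model object, learning rate, and `train_and_evaluate` callback are hypothetical placeholders, not taken from the paper or its released code.

```python
# Minimal reproduction sketch (assumptions marked): the paper confirms Adam
# and an OOD-coefficient sweep of 0.01-0.09 for the MOOD baseline; the model,
# learning rate, and train_and_evaluate callback below are hypothetical
# placeholders, since the paper does not pin any software versions.
import platform
import torch


def log_environment() -> None:
    """Record the versions missing from the paper's software-dependency report."""
    device = torch.cuda.get_device_name(0) if torch.cuda.is_available() else "cpu"
    print(f"python : {platform.python_version()}")
    print(f"torch  : {torch.__version__}")
    print(f"cuda   : {torch.version.cuda}")
    print(f"device : {device}")


def make_optimizer(model: torch.nn.Module, lr: float = 1e-4) -> torch.optim.Optimizer:
    # Adam is stated in the paper; the learning rate here is an assumption.
    return torch.optim.Adam(model.parameters(), lr=lr)


def sweep_ood_coefficients(train_and_evaluate) -> dict:
    """Grid over the OOD coefficients 0.01..0.09 explored for the MOOD baseline."""
    coefficients = [round(0.01 * k, 2) for k in range(1, 10)]
    return {coef: train_and_evaluate(ood_coef=coef) for coef in coefficients}
```

Logging the interpreter, PyTorch, and CUDA versions alongside each sweep run would close the dependency gap flagged in the Software Dependencies row.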