Improving Graph Generation by Restricting Graph Bandwidth
Authors: Nathaniel Lee Diamant, Alex M. Tseng, Kangway V. Chuang, Tommaso Biancalani, Gabriele Scalia
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experimentally validate our method on both synthetic and real graphs, comparing bandwidth-constrained architectures and non-constrained baselines. ... We extensively validate our strategy on synthetic and real datasets, including molecular graphs. |
| Researcher Affiliation | Industry | 1Department of Artificial Intelligence and Machine Learning, Research and Early Development, Genentech, USA. Correspondence to: Nathaniel Diamant <diamant.nathaniel@gene.com>, Gabriele Scalia <scalia.gabriele@gene.com>. All authors are employees of Genentech, Inc. and shareholders of Roche. |
| Pseudocode | Yes | Algorithm 1 GINEStack. Algorithm 2 Graphite decoder. Algorithm 3 Modified EDP-GNN architecture. |
| Open Source Code | Yes | The implementation is made available. ... https://github.com/Genentech/bandwidth-graph-generation |
| Open Datasets | Yes | All datasets, except Peptides-func, are available through the TUDataset collection (Morris et al., 2020). Peptides-func is available in the Long Range Graph Benchmark (Dwivedi et al., 2022). |
| Dataset Splits | Yes | All models were trained for 100 epochs of 30 training batches and nine validation batches. The batch size was fixed at 32. |
| Hardware Specification | No | The paper mentions 'GPU utilization' in the context of memory usage, but does not provide specific details on the hardware used for experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions 'PyTorch' and 'PyTorch Geometric' but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | All models were trained for 100 epochs of 30 training batches and nine validation batches. The batch size was fixed at 32. The AdamW optimizer (Loshchilov & Hutter, 2019) was used with a cosine annealed learning rate (Loshchilov & Hutter, 2017). |
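As context for the Pseudocode row above: the paper's central idea is that generated adjacency matrices are restricted to bounded graph bandwidth, i.e., under a chosen node ordering an edge (i, j) is only permitted when |i - j| <= B. The snippet below is a minimal illustrative sketch of such a band mask, not the authors' implementation; the function names and the dense-adjacency setup are placeholders for illustration only.

```python
# Illustrative sketch of a bandwidth constraint on a dense adjacency matrix.
# Not the paper's architecture; it only demonstrates the band restriction
# |i - j| <= B under a fixed node ordering.
import torch


def band_mask(num_nodes: int, bandwidth: int) -> torch.Tensor:
    """Boolean mask that is True where an edge (i, j) is permitted."""
    idx = torch.arange(num_nodes)
    return (idx[None, :] - idx[:, None]).abs() <= bandwidth


def restrict_bandwidth(adj: torch.Tensor, bandwidth: int) -> torch.Tensor:
    """Zero out adjacency entries that fall outside the allowed band."""
    mask = band_mask(adj.size(0), bandwidth).to(adj.dtype)
    return adj * mask


# Example: a random symmetric adjacency matrix for 6 nodes, restricted to bandwidth 2.
adj = torch.randint(0, 2, (6, 6))
adj = torch.triu(adj, 1)
adj = adj + adj.T  # symmetrize
print(restrict_bandwidth(adj, bandwidth=2))
```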
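Regarding the Open Datasets row: both cited collections are accessible through PyTorch Geometric, which the paper names as a dependency. The sketch below assumes a recent PyTorch Geometric release (TUDataset has long been available; LRGBDataset, which hosts Peptides-func, ships with newer versions). The TUDataset name "PROTEINS" is a placeholder, not necessarily a dataset used in the paper.

```python
# Sketch of accessing the TUDataset collection and the Long Range Graph Benchmark
# via PyTorch Geometric. Dataset names and root paths are placeholders.
from torch_geometric.datasets import TUDataset, LRGBDataset

tu = TUDataset(root="data/TUDataset", name="PROTEINS")          # placeholder name
peptides = LRGBDataset(root="data/LRGB", name="Peptides-func")  # as cited in the paper

print(len(tu), tu[0])
print(len(peptides), peptides[0])
```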
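Regarding the Experiment Setup and Dataset Splits rows: the paper reports AdamW with a cosine-annealed learning rate, 100 epochs, and a batch size of 32. The sketch below wires those reported choices together in plain PyTorch / PyTorch Geometric; the model, objective, learning-rate value, and dataset are placeholders of my own, not the paper's.

```python
# Minimal sketch of the reported optimization setup: AdamW + cosine-annealed
# learning rate, 100 epochs, batches of 32 graphs. Only the optimizer/scheduler
# choices and the epoch/batch-size figures come from the paper.
import torch
from torch_geometric.datasets import TUDataset
from torch_geometric.loader import DataLoader

dataset = TUDataset(root="data/TUDataset", name="PROTEINS")  # placeholder dataset
loader = DataLoader(dataset, batch_size=32, shuffle=True)    # batch size from the paper

model = torch.nn.Linear(dataset.num_features, 1)             # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)   # lr value is assumed
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)

for epoch in range(100):                                     # 100 epochs, as reported
    for batch in loader:
        optimizer.zero_grad()
        # Placeholder objective: mean of a linear map over node features.
        out = model(batch.x.float()).mean()
        loss = out.pow(2)
        loss.backward()
        optimizer.step()
    scheduler.step()
```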