InstructG2I: Synthesizing Images from Multimodal Attributed Graphs
Authors: Bowen Jin, Ziqi Pang, Bingjun Guo, Yu-Xiong Wang, Jiaxuan You, Jiawei Han
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on three datasets from different domains demonstrate the effectiveness and controllability of our approach. |
| Researcher Affiliation | Academia | Bowen Jin, Ziqi Pang, Bingjun Guo, Yu-Xiong Wang, Jiaxuan You, Jiawei Han; Department of Computer Science, University of Illinois at Urbana-Champaign; bowenj4@illinois.edu |
| Pseudocode | No | The paper presents equations and framework diagrams but does not include explicit pseudocode or algorithm blocks labeled as such. |
| Open Source Code | Yes | The code is available at https://github.com/PeterGriffinJin/InstructG2I. |
| Open Datasets | Yes | We conduct experiments on three MMAGs from distinct domains: ART500K [27], Amazon [16], and Goodreads [37]. |
| Dataset Splits | No | The paper mentions 'We randomly mask 1,000 nodes as testing nodes from the graph for all three datasets and serve the remaining nodes and edges as the training graph.' (Appendix A.5), specifying train and test sets, but does not explicitly state a separate validation split or how it was derived. |
| Hardware Specification | Yes | The training of all methods, including INSTRUCTG2I and baselines, on ART500K and Amazon is conducted on two A6000 GPUs, while training on Goodreads is performed on four A40 GPUs. |
| Software Dependencies | Yes | Backbone SD: Stable Diffusion 1.5 |
| Experiment Setup | Yes | The detailed hyperparameters are in Table 4 (Appendix A.5): optimizer AdamW, Adam ϵ = 1e-8, Adam (β1, β2) = (0.9, 0.999), weight decay 1e-2, batch size per GPU 16, gradient accumulation 4, epochs 10/30, resolution 256, and learning rate 1e-5. |
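
For readers reconstructing the setup, the Table 4 values can be gathered into a single configuration object. The sketch below is a hypothetical plain-Python reconstruction: the field names (e.g. `pretrained_model`, `gradient_accumulation_steps`) follow common diffusers fine-tuning conventions rather than the authors' repository, and the `runwayml/stable-diffusion-v1-5` checkpoint identifier is an assumption based on the stated SD 1.5 backbone.

```python
from dataclasses import dataclass

@dataclass
class TrainConfig:
    """Hypothetical container for the Table 4 (Appendix A.5) hyperparameters.

    Only the values come from the paper; the field names are illustrative.
    """
    pretrained_model: str = "runwayml/stable-diffusion-v1-5"  # assumed SD 1.5 checkpoint id
    optimizer: str = "adamw"              # Optimizer: AdamW
    adam_beta1: float = 0.9               # Adam (β1, β2) = (0.9, 0.999)
    adam_beta2: float = 0.999
    adam_epsilon: float = 1e-8            # Adam ϵ
    weight_decay: float = 1e-2
    train_batch_size: int = 16            # per GPU
    gradient_accumulation_steps: int = 4
    num_train_epochs: int = 10            # Table 4 lists 10/30, varying by dataset
    resolution: int = 256                 # training image resolution
    learning_rate: float = 1e-5

config = TrainConfig()
print(config)
```

With a per-GPU batch size of 16, gradient accumulation of 4, and the two-GPU setup reported for ART500K and Amazon, the effective batch size would be 16 × 4 × 2 = 128.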