Disentangled Spatiotemporal Graph Generative Models
Authors: Yuanqi Du, Xiaojie Guo, Hengning Cao, Yanfang Ye, Liang Zhao (pp. 6541-6549)
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Qualitative and quantitative experiments on both synthetic and real-world datasets demonstrate the superiority of the proposed model over the state-of-the-arts by up to 69.2% for graph generation and 41.5% for interpretability. |
| Researcher Affiliation | Collaboration | Yuanqi Du1, Xiaojie Guo2*, Hengning Cao1, Yanfang Ye3, Liang Zhao4 1George Mason University, Fairfax, US 2JD.COM Silicon Valley Research Center, Mountain View, CA, US 3University of Notre Dame, Notre Dame, US 4Emory University, Atlanta, US |
| Pseudocode | Yes | Algorithm 1: Information-iterative-thresholding algorithm |
| Open Source Code | No | The paper does not include an explicit statement about releasing the source code or provide a link to a code repository for the described methodology. |
| Open Datasets | Yes | We validate the effectiveness of our proposed models on two synthetic datasets and two real-world datasets: (1) Dynamic Waxman Random Graphs, (2) Dynamic Random Geometric Graphs, (3) Protein Folding Dataset, and (4) Traffic Dataset METR-LA (Du et al. 2021b). The first two are well-known spatial network datasets (Bradonjić, Hagberg, and Percus 2007; Waxman 1988), which randomly place nodes in a geometry, with edges connected by predefined distance measures and variation through the time dimension. The protein folding dataset consists of the folding steps of a protein with 8 amino acids (Guo et al. 2020b). The traffic dataset contains sequences of graphs recording traffic speeds measured by 207 sensors (Jagadish et al. 2014). |
| Dataset Splits | No | The paper refers to 'real training data distribution' but does not explicitly provide specific train/validation/test dataset splits (e.g., percentages or sample counts). |
| Hardware Specification | No | The paper mentions an 'NVIDIA GPU Grant' in the acknowledgements, implying the use of GPUs, but it does not explicitly state specific hardware models (e.g., GPU/CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not specify the software libraries used in the experiments or their version numbers. |
| Experiment Setup | No | The paper states that the loss weights β1, β2, β3, and β4 are all set to 1, but it does not provide further experimental setup details such as learning rates, batch sizes, number of epochs, or optimizer settings. |