Semi-Implicit Graph Variational Auto-Encoders

Authors: Arman Hasanzadeh, Ehsan Hajiramezanali, Krishna Narayanan, Nick Duffield, Mingyuan Zhou, Xiaoning Qian

NeurIPS 2019

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments with a variety of graph data show that SIG-VAE significantly outperforms state-of-the-art methods on several different graph analytic tasks.
Researcher Affiliation Academia Department of Electrical and Computer Engineering, Texas A&M University {armanihm, ehsanr, duffieldng, krn, xqian}@tamu.edu; McCombs School of Business, The University of Texas at Austin mingyuan.zhou@mccombs.utexas.edu
Pseudocode No The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code Yes The implementation of our proposed model is accessible at https://github.com/sigvae/SIGraphVAE.
Open Datasets Yes We consider three graph datasets with node attributes: Citeseer, Cora, and Pubmed [28]. We further consider five graph datasets without node attributes: USAir, NS [22], Router [29], Power [34], and Yeast [32].
Dataset Splits Yes We preprocess and split the datasets as done in Kipf and Welling [18] with validation and test sets containing 5% and 10% of network links, respectively.
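The split protocol quoted above (following Kipf and Welling, with 5% of links held out for validation and 10% for testing) can be sketched as follows; the function name and the seeding scheme are illustrative assumptions, not taken from the paper's code:

```python
import random

def split_edges(edges, val_frac=0.05, test_frac=0.10, seed=0):
    """Split a list of undirected edges into train/val/test sets,
    using the 5%/10% held-out proportions described in the paper."""
    edges = list(edges)
    random.Random(seed).shuffle(edges)  # deterministic shuffle for reproducibility
    n_val = int(len(edges) * val_frac)
    n_test = int(len(edges) * test_frac)
    val = edges[:n_val]
    test = edges[n_val:n_val + n_test]
    train = edges[n_val + n_test:]
    return train, val, test

# Example: 100 dummy edges -> 85 train, 5 validation, 10 test
edges = [(i, i + 1) for i in range(100)]
train, val, test = split_edges(edges)
print(len(train), len(val), len(test))  # 85 5 10
```

In the actual link-prediction setting the held-out edges are removed from the training adjacency matrix, and an equal number of non-edges is typically sampled as negative examples for evaluation; that step is omitted here for brevity.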
Hardware Specification No The paper acknowledges 'Texas A&M High Performance Research Computing and Texas Advanced Computing Center for providing computational resources' but does not provide specific hardware details such as GPU/CPU models or memory.
Software Dependencies No The paper mentions 'Tensorflow [1]' and 'The PyGSP package [12]' but does not provide specific version numbers for these software dependencies.
Experiment Setup Yes We learn the model parameters for 3500 epochs with the learning rate 0.0005 and the validation set used for early stopping. The latent space dimension is set to 16.
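A minimal sketch of the reported training schedule (3500 epochs, learning rate 0.0005, latent dimension 16, early stopping on the validation set); the patience value and the `train_loop` helper are assumptions for illustration only, as the paper does not specify the early-stopping criterion:

```python
# Hyperparameters reported in the paper
EPOCHS = 3500
LEARNING_RATE = 0.0005
LATENT_DIM = 16
PATIENCE = 50  # assumption; the stopping rule is not specified in the paper

def train_loop(step_fn, val_score_fn, epochs=EPOCHS, patience=PATIENCE):
    """Run up to `epochs` training steps, stopping early once the
    validation score has not improved for `patience` epochs.
    `step_fn` performs one optimizer step (at LEARNING_RATE);
    `val_score_fn` returns a validation metric, e.g. link-prediction AUC."""
    best_score, best_epoch = float("-inf"), 0
    for epoch in range(epochs):
        step_fn(epoch)
        score = val_score_fn(epoch)
        if score > best_score:
            best_score, best_epoch = score, epoch
        elif epoch - best_epoch >= patience:
            break  # validation metric has plateaued
    return best_score, best_epoch

# Usage with a toy validation curve that peaks at epoch 100 and plateaus:
best, at_epoch = train_loop(lambda e: None, lambda e: min(e, 100))
print(best, at_epoch)  # 100 100
```

The real objective would be the model's evidence lower bound, with the 16-dimensional latent space fixed by the encoder's output size; the toy curve here only exercises the stopping logic.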