Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Stochastic Blockmodels meet Graph Neural Networks
Authors: Nikhil Mehta, Lawrence Carin, Piyush Rai
ICML 2019 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on several benchmarks demonstrate encouraging results on link prediction while learning an interpretable latent structure that can be used for community discovery. |
| Researcher Affiliation | Academia | 1Department of Electrical and Computer Engineering, Duke University 2Department of Computer Science, IIT Kanpur. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access information (e.g., a URL or an explicit statement of code release) for its source code. |
| Open Datasets | Yes | We consider five real-world datasets... NIPS12: The NIPS12 coauthor network (Zhou, 2015)... Yeast: The Yeast protein interaction network (Zhou, 2015)... Cora: Cora network is a citation network... Citeseer: Citeseer is a citation network... Pubmed: A citation network... |
| Dataset Splits | Yes | For all datasets, we hold out 10% and 5% of the links as our test set and validation set, respectively, and use the validation set to fine-tune the hyperparameters. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running its experiments. |
| Software Dependencies | No | The paper mentions using a "graph convolutional network (GCN)" and "Stochastic Gradient Variational Bayes (SGVB)" which imply certain software frameworks, but it does not specify any software components with version numbers (e.g., TensorFlow 2.x, PyTorch 1.x). |
| Experiment Setup | Yes | For all datasets, we hold out 10% and 5% of the links as our test set and validation set, respectively, and use the validation set to fine-tune the hyperparameters. ... The hyperparameter settings used for all experiments are included in the Supplementary Material. |
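The split protocol quoted above (10% of links held out for test, 5% for validation) can be sketched as follows. This is a minimal illustration of one plausible way to implement such an edge hold-out; the sampling strategy, random seed, and function name are assumptions, not details from the paper.

```python
import random

def split_edges(edges, test_frac=0.10, val_frac=0.05, seed=0):
    """Hold out fractions of a graph's links as test and validation sets.

    Mirrors the quoted protocol (10% test, 5% validation); uniform
    random sampling here is an assumption, not stated in the paper.
    """
    rng = random.Random(seed)
    edges = list(edges)
    rng.shuffle(edges)
    n_test = int(len(edges) * test_frac)
    n_val = int(len(edges) * val_frac)
    test = edges[:n_test]
    val = edges[n_test:n_test + n_val]
    train = edges[n_test + n_val:]
    return train, val, test

# Toy graph with 100 edges: expect 10 test, 5 validation, 85 train.
edges = [(i, i + 1) for i in range(100)]
train, val, test = split_edges(edges)
```

The validation edges would then drive hyperparameter tuning, with final link-prediction metrics reported on the held-out test edges.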