Bayesian inference on random simple graphs with power law degree distributions
Authors: Juho Lee, Creighton Heaukulani, Zoubin Ghahramani, Lancelot F. James, Seungjin Choi
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We apply this inference procedure to several network datasets that are commonly observed to possess power law structure. Our experiments show that accurately capturing this power law structure improves performance on tasks predicting missing edges in the networks. |
| Researcher Affiliation | Collaboration | (1) Pohang University of Science and Technology, Pohang, South Korea; (2) University of Cambridge, Cambridge, UK; (3) Uber AI Labs, San Francisco, CA, USA; (4) Hong Kong University of Science and Technology, Hong Kong. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement about, or link to, open-source code for the described methodology. |
| Open Datasets | Yes | The polblogs dataset contains the links between political blogs (judged by hyperlinks between the front webpages of the blogs) in the period leading up to the 2004 US presidential election, which Adamic & Glance (2005) observed to exhibit a power law degree distribution. The Facebook107 dataset contains friendships between users of a Facebook app, collected by Leskovec & McAuley (2012). |
| Dataset Splits | Yes | We selected the value of β from among the grid {0.6, 0.9, 1.0, 1.2, 1.4} with 5-fold cross validation on the training set. |
| Hardware Specification | No | The paper does not specify the hardware used to run its experiments. |
| Software Dependencies | No | The paper mentions using 'Adam (Kingma & Ba, 2015)' as an optimizer but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | We used a mini-batch size of 5,000 edges (note that the training dataset corresponds to almost 10 million observed edges). We ran each inference procedure for 20,000 steps of stochastic gradient ascent updates, using Adam (Kingma & Ba, 2015) to adjust the learning rates at each step. We selected the value of β from among the grid {0.6, 0.9, 1.0, 1.2, 1.4} with 5-fold cross validation on the training set. |
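The experiment setup row above pins down a concrete training regime: mini-batches of 5,000 edges, 20,000 steps of stochastic gradient ascent with Adam, and β selected from the grid {0.6, 0.9, 1.0, 1.2, 1.4} by 5-fold cross validation on the training set. The following is a minimal sketch of that regime, not the authors' code: the model (a single per-node parameter vector), its objective, and the `score_fn` used to rank β values are hypothetical placeholders standing in for the paper's actual model and inference procedure.

```python
# A minimal sketch (not the authors' implementation) of the reported training
# regime: 20,000 Adam steps on mini-batches of 5,000 edges, with beta chosen
# from the reported grid via 5-fold cross validation on the training edges.
# The "model" (a single parameter vector) and its objective are hypothetical
# placeholders standing in for the paper's variational objective.
import numpy as np
import torch
from sklearn.model_selection import KFold

BETA_GRID = [0.6, 0.9, 1.0, 1.2, 1.4]  # grid reported in the paper
BATCH_SIZE = 5_000                      # mini-batch of 5,000 edges
NUM_STEPS = 20_000                      # stochastic gradient ascent steps


def train(edges: np.ndarray, beta: float, num_steps: int = NUM_STEPS) -> torch.Tensor:
    """Run Adam on a placeholder per-node objective; edges is an (E, 2) int array."""
    params = torch.zeros(int(edges.max()) + 1, requires_grad=True)  # hypothetical model
    opt = torch.optim.Adam([params])
    for _ in range(num_steps):
        idx = np.random.choice(len(edges), size=BATCH_SIZE)
        batch = torch.as_tensor(edges[idx])
        # Hypothetical stand-in for the model's log-likelihood on a mini-batch;
        # gradient *ascent* is performed by minimizing the negative objective.
        logits = params[batch[:, 0]] + params[batch[:, 1]] - beta
        loss = -torch.nn.functional.logsigmoid(logits).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return params.detach()


def select_beta(train_edges: np.ndarray, score_fn) -> float:
    """Pick beta from the grid by 5-fold cross validation on the training set."""
    best_beta, best_score = BETA_GRID[0], -np.inf
    for beta in BETA_GRID:
        fold_scores = []
        for fit_idx, val_idx in KFold(n_splits=5, shuffle=True).split(train_edges):
            params = train(train_edges[fit_idx], beta)
            fold_scores.append(score_fn(params, train_edges[val_idx]))  # hypothetical scorer
        if np.mean(fold_scores) > best_score:
            best_beta, best_score = beta, float(np.mean(fold_scores))
    return best_beta
```

Note that in the paper β is a parameter of the random graph model itself; in this sketch it only enters the placeholder objective so the grid search has something to vary, which is why both the objective and `score_fn` are flagged as assumptions rather than the authors' method.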