Recurrent Dirichlet Belief Networks for Interpretable Dynamic Relational Data Modelling
Authors: Yaqiong Li, Xuhui Fan, Ling Chen, Bin Li, Zheng Yu, Scott A. Sisson
IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The extensive experimental results on real-world data validate the advantages of the Recurrent-DBN over state-of-the-art models in interpretable latent structure discovery and improved link prediction performance. |
| Researcher Affiliation | Academia | ¹Centre for Artificial Intelligence, University of Technology Sydney; ²School of Mathematics & Statistics, University of New South Wales, Sydney; ³School of Computer Science, Fudan University; ⁴Department of Electrical and Computer Engineering, University of Alberta |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found. The paper describes the generative process and inference steps textually and mathematically but not in a pseudocode format. |
| Open Source Code | No | No concrete access to source code (specific repository link, explicit code release statement, or code in supplementary materials) was found for the methodology described in this paper. |
| Open Datasets | Yes | The real-world relational data sets used in this paper are: Coleman [Coleman, 1964], Mining Reality [Eagle and Pentland, 2006], Hypertext [Isella et al., 2011], Infectious [Isella et al., 2011] and Student Net [Fan et al., 2014]. |
| Dataset Splits | No | The paper specifies a 90% training and 10% test split. It does not mention a separate validation split or give explicit details of how the held-out data were partitioned. (A minimal sketch of such a split appears after the table.) |
| Hardware Specification | No | No specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running the experiments were provided. |
| Software Dependencies | No | No specific ancillary software details, such as library or solver names with version numbers, were provided. |
| Experiment Setup | Yes | For the hyperparameters, we specify $M \sim \mathrm{Gamma}(N, 1)$ for all data sets; $\{c_c^{(l)}, c_u^{(l)}\}_l$, $d$, $d_c$ and $\Lambda_{k_1,k_2}$ are all given $\mathrm{Gamma}(1, 1)$ priors; and $L = 3$. For MMSB, we set the membership distribution according to $\mathrm{Dirichlet}(1 \cdot \mathbf{1}_K)$. Each run uses 3000 MCMC iterations with the first 1500 discarded as burn-in. (A hedged code sketch of these settings appears after the table.) |
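For concreteness, here is a minimal sketch of the 90%/10% train/test protocol from the Dataset Splits row, assuming a binary adjacency matrix and NumPy. The function name `split_links`, the dyad-level holdout, and the fixed seed are illustrative assumptions; the paper releases no code and does not state how test entries were sampled.

```python
import numpy as np

def split_links(adj, test_frac=0.1, seed=0):
    """Hold out test_frac of the off-diagonal dyads of a binary adjacency matrix.

    Hypothetical helper: the paper only states a 90%/10% train/test split.
    """
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    # All ordered dyads, excluding self-links.
    dyads = np.array([(i, j) for i in range(n) for j in range(n) if i != j])
    rng.shuffle(dyads)                                  # shuffle dyads in place
    n_test = int(round(test_frac * len(dyads)))
    test_idx, train_idx = dyads[:n_test], dyads[n_test:]
    # Mask held-out entries in the training copy; keep their labels for evaluation.
    train = adj.copy().astype(float)
    train[test_idx[:, 0], test_idx[:, 1]] = np.nan      # hidden during inference
    test_labels = adj[test_idx[:, 0], test_idx[:, 1]]
    return train, test_idx, test_labels
```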
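Similarly, a hedged sketch of the Experiment Setup row. Only the prior shapes ($M \sim \mathrm{Gamma}(N, 1)$, Gamma(1, 1) elsewhere), $L = 3$, the symmetric Dirichlet for MMSB memberships, and the 3000/1500 MCMC schedule come from the paper; the values of `N` and `K` and the placeholder Gibbs sweep are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100   # number of nodes (dataset dependent; illustrative value)
L = 3     # number of layers, as stated in the paper
K = 10    # number of latent communities (illustrative)

# M ~ Gamma(N, 1); all other hyperparameters get Gamma(1, 1) priors.
M = rng.gamma(shape=N, scale=1.0)
c_c = rng.gamma(1.0, 1.0, size=L)        # {c_c^(l)}_l
c_u = rng.gamma(1.0, 1.0, size=L)        # {c_u^(l)}_l
d, d_c = rng.gamma(1.0, 1.0, size=2)
Lam = rng.gamma(1.0, 1.0, size=(K, K))   # Lambda_{k1, k2}

# MMSB baseline: per-node memberships ~ Dirichlet(1 * 1_K).
pi = rng.dirichlet(np.ones(K), size=N)

n_iters, burn_in = 3000, 1500
kept = []
for t in range(n_iters):
    # A full implementation would resample all latent variables here
    # (Gibbs sweeps over the recurrent DBN layers); omitted as a placeholder.
    if t >= burn_in:                     # discard the first 1500 draws as burn-in
        kept.append({"M": M, "Lam": Lam.copy()})
```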