Poisson-Randomised DirBN: Large Mutation is Needed in Dirichlet Belief Networks
Authors: Xuhui Fan, Bin Li, Yaqiong Li, Scott A. Sisson
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We apply Pois-DirBN to relational modelling and validate its effectiveness through improved link prediction performance and more interpretable latent distribution visualisations. The performance of Pois-DirBN is evaluated in the relational modelling setting. |
| Researcher Affiliation | Academia | (1) UNSW Data Science Hub, and School of Mathematics and Statistics, University of New South Wales; (2) Shanghai Key Laboratory of IIP, School of Computer Science, Fudan University; (3) Australian Artificial Intelligence Institute, University of Technology Sydney. |
| Pseudocode | No | The paper describes the inference algorithm in text but does not provide structured pseudocode or an algorithm block. |
| Open Source Code | Yes | The code can be downloaded at https://github.com/xuhuifan/Pois_DirBN. |
| Open Datasets | Yes | We examine four real-world datasets: three standard citation networks (Citeseer, Cora, and Pubmed (Sen et al., 2008)) and one protein-to-protein interaction network (PPI; Zitnik & Leskovec, 2017). |
| Dataset Splits | No | Unless otherwise specified, 90% (per row) of the relational data is used as training data and the remaining 10% as test data. No explicit validation split is described. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., CPU/GPU models, memory, or cloud instance types) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers for its implementation or any third-party libraries used. |
| Experiment Setup | Yes | For hyper-parameters, we set r0, c0, ξ, η ∼ Gam(1, 1) and M ∼ Gam(100, 1) for all datasets. Each run uses 2 000 MCMC iterations, with the first 1 000 discarded as burn-in; the mean performance score over the remaining 1 000 posterior samples is reported. |
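The experiment-setup row above can be sketched as follows. This is a minimal illustrative reconstruction of the reported protocol, not the authors' code: the Gamma hyper-prior draws match the stated priors, but `score_per_iteration` is a hypothetical stand-in for whatever per-sample performance metric (e.g. held-out link prediction score) the sampler produces at each MCMC iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hyper-priors as reported: r0, c0, xi, eta ~ Gam(1, 1) and M ~ Gam(100, 1).
# NumPy's gamma sampler is parameterised by (shape, scale).
r0, c0, xi, eta = rng.gamma(shape=1.0, scale=1.0, size=4)
M = rng.gamma(shape=100.0, scale=1.0)

def posterior_mean_score(score_per_iteration, n_iters=2000, burn_in=1000):
    """Run n_iters MCMC iterations, discard the burn-in samples,
    and report the mean performance score over the remainder."""
    scores = [score_per_iteration(t) for t in range(n_iters)]
    return float(np.mean(scores[burn_in:]))

# Dummy per-iteration score standing in for a real evaluation metric.
reported = posterior_mean_score(lambda t: 0.8)
```

Averaging only the post-burn-in samples is the standard way to approximate a posterior expectation once the chain is assumed to have mixed; the 2 000 / 1 000 split here follows the numbers stated in the table.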