Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Deep Relational Topic Modeling via Graph Poisson Gamma Belief Network
Authors: Chaojie Wang, Hao Zhang, Bo Chen, Dongsheng Wang, Zhengjue Wang, Mingyuan Zhou
NeurIPS 2020 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that our models extract high-quality hierarchical latent document representations, leading to improved performance over baselines on various graph analytic tasks. |
| Researcher Affiliation | Academia | Chaojie Wang, Hao Zhang, Bo Chen, Dongsheng Wang, Zhengjue Wang, National Laboratory of Radar Signal Processing, Xidian University, Xi'an, Shaanxi 710071, China (EMAIL, EMAIL, EMAIL, EMAIL, EMAIL); Mingyuan Zhou, McCombs School of Business, The University of Texas at Austin, Austin, TX 78712, USA (EMAIL) |
| Pseudocode | Yes | The detailed training algorithm is provided in Appendix C, and the released code is implemented with TensorFlow [47], combined with pyCUDA [48] for parallel Gibbs sampling. |
| Open Source Code | Yes | The detailed training algorithm is provided in Appendix C, and the released code is implemented with TensorFlow [47], combined with pyCUDA [48] for parallel Gibbs sampling. |
| Open Datasets | Yes | We consider six widely used benchmarks, including Coil [5], TREC [43], and R8 [49] for node clustering, and Cora, Citeseer and Pubmed [50] for link prediction and node classification. |
| Dataset Splits | Yes | Following VGAE [23], we train the model on an incomplete version of the network data, with 5% and 10% of the citation links used for validation and test, respectively. |
| Hardware Specification | No | The paper mentions acceleration with 'GPU' but does not provide specific hardware details such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The released code is implemented with TensorFlow [47], combined with pyCUDA [48] for parallel Gibbs sampling. |
| Experiment Setup | Yes | We perform three WGAEs/WGCAEs with different stochastic layers, i.e., T ∈ {1, 2, 3}, and set the network structure as K1 = K2 = K3 = C, where C is set as the total number of classes for node clustering/classification, and 16 for link prediction following VGAE [23] to make a fair comparison. In Fig. 4, we can see that the best performance of node clustering and link prediction is achieved around β = 0.1 and β = 100, respectively. |
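The quoted setup fixes every stochastic layer to the same width C: the number of classes for node clustering/classification, and 16 for link prediction (following VGAE). A minimal sketch of that configuration rule, assuming hypothetical function and parameter names not taken from the released code:

```python
def layer_widths(task: str, num_classes: int, depth: int) -> list:
    """Return the layer sizes K_1..K_T for a model of the given depth.

    Per the quoted setup: K1 = K2 = K3 = C, where C is the class count
    for node clustering/classification and 16 for link prediction.
    """
    width = 16 if task == "link_prediction" else num_classes
    return [width] * depth

# Example: a 3-layer model for 7-class node classification
print(layer_widths("node_classification", num_classes=7, depth=3))  # [7, 7, 7]
print(layer_widths("link_prediction", num_classes=7, depth=2))      # [16, 16]
```

This mirrors only the width-selection rule described in the table; the actual model construction and Gibbs-sampling training are in the paper's Appendix C and released code.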