Deep Amortized Relational Model with Group-Wise Hierarchical Generative Process

Authors: Huafeng Liu, Tong Zhou, Jiaqi Wang

AAAI 2022, pp. 7550-7557 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | A series of experiments has been conducted on both synthetic and real-world datasets. The experimental results demonstrate that DaRM can obtain high performance on both community detection and link prediction tasks.
Researcher Affiliation | Academia | (1) Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University, Beijing, China; (2) Department of Mathematics, The University of Hong Kong, Hong Kong SAR, China
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the described methodology.
Open Datasets | Yes | Several widely known citation datasets are used, namely NIPS12 [Globerson et al. 2007], Cora, CiteSeer, and Pubmed [Rossi and Ahmed 2015].
Dataset Splits | Yes | For the link prediction task, we hold out 10% and 5% of the links as our test set and validation set, respectively, and use the validation set to fine-tune the hyperparameters. (A minimal split sketch is given after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | Yes | The hyper-parameter τ scales the similarity from [-1, 1] to [-1/τ, 1/τ], and is set to τ = 0.1 to obtain a more skewed distribution. Note that σ0 should be set to a small value, e.g., around 0.1, since the learned representations are well normalized. We take the average of AUC scores by running the model on 10 random splits of the dataset. (A sketch of the temperature scaling follows the split sketch below.)
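
The quoted split protocol is simple enough to sketch. The snippet below is a minimal, assumed implementation of holding out 10% of the links for testing and 5% for validation; the function name `split_edges`, the seed, and the toy edge list are our own illustrative choices, not the authors' released code.

```python
# Minimal sketch (not the authors' code) of the 10% / 5% link split described
# above: hold out 10% of the observed edges for testing and 5% for validation,
# and keep the remainder for training.
import numpy as np

def split_edges(edges, test_frac=0.10, val_frac=0.05, seed=0):
    """edges: (E, 2) array-like of node-index pairs; returns (train, val, test)."""
    edges = np.asarray(edges)
    perm = np.random.default_rng(seed).permutation(len(edges))
    n_test = int(round(test_frac * len(edges)))
    n_val = int(round(val_frac * len(edges)))
    test = edges[perm[:n_test]]
    val = edges[perm[n_test:n_test + n_val]]
    train = edges[perm[n_test + n_val:]]
    return train, val, test

# Toy usage with a small hypothetical edge list.
toy_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0),
             (0, 2), (1, 3), (2, 4), (3, 0), (4, 1)]
train_e, val_e, test_e = split_edges(toy_edges)
print(len(train_e), len(val_e), len(test_e))
```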
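
Likewise, the temperature scaling quoted in the Experiment Setup row can be illustrated with a short sketch. It assumes a cosine similarity on normalized embeddings; `cosine_similarity`, `tau`, and `sigma0` are illustrative names and the wiring is ours, not the paper's implementation.

```python
# Hedged sketch of the quoted temperature scaling: a cosine similarity in
# [-1, 1] divided by tau = 0.1 lands in [-10, 10], giving a more skewed
# distribution after a softmax/sigmoid. sigma0 is the small prior std the
# excerpt recommends for well-normalized representations.
import numpy as np

def cosine_similarity(u, v):
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

tau = 0.1      # temperature: maps similarity from [-1, 1] to [-1/tau, 1/tau]
sigma0 = 0.1   # small prior standard deviation, per the excerpt

u, v = [0.6, 0.8], [1.0, 0.0]
s = cosine_similarity(u, v)
print(s, s / tau)  # ~0.6 is scaled to ~6.0 by the temperature

# Evaluation protocol from the excerpt (assumed wiring): compute an AUC per
# split, e.g. with sklearn.metrics.roc_auc_score, over 10 random splits and
# report the mean.
```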