Discriminative Nonparametric Latent Feature Relational Models with Data Augmentation
Authors: Bei Chen, Ning Chen, Jun Zhu, Jiaming Song, Bo Zhang
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive studies on various real networks show promising performance. We present experimental results to demonstrate the effectiveness of DLFRM on five real datasets, as summarized in Table 1. |
| Researcher Affiliation | Academia | Dept. of Comp. Sci. & Tech., State Key Lab of Intell. Tech. & Sys., Center for Bio-Inspired Computing Research, MOE Key lab of Bioinformatics, Bioinformatics Division and Center for Synthetic & Systems Biology, TNList, Tsinghua University, Beijing, 100084, China {chenbei12@mails., ningchen@, dcszj@, sjm12@mails., dcszb@}tsinghua.edu.cn |
| Pseudocode | Yes | Algorithm 1 Gibbs sampler for DLFRM |
| Open Source Code | No | The paper mentions 'Supplemental Material: http://bigml.cs.tsinghua.edu.cn/~beichen/pub/DLFRM2.pdf' in a footnote, but this links to a PDF document, not a source code repository, nor does the text explicitly state that the source code for the methodology is available. |
| Open Datasets | Yes | We present experimental results to demonstrate the effectiveness of DLFRM on five real datasets as summarized in Table 1, where NIPS contains 234 authors... Kinship includes 26 relationships... WebKB contains 877 webpages... AstroPh contains collaborations between 17,903 authors... Gowalla contains 196,591 people and their friendships... (Leskovec, Kleinberg, and Faloutsos 2007), (Cho, Myers, and Leskovec 2011) |
| Dataset Splits | No | We randomly select a development set from the training set with almost the same number of links as the testing set and choose the proper hyper-parameters, which are insensitive in a wide range. While train/test splits are quantified (e.g., 80/20 or 90/10), the 'development set' (validation) lacks a specific percentage or count, making its exact split irreproducible. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as CPU or GPU models, or memory specifications. |
| Software Dependencies | No | The paper mentions 'We use SVM-Light (Joachims 1998) to train these classifiers;' but does not provide specific version numbers for this or any other software dependency. |
| Experiment Setup | Yes | We randomly select a development set from training set... and choose the proper hyper-parameters... the stepsizes are set by ϵ_t = a(b + t)^{-γ} for log-loss and AdaGrad (Duchi, Hazan, and Singer 2011) for hinge loss; We set c₊ = 10c₋ = c as in (Zhu 2012), where c₊ is the regularization parameter for positive links and c₋ for negative links. |
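The setup row names two stepsize rules: a polynomial decay schedule ϵ_t = a(b + t)^{-γ} for the log-loss and AdaGrad for the hinge loss. A minimal sketch of both, assuming placeholder values for a, b, γ, and the AdaGrad learning rate (the excerpt does not report the paper's actual hyper-parameter values):

```python
def decayed_stepsize(t, a=0.1, b=10.0, gamma=0.55):
    """Polynomial decay schedule eps_t = a * (b + t)^(-gamma).

    a, b, gamma are illustrative placeholders, not the paper's values.
    """
    return a * (b + t) ** (-gamma)


class AdaGrad:
    """Minimal per-coordinate AdaGrad update (Duchi, Hazan, and Singer 2011).

    eta (base learning rate) and eps (numerical floor) are illustrative.
    """

    def __init__(self, dim, eta=0.1, eps=1e-8):
        self.eta = eta
        self.eps = eps
        self.g2 = [0.0] * dim  # accumulated squared gradients per coordinate

    def step(self, params, grads):
        # Scale each coordinate's update by the inverse root of its
        # accumulated squared gradient, so frequently-updated coordinates
        # get smaller steps.
        for i, g in enumerate(grads):
            self.g2[i] += g * g
            params[i] -= self.eta * g / (self.g2[i] ** 0.5 + self.eps)
        return params
```

The decay schedule shrinks the stepsize monotonically with the iteration count, while AdaGrad adapts per coordinate; which rule applies depends on the loss being optimized, as stated in the setup row.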