Multi-Modal Bayesian Embeddings for Learning Social Knowledge Graphs
Authors: Zhilin Yang, Jie Tang, William Cohen
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on three datasets show that the proposed method clearly outperforms state-of-the-art methods. We then deploy the method on AMiner, an online academic search system, to connect a network of 38,049,189 researchers with a knowledge base of 35,415,011 concepts. Our method significantly decreases the error rate of learning social knowledge graphs in an online A/B test with live users. |
| Researcher Affiliation | Academia | Zhilin Yang (Carnegie Mellon University), Jie Tang (Tsinghua University), William Cohen (Carnegie Mellon University); jietang@tsinghua.edu.cn, {zhiliny,wcohen}@cs.cmu.edu |
| Pseudocode | Yes | Algorithm 1: Model Inference |
| Open Source Code | No | The paper does not provide a specific link or explicit statement indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We deploy our algorithm and run the experiments on AMiner (https://aminer.org/), an online academic search system [Tang et al., 2008]. ... We use the publicly available English Wikipedia as the knowledge base Gk. ... We use the full-text Wikipedia corpus (https://dumps.wikimedia.org/enwiki/latest/) as the text information C to learn the knowledge concept embeddings. |
| Dataset Splits | Yes | Instead, we consider two strategies: offline evaluation on three data mining tasks and an online A/B test with live users. ... For each researcher, we first compute the top 10 research interests provided by the two algorithms. Then we randomly select 3 research interests from each algorithm, and merge the selected research interests in a random order. When a user visits the profile page of a researcher, a questionnaire is displayed on top of the profile. ... We collect 110 questionnaires in total, and use them as ground truth to evaluate the algorithms. |
| Hardware Specification | Yes | The experiments were run on Intel(R) Xeon(R) CPU E5-4650 0 @ 2.70GHz with 64 threads. |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., Python version, specific library versions like TensorFlow, PyTorch, or scikit-learn versions). |
| Experiment Setup | Yes | We empirically set µ0 = 0, κ0 = 1E-5, β0 = 1, ν0 = 1E3, T = 200, α = 0.25. |
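The experiment-setup row reports concrete hyperparameter values (notably T = 200 latent topics and a 0.25 concentration value), and the pseudocode row names "Algorithm 1: Model Inference". As a rough illustration only, the sketch below shows the kind of topic-assignment step such a Bayesian embedding model requires. It is not the paper's Algorithm 1: the isotropic-Gaussian likelihood, the embedding dimension `DIM`, and the interpretation of 0.25 as a Dirichlet concentration are all assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 200          # number of latent topics (value reported in the paper)
ALPHA = 0.25     # Dirichlet concentration (assumed role of the 0.25 value)
DIM = 8          # embedding dimension (illustrative, not from the paper)

def sample_topic(embedding, topic_means, topic_counts, rng):
    """Sample a topic index for one embedding vector.

    Combines a Dirichlet-smoothed topic prior with an isotropic-Gaussian
    likelihood around each topic mean -- a deliberate simplification of
    the paper's multi-modal Bayesian model.
    """
    prior = topic_counts + ALPHA                       # shape (T,)
    sq_dist = ((topic_means - embedding) ** 2).sum(axis=1)
    log_post = np.log(prior) - 0.5 * sq_dist           # unnormalized log posterior
    log_post -= log_post.max()                         # subtract max for stability
    p = np.exp(log_post)
    p /= p.sum()
    return rng.choice(T, p=p)

# Toy usage: assign one random embedding to a topic.
topic_means = rng.normal(size=(T, DIM))
topic_counts = np.zeros(T)
z = sample_topic(rng.normal(size=DIM), topic_means, topic_counts, rng)
```

In a full Gibbs sweep, `topic_counts` and `topic_means` would be updated after each assignment; here they are held fixed to keep the step self-contained.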