Improving Textual Network Learning with Variational Homophilic Embeddings
Authors: Wenlin Wang, Chenyang Tao, Zhe Gan, Guoyin Wang, Liqun Chen, Xinyuan Zhang, Ruiyi Zhang, Qian Yang, Ricardo Henao, Lawrence Carin
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on real-world networks, for multiple tasks, demonstrate that the proposed method consistently achieves superior performance relative to competing state-of-the-art approaches. |
| Researcher Affiliation | Collaboration | Duke University; Microsoft Dynamics 365 AI Research |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available from https://github.com/Wenlin-Wang/VHE19. |
| Open Datasets | Yes | Following [40], we consider three widely studied real-world network datasets: CORA [28], HEPTH [25], and ZHIHU. |
| Dataset Splits | No | The paper states that "various ratios of observed edges are used for training and the rest are used for testing" for link prediction, and reports results by "% of Labeled Data" for vertex classification, but it does not explicitly mention a distinct validation set or how one would be split (see the illustrative sketch after the table). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models or cloud instance specifications) used for running experiments. |
| Software Dependencies | No | The paper mentions "a linear SVM [14]" but does not provide specific version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | No | The paper states "Details of the experimental setup are found in the SM" (supplementary material) but does not provide specific hyperparameter values or system-level training settings in the main text. |
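For context on the "Dataset Splits" row, the paper's protocol varies the fraction of observed edges used for training, with the remainder held out for testing and no separate validation set. Below is a minimal sketch of such a two-way hold-out, assuming a simple uniform random split; the function name `split_edges`, the toy graph, and the 0.55 ratio are illustrative assumptions, not taken from the paper.

```python
import random

def split_edges(edges, train_ratio, seed=0):
    """Randomly split edges into an observed (training) set and a held-out (test) set.

    Mirrors the paper's description of "various ratios of observed edges";
    note there is no validation split, matching the omission flagged above.
    """
    rng = random.Random(seed)
    shuffled = list(edges)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# Example: observe 55% of edges for training, hold out the rest for testing.
toy_edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
train_edges, test_edges = split_edges(toy_edges, train_ratio=0.55)
print(f"train: {train_edges}")
print(f"test:  {test_edges}")
```

Fixing the random seed, as in this sketch, is one way a reproduction could at least make its own splits deterministic, since the paper does not specify how its splits were drawn.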