Effective Representing of Information Network by Variational Autoencoder
Authors: Hang Li, Haozheng Wang, Zhenglu Yang, Haochen Liu
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct comprehensive empirical experiments on benchmark datasets and find our model performs better than state-of-the-art techniques by a large margin. |
| Researcher Affiliation | Academia | Hang Li and Haozheng Wang College of Computer and Control Engineering, Nankai University, Tianjin, China {hangl,hzwang}@mail.nankai.edu.cn Zhenglu Yang College of Computer and Control Engineering, Nankai University, Tianjin, China yangzl@nankai.edu.cn Haochen Liu Institute of Statistics, Nankai University, Tianjin, China lhaochen@mail.nankai.edu.cn |
| Pseudocode | Yes | Algorithm 1 Training Algorithm for Our Model |
| Open Source Code | Yes | 5.2 Performance on Node Classification In terms of node classification task, we compare our approach 3 with the following methods: ... 3https://github.com/Algorithm216/RIN/ |
| Open Datasets | Yes | To evaluate the quality of the proposed model, we conduct three important tasks on two benchmark citation network datasets: (1) Citeseer M10 (http://citeseerx.ist.psu.edu/). It contains 10 distinct categories with 10,310 papers and 77,218 citations. Titles are treated as the text information because no more text information is available. (2) DBLP dataset (http://arnetminer.org/citation, V4 version is used). We treat abstracts as text information and choose 4 research areas with the same setting as that of [Pan et al., 2016]... |
| Dataset Splits | No | The paper mentions 'The proportion of training data with labels is range from 10% to 70%', but does not explicitly state a separate validation split or how it was used. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers used for its implementation or experiments. |
| Experiment Setup | Yes | The reported parameters for our model are set as follows: dimension d=100 on Citeseer M10 and d=300 on DBLP. The dimension for the other algorithms is the same as ours, and their remaining parameters are set as their papers report, i.e., window size b=10 in DeepWalk and node2vec, in-out parameter q=2 in node2vec, text weight = 0.8 in TADW and TriDNR. |
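To make the baseline hyperparameters above concrete, here is a minimal sketch of the random-walk corpus generation step shared by DeepWalk-style baselines, using the paper's window size b=10. The toy 4-node graph, the walk length of 8, and the helper names (`random_walk`, `context_pairs`) are illustrative assumptions, not details from the paper; the actual datasets (Citeseer M10, DBLP) are far larger.

```python
import random

# Hypothetical toy citation graph as an adjacency list (assumption for
# illustration only; not one of the paper's datasets).
graph = {
    0: [1, 2],
    1: [0, 3],
    2: [0, 3],
    3: [1, 2],
}

def random_walk(graph, start, length, rng):
    """Truncated random walk, as used to build a DeepWalk-style corpus."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(graph[walk[-1]]))
    return walk

def context_pairs(walk, window):
    """Skip-gram (center, context) pairs within the given window size b."""
    pairs = []
    for i, center in enumerate(walk):
        for j in range(max(0, i - window), min(len(walk), i + window + 1)):
            if j != i:
                pairs.append((center, walk[j]))
    return pairs

rng = random.Random(42)  # fixed seed for reproducibility
walks = [random_walk(graph, node, length=8, rng=rng) for node in graph]
# Window size b=10 matches the setting reported for DeepWalk and node2vec.
corpus = [pair for walk in walks for pair in context_pairs(walk, window=10)]
```

These (center, context) pairs would then be fed to a skip-gram model to learn d-dimensional embeddings (d=100 or d=300 in the paper's setting); node2vec differs only in biasing the walk transitions via its return/in-out parameters (q=2 here).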