Diffusion Maps for Textual Network Embedding
Authors: Xinyuan Zhang, Yitong Li, Dinghan Shen, Lawrence Carin
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that the proposed approach outperforms state-of-the-art methods on the vertex-classification and link-prediction tasks. |
| Researcher Affiliation | Academia | Department of Electrical and Computer Engineering, Duke University, Durham, NC 27707 |
| Pseudocode | No | No explicit pseudocode block or section labeled 'Algorithm' was found in the paper. |
| Open Source Code | No | The paper does not provide an explicit statement about open-source code availability nor does it include a link to a code repository. |
| Open Datasets | Yes | We conduct experiments on three real-world datasets: DBLP [28], Cora [15], and Zhihu [26]. |
| Dataset Splits | No | The paper describes training and testing splits for link prediction and multi-label classification but does not explicitly mention a separate 'validation' set or its specific use for hyperparameter tuning. |
| Hardware Specification | Yes | All models are implemented in TensorFlow using an NVIDIA Titan X GPU with 12 GB memory. |
| Software Dependencies | No | The paper mentions 'TensorFlow' but does not specify its version number or any other software dependencies with version details. |
| Experiment Setup | Yes | We set the embedding dimension d to 200, with ds and dt both equal to 100. The number of hops H is set to 4, and the importance coefficients λh are tuned per dataset and task, with λ0 > λ1 > ⋯ > λH. αtt, αss, αts, and αst are set to 1, 1, 0.3, and 0.3, respectively. The number of negative samples K is set to 1 to speed up training. |
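The reported setup can be collected into a single configuration for a reproduction attempt. The sketch below is a minimal, hypothetical config dict; the key names are illustrative (not from the paper), and the per-hop λh values are omitted because the paper states they were tuned per dataset and task without reporting the values.

```python
# Hypothetical hyperparameter configuration mirroring the paper's
# reported experiment setup; key names are illustrative, not the authors'.
config = {
    "embedding_dim": 200,        # d
    "structure_dim": 100,        # ds
    "text_dim": 100,             # dt
    "num_hops": 4,               # H
    # Per-hop importance coefficients lambda_0 > lambda_1 > ... > lambda_H
    # were tuned per dataset/task; exact values are not reported.
    "alpha_tt": 1.0,
    "alpha_ss": 1.0,
    "alpha_ts": 0.3,
    "alpha_st": 0.3,
    "num_negative_samples": 1,   # K (small to speed up training)
}

# Sanity check: structure and text embeddings concatenate to dimension d.
assert config["structure_dim"] + config["text_dim"] == config["embedding_dim"]
```

A reproduction would still need to search over the λh coefficients per dataset, since only their ordering constraint is given.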