Temporal Heterogeneous Information Network Embedding
Authors: Hong Huang, Ruize Shi, Wei Zhou, Xiao Wang, Hai Jin, Xiaoming Fu
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive evaluations with various real-world temporal HINs demonstrate that THINE achieves the SOTA performance in both static and dynamic tasks, including node classification, link prediction, and temporal link recommendation. |
| Researcher Affiliation | Academia | Hong Huang (1,2,3), Ruize Shi (1,2,3), Wei Zhou (3), Xiao Wang (4), Hai Jin (1,2,3), Xiaoming Fu (5). 1: National Engineering Research Center for Big Data Technology and System; 2: Service Computing Technology and Systems Laboratory; 3: Huazhong University of Science and Technology, China; 4: Beijing University of Posts and Telecommunications, China; 5: University of Goettingen, Germany |
| Pseudocode | No | The paper describes the model in detail with mathematical formulas and conceptual diagrams (Figure 1), but it does not include a block labeled 'Pseudocode' or 'Algorithm'. |
| Open Source Code | No | The paper does not include a statement about releasing open-source code for its methodology or a link to a code repository. |
| Open Datasets | Yes | Datasets. In order to demonstrate the effectiveness of THINE, we evaluate it on three real-world datasets. They are Aminer [Tang et al., 2008], DBLP (https://dblp.org), and Yelp (https://www.yelp.com/dataset), respectively. The statistics of these datasets are shown in Table 1. |
| Dataset Splits | No | Especially, the size of the training set is set as 60%, 80% and the remaining nodes as the test set. For each dataset, the first 80% of the period is used for training while the remainder is used as the test set. (The paper only specifies train/test splits, not a separate validation split.) |
| Hardware Specification | Yes | We evaluate THINE and other baselines on a server with Intel Xeon CPU E5-2680, Tesla V100 GPUs, and 250GB Memory. |
| Software Dependencies | Yes | The experimental environment of software is Ubuntu 18.04 with CUDA 10.2. |
| Experiment Setup | Yes | To be fair, the embedding dimension d is set as 100 for all methods. For THINE, the learning rate of Adam is set as 0.003 while the batch size is 500. Moreover, we set the number of walks per node as 10, the walk length is 30, the number n of candidate meta-path instances is 20, the number z of candidate edges is 4, and the negative samples is set to be 5. |
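The hyperparameters quoted in the Experiment Setup row can be collected into a minimal, illustrative configuration sketch. All key and function names here are assumptions for readability; the paper releases no code, so this is only a checklist of the reported values, not the authors' implementation:

```python
# Hypothetical configuration mirroring the hyperparameters reported for THINE.
# Key names are assumptions; the paper does not release reference code.
THINE_CONFIG = {
    "embedding_dim": 100,                # d, shared by all compared methods
    "optimizer": "Adam",
    "learning_rate": 0.003,
    "batch_size": 500,
    "walks_per_node": 10,
    "walk_length": 30,
    "candidate_metapath_instances": 20,  # n, candidate meta-path instances
    "candidate_edges": 4,                # z, candidate edges
    "negative_samples": 5,
}

def summarize(config: dict) -> str:
    """Render the configuration as a single reproducibility-checklist line."""
    return ", ".join(f"{k}={v}" for k, v in config.items())
```

A reproduction attempt could start from this dictionary and confirm each value against the paper's Section on experimental settings before training.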