Motif-Preserving Temporal Network Embedding
Authors: Hong Huang, Zixuan Fang, Xiao Wang, Youshan Miao, Hai Jin
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on various real-world temporal networks demonstrate that, compared with several state-of-the-art methods, our model achieves the best performance in both static and dynamic tasks, including node classification, link prediction, and link recommendation. |
| Researcher Affiliation | Collaboration | Hong Huang 1,2,3, Zixuan Fang 1,2,3, Xiao Wang 4, Youshan Miao 5, and Hai Jin 1,2,3. 1 National Engineering Research Center for Big Data Technology and System; 2 Service Computing Technology and Systems Laboratory; 3 School of Computer Science and Technology, Huazhong University of Science and Technology, China; 4 Beijing University of Posts and Telecommunications, China; 5 Microsoft Research Asia |
| Pseudocode | No | The paper describes the model mathematically and textually but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about the release of its source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | Datasets. We test MTNE on five different real-world datasets, namely School [Fournet and Barrat, 2014], DBLP [Ley, 2009], Digg [Hogg and Lerman, 2012], Mobile [Huang et al., 2018], and Weibo [Zhang et al., 2013]. Their statistics are listed in Table 1. |
| Dataset Splits | No | For the temporal recommendation task, the paper mentions using 'data from the first 80% of the period as a training set and the rest as a testing set.' However, it does not explicitly mention a separate validation set or a three-way split (train/validation/test) for hyperparameter tuning or early stopping, nor general splits for other tasks such as node classification or link prediction. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU/CPU models or memory specifications. |
| Software Dependencies | No | The paper mentions optimization techniques and classifiers (e.g., SGD, Logistic Regression) but does not specify any software packages or libraries with version numbers required for replication. |
| Experiment Setup | Yes | Parameter settings. For all methods, the embedding dimension d is set to 64. For our proposed MTNE, the batch size, the learning rate of the SGD, the number of candidate temporal triads, and the number of negative samples are set to 1000, 0.003, 5, and 5, respectively, while for the other baselines we use the default parameter settings. |
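The reported setup can be summarized in code. The sketch below is a minimal, hypothetical reconstruction, assuming the hyperparameters quoted above and a chronological 80/20 split over the observation period; the names (`MTNE_CONFIG`, `temporal_split`) are illustrative and do not come from a released MTNE codebase.

```python
# Hypothetical reconstruction of the reported MTNE experiment settings.
# All constant values are taken from the paper's "Parameter settings"
# paragraph; identifiers are assumptions for illustration only.
MTNE_CONFIG = {
    "embedding_dim": 64,         # d, shared by all compared methods
    "batch_size": 1000,
    "learning_rate": 0.003,      # SGD step size
    "num_candidate_triads": 5,   # candidate temporal triads per sample
    "num_negative_samples": 5,
}


def temporal_split(events, train_fraction=0.8):
    """Split time-stamped edges chronologically, as described for the
    temporal recommendation task: events in the first `train_fraction`
    of the observation period form the training set, the rest the test
    set. Each event is a tuple whose last element is its timestamp."""
    timestamps = [e[-1] for e in events]
    t_min, t_max = min(timestamps), max(timestamps)
    t_cut = t_min + train_fraction * (t_max - t_min)
    train = [e for e in events if e[-1] <= t_cut]
    test = [e for e in events if e[-1] > t_cut]
    return train, test


# Toy usage: three interactions with timestamps 1, 2, 3.
train, test = temporal_split([("a", "c", 1), ("b", "c", 2), ("a", "b", 3)])
```

Note that splitting by the time period (as the paper states) differs from splitting by event count: if interactions are unevenly distributed over time, the training set may hold more or fewer than 80% of the events.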