Dynamic Group Link Prediction in Continuous-Time Interaction Network
Authors: Shijie Luo, He Li, Jianbin Huang
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on various datasets with and without unseen nodes show that CTGLP outperforms the state-of-the-art methods by 13.4% and 13.2% on average. |
| Researcher Affiliation | Academia | Shijie Luo, He Li and Jianbin Huang, Xidian University, sjluo@stu.xidian.edu.cn, {heli, jbhuang}@xidian.edu.cn |
| Pseudocode | Yes | Algorithm 1 Training of CTGLP |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository. |
| Open Datasets | Yes | MovieLens-100K (ML100K for short) [Harper and Konstan, 2015] and MovieLens-25M (ML25M for short) [Harper and Konstan, 2015] contain rating data from users on movies. CiaoDVD [Guo et al., 2014] consists of DVD rating data. |
| Dataset Splits | Yes | For each dataset, we split it into 8:1:1 for training, validation and testing. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU/CPU models or memory. |
| Software Dependencies | Yes | We implement our CTGLP with PyTorch 1.6.0 and adopt the SGD as the optimizer. |
| Experiment Setup | Yes | The dimension D of initial embeddings, the dimension d of hidden states and the dimension s of group vectors are all tested in {16, 32, 64, 128, 256, 512}. The batch size and learning rate are searched in {32, 64, 128, 256} and {0.005, 0.01, 0.05, 0.1} respectively. Two convolutional layers are employed in CTGNN, and the neighbor sampling sizes are empirically set to 25 and 10 respectively. |
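The dataset-splits row above states only the 8:1:1 ratio. A minimal sketch of one plausible way to realize such a split is shown below; ordering interactions chronologically before splitting is an assumption made for illustration, not a detail reported in the table.

```python
# Hypothetical sketch of an 8:1:1 train/validation/test split.
# The paper states only the ratio; sorting interactions by timestamp
# before splitting is an assumption made here for illustration.

def split_interactions(interactions, ratios=(0.8, 0.1, 0.1)):
    """Split (user, item, timestamp) interactions into train/val/test by ratio."""
    interactions = sorted(interactions, key=lambda x: x[2])  # assumed chronological order
    n = len(interactions)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    train = interactions[:n_train]
    val = interactions[n_train:n_train + n_val]
    test = interactions[n_train + n_val:]
    return train, val, test
```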
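The software-dependencies and experiment-setup rows report PyTorch 1.6.0 with SGD and a grid search over embedding dimensions, batch sizes, and learning rates. The sketch below only illustrates that search under the stated ranges; the dummy model is a placeholder for CTGLP, whose code is not released, and the training loop is elided.

```python
# Hypothetical sketch of the reported hyperparameter search.
# Only the search ranges and the SGD optimizer come from the paper;
# nn.Linear is a stand-in for the unreleased CTGLP model.
import itertools
import torch
import torch.nn as nn

dims = [16, 32, 64, 128, 256, 512]          # tested for D, d and s
batch_sizes = [32, 64, 128, 256]
learning_rates = [0.005, 0.01, 0.05, 0.1]

for dim, bs, lr in itertools.product(dims, batch_sizes, learning_rates):
    model = nn.Linear(dim, dim)             # placeholder for the CTGLP model
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)  # SGD, as stated
    # ... train with batch size `bs`, validate, and keep the best configuration ...
```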