Predicting Temporal Sets with Simplified Fully Connected Networks

Authors: Le Yu, Zihang Liu, Tongyu Zhu, Leilei Sun, Bowen Du, Weifeng Lv

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on four benchmarks show the superiority of our approach over the state-of-the-art under both transductive and inductive settings. We also theoretically and empirically demonstrate that our model has lower space and time complexity than baselines. Codes and datasets are available at https://github.com/yule-BUAA/SFCNTSP.
Researcher Affiliation | Academia | State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China. {yule,lzhmark,zhutongyu,leileisun,dubowen,lwf}@buaa.edu.cn
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | Codes and datasets are available at https://github.com/yule-BUAA/SFCNTSP.
Open Datasets | Yes | Descriptions of Benchmarks: Following Yu et al. (2022), we use four benchmarks in the experiments: JingDong, DC, TaoBao, and TMS. JingDong (https://jdata.jd.com/html/detail.html?id=8) records the actions of users about purchasing, browsing, following, commenting, and adding products to shopping carts. Dunnhumby-Carbo (DC) (https://www.dunnhumby.com/careers/engineering/sourcefiles) includes the transactions of households at a retailer over two years. TaoBao (https://tianchi.aliyun.com/dataset/dataDetail?dataId=649) contains the online user behaviors about purchasing, clicking, marking products as favorites, and adding products to shopping carts. Tags-Math-Sx (TMS) (https://math.stackexchange.com) contains the history of users' questions on Mathematics Stack Exchange; we use the preprocessed version from Yu et al. (2022) in the experiments.
Dataset Splits | Yes | For the transductive setting, we follow Yu et al. (2022) to use the last set, the second last set, and the remaining sets of each user for testing, validation, and training. For the inductive setting, we follow Yu et al. (2020) to randomly split each dataset across users with the ratio of 70%, 10%, and 20% for training, validation, and testing. (A code sketch of both splits follows the table.)
Hardware Specification | Yes | We conduct the experiments on an Ubuntu machine equipped with one Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz with 16 CPU cores. The GPU device is NVIDIA GeForce RTX 3090 with 24 GB memory.
Software Dependencies | No | Our model is implemented by PyTorch (Paszke et al. 2019). While PyTorch is mentioned, no version number is given for PyTorch or any other software dependency.
Experiment Setup | Yes | We set the learning rate and batch size to 0.001 and 64 on all the datasets. We search the dropout rate and the number of embedding channels c in [0.0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3] and [32, 64, 128], respectively. For hyperparameters α and β, we set α to 1.0 to represent the residual connection and search β in [0.0, 0.01, 0.1, 0.3, 0.5]. The configurations of our model under the transductive setting are shown in Table 3. (The resulting search grid is sketched after the table.)
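
The two evaluation protocols quoted in the Dataset Splits row translate directly into code. Below is a minimal sketch of both splits, assuming each user's history is already an ordered list of element sets; the function names and the fixed random seed are illustrative assumptions, not taken from the authors' released code.

```python
import random

def transductive_split(user_sets):
    """Leave-last-out split per user (following Yu et al. 2022):
    last set -> test, second-last set -> validation, rest -> train."""
    train, val, test = {}, {}, {}
    for user, sets in user_sets.items():
        if len(sets) < 3:  # need at least one set in every partition
            continue
        train[user] = sets[:-2]
        val[user] = [sets[-2]]
        test[user] = [sets[-1]]
    return train, val, test

def inductive_split(user_sets, seed=0):
    """Random 70%/10%/20% split across users (following Yu et al. 2020),
    so validation/test users are unseen during training."""
    users = sorted(user_sets)
    random.Random(seed).shuffle(users)
    n_train, n_val = int(0.7 * len(users)), int(0.1 * len(users))
    train = {u: user_sets[u] for u in users[:n_train]}
    val = {u: user_sets[u] for u in users[n_train:n_train + n_val]}
    test = {u: user_sets[u] for u in users[n_train + n_val:]}
    return train, val, test
```

Note the difference in what is held out: the transductive split withholds each user's most recent sets, while the inductive split withholds entire users.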
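The search space stated in the Experiment Setup row amounts to a small grid of 7 × 3 × 5 = 105 candidate configurations per dataset, with the learning rate, batch size, and α held fixed. A minimal sketch of enumerating that grid follows; the dictionary keys are chosen for illustration, not taken from the released code.

```python
from itertools import product

# Fixed settings quoted from the paper.
base = {"learning_rate": 0.001, "batch_size": 64, "alpha": 1.0}

# Searched hyperparameters, values quoted from the paper.
dropout_rates = [0.0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3]
embedding_channels = [32, 64, 128]  # number of embedding channels c
betas = [0.0, 0.01, 0.1, 0.3, 0.5]

# Build every combination; each entry would be trained once and
# compared on the validation split.
grid = [dict(base, dropout=d, channels=c, beta=b)
        for d, c, b in product(dropout_rates, embedding_channels, betas)]
print(len(grid))  # 105
```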