Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Geometric Hawkes Processes with Graph Convolutional Recurrent Neural Networks
Authors: Jin Shang, Mingxuan Sun | Pages 4878-4885
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiment results on real-world data show that our framework outperforms recent state-of-art methods. |
| Researcher Affiliation | Academia | Jin Shang, Mingxuan Sun Division of Computer Science and Engineering Louisiana State University EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: Algorithm for Learning single-graph GHP |
| Open Source Code | No | No explicit statement or link providing concrete access to the source code for the methodology described in this paper. |
| Open Datasets | Yes | We evaluate our model on three real world datasets which contain temporal interactions between a set of users and a set of items. Specifically, the IPTV dataset (Xu, Farajtabar, and Zha 2016)... The Yelp1 dataset is available from Yelp dataset challenge... The Reddit2 dataset contains the time of posting discussions between random selected 1000 users and 1403 threads in January 2014. 1https://www.yelp.com/dataset/challenge 2https://dynamics.cs.washington.edu/data.html |
| Dataset Splits | No | In the experiments, we use the events before time T·p as the training data, and the rest of them as testing data, where T is the length of the total time, and p = 0.76 is the proportion where we split the data. No explicit mention of a separate validation split was found. |
| Hardware Specification | No | No specific hardware details (GPU/CPU models, memory, or detailed computer specifications) used for running experiments were mentioned in the paper. |
| Software Dependencies | No | No specific software dependencies, libraries, or solvers with version numbers (e.g., Python 3.8, PyTorch 1.9) used in the experiments were provided. |
| Experiment Setup | Yes | In the experiments, we use the events before time T·p as the training data, and the rest of them as testing data, where T is the length of the total time, and p = 0.76 is the proportion where we split the data. The results show that k = 10 is the best for IPTV dataset. In our experiment, we found the structure of two GCN layers plus one LSTM layer works best. |
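The chronological split described above (train on events before time T·p, test on the rest, with p = 0.76) can be sketched as follows. This is a minimal illustration, not the authors' code; the function name `temporal_split` and the treatment of T as the maximum observed timestamp are assumptions for this sketch.

```python
# Sketch of the paper's chronological train/test split: events before
# time T*p form the training set, the remainder the test set.
# T is assumed here to be the maximum observed timestamp; p = 0.76
# is the proportion reported in the paper.
def temporal_split(event_times, p=0.76):
    """Split event timestamps at time T*p (T = total time span)."""
    T = max(event_times)
    cutoff = T * p
    train = [t for t in event_times if t < cutoff]
    test = [t for t in event_times if t >= cutoff]
    return train, test

events = [0.5, 1.2, 3.0, 4.8, 7.1, 9.6, 10.0]
train, test = temporal_split(events)
# With T = 10.0 and p = 0.76, the cutoff is 7.6, so the last two
# events fall into the test set.
```

Note that this is a split by time, not by event count, so the fraction of events in the training set generally differs from p.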