A Variational Point Process Model for Social Event Sequences
Authors: Zhen Pan, Zhenya Huang, Defu Lian, Enhong Chen
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on real-world datasets prove effectiveness of our proposed model. |
| Researcher Affiliation | Academia | 1Anhui Province Key Laboratory of Big Data Analysis and Application, School of Computer Science and Technology, University of Science and Technology of China {pzhen, huangzhy}@mail.ustc.edu.cn, {liandefu, cheneh}@ustc.edu.cn |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | Retweets Dataset (Zhao et al. 2015) and Meme Track Dataset (Leskovec, Backstrom, and Kleinberg 2009). |
| Dataset Splits | No | The paper specifies train/test splits: 'We randomly sampled disjoint train and test sets with 20,000 and 2,000 sequences respectively' for the Retweets dataset (and similarly for Meme Track), and 'We split both datasets into training and test sets containing 70% and 30% of samples respectively.' However, it does not mention a separate validation set. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper states, 'The models are implemented with TensorFlow (Abadi et al. 2016)'. While TensorFlow is mentioned, a specific version number is not provided, nor are other software dependencies with versions. |
| Experiment Setup | Yes | Numbers of hidden nodes of LSTMs for Retweets and Meme Track datasets are 256 and 64, respectively. Networks are 2-layer MLPs, with ReLU activation after the first layer. Dimension of the latent code is 256. Event decoder is a 3-layer MLP... The models are implemented with TensorFlow (Abadi et al. 2016) and are trained using the Adam (Kingma and Ba 2015) optimizer for 1,000 epochs with batch size 32 and learning rate 0.001. |
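The Dataset Splits row describes a random, disjoint train/test partition (70%/30%). A minimal sketch of such a split, assuming a generic list of sequences and an arbitrary seed (neither is from the paper):

```python
import random

def random_split(sequences, train_frac=0.7, seed=42):
    """Randomly partition a list into disjoint train and test sets."""
    rng = random.Random(seed)
    idx = list(range(len(sequences)))
    rng.shuffle(idx)
    cut = int(len(sequences) * train_frac)
    train = [sequences[i] for i in idx[:cut]]
    test = [sequences[i] for i in idx[cut:]]
    return train, test

# 1,000 dummy sequence IDs split 70/30 into 700 train and 300 test
train, test = random_split(list(range(1000)))
```

Shuffling indices rather than the data itself keeps the original list intact and guarantees the two sets are disjoint and cover every sequence exactly once.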
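The Experiment Setup row describes 2-layer MLPs with a ReLU after the first layer only and a 256-dimensional latent code. A minimal NumPy sketch of that architecture (weight initialization, input dimension, and variable names are illustrative assumptions, not from the paper):

```python
import numpy as np

def two_layer_mlp(x, w1, b1, w2, b2):
    """2-layer MLP: ReLU after the first layer, linear second layer."""
    h = np.maximum(0.0, x @ w1 + b1)  # first layer + ReLU
    return h @ w2 + b2                # second layer, no activation

rng = np.random.default_rng(0)
d_in, d_hidden, d_latent = 256, 256, 256  # latent code dimension 256, per the setup
w1 = 0.01 * rng.standard_normal((d_in, d_hidden))
b1 = np.zeros(d_hidden)
w2 = 0.01 * rng.standard_normal((d_hidden, d_latent))
b2 = np.zeros(d_latent)

batch = rng.standard_normal((32, d_in))  # batch size 32, per the setup
out = two_layer_mlp(batch, w1, b1, w2, b2)
```

In practice these parameters would be trained with Adam at learning rate 0.001, as the setup states; the sketch only shows the forward pass of the described network shape.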