Learning Parametric Models for Social Infectivity in Multi-Dimensional Hawkes Processes
Authors: Liangda Li, Hongyuan Zha
AAAI 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experiments: We conducted experiments on both synthetic and real-world data sets, and compared the performance of our model with alternatives to demonstrate the effectiveness of our model." |
| Researcher Affiliation | Academia | Liangda Li¹ and Hongyuan Zha²·¹; ¹College of Computing, Georgia Institute of Technology, Atlanta, GA, USA; ²Software Engineering Institute, East China Normal University, Shanghai, China |
| Pseudocode | No | The paper describes the optimization steps using mathematical formulas and text, but no structured pseudocode or algorithm blocks are provided. |
| Open Source Code | No | The paper does not provide any explicit statements about open-sourcing the code for the methodology described, nor does it include a link to a code repository. |
| Open Datasets | No | The paper mentions using 'Retweets' and 'Meme Tracker' datasets, but it does not provide concrete access information (specific links, DOIs, or formal dataset citations with authors and year) that would allow others to obtain these datasets for reproduction. |
| Dataset Splits | No | The paper mentions 'training data' and 'predictive likelihood on events falling in the final 10% of the total time of each event cascade,' indicating a test split. However, it does not provide explicit train/validation/test splits with percentages or sample counts, nor does it mention a separate validation set. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware used for running the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions the use of 'alternating direction method of multipliers (ADMM)' and 'Majorize-Minimization (MM)' techniques but does not specify any software names with version numbers or library dependencies used for implementation. |
| Experiment Setup | No | The paper mentions 'λ is the regularization parameter that trades off the sparsity of the coefficients and the data likelihood' but does not provide concrete values for hyperparameters like λ, learning rates, batch sizes, or other specific training configurations for their model. |
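To make concrete what the table's last rows refer to, the sketch below illustrates a multi-dimensional Hawkes process intensity with an exponential kernel and the ℓ1-penalized negative log-likelihood in which the regularization parameter λ (here `lam`) trades off sparsity of the infectivity coefficients against data likelihood. This is a minimal, hypothetical illustration; the kernel choice, parameter values, and function names are assumptions, not the authors' implementation, and the paper's actual ADMM/MM optimization is not reproduced here.

```python
import math

def intensity(d, t, events, mu, alpha, beta):
    """lambda_d(t) = mu[d] + sum over earlier events (t_i, d_i) of
    alpha[d][d_i] * exp(-beta * (t - t_i)). Exponential kernel is an
    illustrative assumption, not necessarily the paper's choice."""
    lam = mu[d]
    for t_i, d_i in events:
        if t_i < t:
            lam += alpha[d][d_i] * math.exp(-beta * (t - t_i))
    return lam

def penalized_nll(events, T, mu, alpha, beta, lam):
    """Negative log-likelihood over [0, T] plus an l1 penalty on the
    infectivity matrix alpha, weighted by lam (the paper's lambda)."""
    ll = 0.0
    for t_i, d_i in events:
        ll += math.log(intensity(d_i, t_i, events, mu, alpha, beta))
    # Compensator term: integral of lambda_d(s) over [0, T] for each
    # dimension d, in closed form for the exponential kernel.
    for d in range(len(mu)):
        comp = mu[d] * T
        for t_i, d_i in events:
            comp += alpha[d][d_i] / beta * (1.0 - math.exp(-beta * (T - t_i)))
        ll -= comp
    l1 = sum(abs(a) for row in alpha for a in row)
    return -ll + lam * l1

# Toy 2-dimensional cascade: (event time, dimension) pairs.
events = [(0.5, 0), (1.2, 1), (2.0, 0)]
mu = [0.2, 0.3]                        # baseline rates
alpha = [[0.4, 0.1], [0.2, 0.3]]       # infectivity coefficients
print(penalized_nll(events, T=3.0, mu=mu, alpha=alpha, beta=1.0, lam=0.1))
```

Increasing `lam` raises the penalized objective for any fixed nonzero `alpha`, which is the sparsity-versus-likelihood trade-off the paper leaves unquantified.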