Isotonic Hawkes Processes

Authors: Yichen Wang, Bo Xie, Nan Du, Le Song

ICML 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate the performance of Isotonic-Hawkes on both synthetic and real-world datasets with respect to the following tasks: Convergence: investigate how well Isotonic-Hawkes can learn the true parameters as the number of training samples increases. Fitting capability: study how well Isotonic-Hawkes can explain real-world data by comparing it with the classic Hawkes process. Time-sensitive recommendation: demonstrate that Isotonic-Hawkes can improve the predictive performance in item recommendation and time prediction. Diffusion network modeling: evaluate how well Isotonic-Hawkes can model the information diffusion from cascades of temporal events." Covered in Sections 6.1 (Experiments on Synthetic Data), 6.2 (Experiments on Time-sensitive Recommendation), and 6.3 (Experiments on Modeling Diffusion Networks). A simulation sketch for the synthetic-data task appears below the table.
Researcher Affiliation | Academia | Yichen Wang, Bo Xie, Nan Du ({YICHEN.WANG, BO.XIE, DUNAN}@GATECH.EDU) and Le Song (LSONG@CC.GATECH.EDU), College of Computing, Georgia Institute of Technology, 266 Ferst Drive, Atlanta, GA 30332 USA.
Pseudocode | Yes | Algorithm 1 (COMPUTE-COEFFICIENT), Algorithm 2 (LEARN-ISOTONIC-FUNC), and Algorithm 3 (ISOTONIC-HAWKES). An isotonic-regression sketch illustrating the core primitive appears below the table.
Open Source Code | No | The paper does not contain any explicit statement about releasing the source code for the methodology described, nor does it provide a link to a code repository.
Open Datasets | Yes | "last.fm consists of the music listening histories of around 1,000 users over 3,000 different albums. We use the events of the first three months for training and those of the next month for testing." The dataset is available at http://www.dtic.upf.edu/~ocelma/MusicRecommendationDataset/lastfm-1K.html.
Dataset Splits | No | The paper specifies training and testing periods for its datasets (e.g., "We use the events of the first three months for training and those of the next month for testing" for last.fm, and similarly for tmall.com), but it does not mention a separate validation set or explain how a validation set was used for hyperparameter tuning or model selection. A sketch of such a temporal split appears below the table.
Hardware Specification | No | The paper does not specify any hardware details (e.g., CPU, GPU, memory, or cloud instance types) used for running the experiments.
Software Dependencies | No | The paper does not provide the specific software dependencies or library versions used for implementation or experimentation (e.g., the programming language or numerical libraries, with version numbers).
Experiment Setup | No | The paper states that "The latent rank of the low-rank Isotonic-Hawkes process and the tensor method are tuned to give the best performance," but it provides no specific hyperparameter values (e.g., learning rate, batch size, number of epochs, or optimizer settings) or other system-level training configuration. A tuning sketch appears below the table.
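
For the synthetic-data convergence task quoted in the Research Type row, event sequences must be drawn from a ground-truth process. The paper's model applies a nonlinear monotone link to a weighted Hawkes intensity; the sketch below covers only the classic linear special case via Ogata's thinning algorithm, with placeholder parameter values, and is not the paper's simulator.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Draw events on [0, T] from a univariate Hawkes process with
    intensity lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta*(t - t_i)),
    using Ogata's thinning algorithm. All parameters are placeholders."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while True:
        # Between events the exponential kernel only decays, so the
        # intensity at the current time upper-bounds it until the next event.
        lam_bar = mu + alpha * sum(np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            return np.array(events)
        lam_t = mu + alpha * sum(np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() < lam_t / lam_bar:  # accept with prob lambda(t)/lam_bar
            events.append(t)

events = simulate_hawkes(mu=0.2, alpha=0.5, beta=1.0, T=100.0)
print(len(events), "events simulated")
```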
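
Algorithm 2 (LEARN-ISOTONIC-FUNC) fits the monotone link function; the paper's exact update is not reproduced here. The standard primitive behind any such step is isotonic regression via the pool-adjacent-violators algorithm (PAVA), sketched below as a generic illustration rather than the paper's routine.

```python
import numpy as np

def pava(y, w=None):
    """Weighted least-squares projection of the sequence y onto the set
    of nondecreasing sequences (pool adjacent violators)."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    blocks = []  # each block: [mean, total_weight, length]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Merge backwards while the nondecreasing constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            blocks.append([(w1 * m1 + w2 * m2) / (w1 + w2), w1 + w2, n1 + n2])
    # Expand the block means back to one fitted value per input point.
    return np.concatenate([np.full(n, m) for m, _, n in blocks])

print(pava([1.0, 3.0, 2.0, 4.0, 3.5]))  # -> [1.   2.5  2.5  3.75 3.75]
```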
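
The last.fm protocol quoted above is a purely temporal split with no validation window. Below is a minimal sketch of such a split, assuming a pandas event log with a `timestamp` column; the column name is hypothetical, while the three-month/one-month windows mirror the quoted protocol.

```python
import pandas as pd

def temporal_split(events: pd.DataFrame, train_months: int = 3, test_months: int = 1):
    """Split an event log by calendar time: the first `train_months`
    months form the training set and the following `test_months` months
    form the test set. No validation window is carved out, matching the
    paper's description."""
    start = events["timestamp"].min()
    train_end = start + pd.DateOffset(months=train_months)
    test_end = train_end + pd.DateOffset(months=test_months)
    train = events[events["timestamp"] < train_end]
    test = events[(events["timestamp"] >= train_end) & (events["timestamp"] < test_end)]
    return train, test

# Tiny synthetic usage example (dates are placeholders)
log = pd.DataFrame({"timestamp": pd.date_range("2008-01-01", periods=200, freq="12h")})
train, test = temporal_split(log)
print(len(train), len(test))
```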
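
The Experiment Setup row notes that the latent rank was "tuned to give the best performance" without reporting the candidate values. The skeleton below sketches what such grid-search tuning could look like; the rank grid and the `fit` and `score` callables it expects are hypothetical stand-ins, not the paper's procedure.

```python
def tune_rank(fit, score, train, test, ranks=(2, 4, 8, 16, 32)):
    """Fit one model per candidate latent rank and keep the rank whose
    model scores best on held-out data. `fit(train, rank)` must return
    a model and `score(model, test)` a higher-is-better metric; both
    are caller-supplied placeholders here."""
    results = {r: score(fit(train, r), test) for r in ranks}
    best = max(results, key=results.get)
    return best, results
```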