Self-Modulating Nonparametric Event-Tensor Factorization

Authors: Zheng Wang, Xinqi Chu, Shandian Zhe

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | For evaluation, we examined our method on three real-world datasets. Our model nearly always achieves better predictive performance than the existing methods using Poisson processes, time factors, and Hawkes processes to incorporate temporal information.
Researcher Affiliation | Collaboration | School of Computing, University of Utah; Xjera Labs, Pte. Ltd.
Pseudocode | No | While there is an 'Algorithm' section, it describes the algorithm verbally and mathematically rather than providing a structured pseudocode block.
Open Source Code | No | The paper does not provide any statement or link indicating the availability of its source code.
Open Datasets | Yes | Taobao (https://tianchi.aliyun.com/dataset/dataDetail?dataId=53), UFO (https://www.kaggle.com/NUFORC/ufo-sightings/data), Crash (https://www.kaggle.com/usdot/nhtsa-traffic-fatalities)
Dataset Splits | No | For training, we used the first 40K, 40K, and 20K events from Taobao, UFO, and Crash, respectively. The remaining 29.8K, 30.4K, and 12K events were used for testing. The paper specifies training and test sets but does not mention a validation split (a minimal split sketch follows the table).
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments (e.g., GPU/CPU models, memory).
Software Dependencies | No | We implemented our approach and HP-Local with PyTorch and used Adam (Kingma and Ba, 2014) for stochastic optimization. While software names are mentioned, no version numbers are provided.
Experiment Setup | Yes | For both methods, the mini-batch size was set to 100. The learning rate was chosen from {5×10^-4, 10^-3, 3×10^-3, 5×10^-3, 10^-2}, and we ran each method for 400 epochs (a training-setup sketch follows the table).
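
As a concrete reading of the "Dataset Splits" row, here is a minimal sketch of the chronological train/test split the paper describes. The loader, variable names, and event representation are assumptions for illustration; the paper releases no code.

```python
# Train/test splits reported in the paper: the first N events of each
# (chronologically ordered) dataset are used for training, the rest for test.
# Loading and event representation are hypothetical placeholders.

TRAIN_SIZES = {
    "Taobao": 40_000,  # remaining ~29.8K events form the test set
    "UFO":    40_000,  # remaining ~30.4K events form the test set
    "Crash":  20_000,  # remaining ~12K events form the test set
}

def split_events(events, n_train):
    """Positional split of an ordered event list; no validation set is carved out."""
    return events[:n_train], events[n_train:]

# Usage (assuming load_events returns a chronologically ordered list):
# train, test = split_events(load_events("Taobao"), TRAIN_SIZES["Taobao"])
```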
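
Similarly, the "Software Dependencies" and "Experiment Setup" rows pin down PyTorch, Adam, a mini-batch size of 100, a learning-rate grid, and 400 epochs. The sketch below wires those stated hyperparameters into a generic PyTorch loop; the model, data loader, and loss function are placeholders, not the authors' implementation.

```python
import torch

# Hyperparameters quoted in the paper; everything else here is a placeholder.
BATCH_SIZE = 100
LEARNING_RATES = [5e-4, 1e-3, 3e-3, 5e-3, 1e-2]  # grid the learning rate is chosen from
EPOCHS = 400

def train(model, train_loader, loss_fn, lr):
    """One training run at a fixed learning rate (hypothetical loop)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(EPOCHS):
        for batch in train_loader:  # mini-batches of size BATCH_SIZE
            optimizer.zero_grad()
            loss = loss_fn(model, batch)
            loss.backward()
            optimizer.step()
    return model
```

In this reading, the grid over LEARNING_RATES would be searched by running train once per candidate rate and keeping the best-performing model.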