Thinning for Accelerating the Learning of Point Processes
Authors: Tianbo Li, Yiping Ke
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on synthetic and real-world datasets validate the effectiveness of thinning in the tasks of parameter and gradient estimation, as well as stochastic optimization. |
| Researcher Affiliation | Academia | Tianbo Li, Yiping Ke; School of Computer Science and Engineering, Nanyang Technological University, Singapore; tianbo001@e.ntu.edu.sg, ypke@ntu.edu.sg |
| Pseudocode | Yes | Algorithm 1: TSGD: Thinning Stochastic Gradient Descent |
| Open Source Code | No | The paper does not provide concrete access to source code (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described in this paper. |
| Open Datasets | Yes | IPTV dataset [24]: The dataset consists of IPTV viewing events... NYC Taxi dataset: The data is from The New York City Taxi and Limousine Commission... Weeplace dataset [23]: This dataset contains the check-in histories of users at different locations. |
| Dataset Splits | No | The paper mentions training and test datasets, but does not explicitly provide details on a validation set or cross-validation strategy. |
| Hardware Specification | Yes | All the experiments were conducted on a server with Intel Xeon CPU E5-2680 (2.80GHz) and 250GB RAM. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment. |
| Experiment Setup | Yes | We ran each method on each dataset for 10 times. For each dataset, we perform LSE with different histories: full data and p-thinned data with p = 0.2 and p = 0.5. |
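The experiment-setup row refers to fitting on "p-thinned data with p = 0.2 and p = 0.5". Independent p-thinning of a point-process history keeps each event with probability p. The sketch below is not the authors' code; `p_thin` is a hypothetical helper illustrating the operation on a list of event timestamps:

```python
import random


def p_thin(events, p, seed=None):
    """Independently retain each event with probability p (p-thinning).

    `events` is a sequence of timestamps; the retained subsequence is a
    realization of the thinned point process, and order is preserved.
    """
    rng = random.Random(seed)
    return [t for t in events if rng.random() < p]


# Example: thin a toy event sequence with p = 0.5, one of the two
# thinning levels used in the paper's LSE experiments.
events = [0.3, 1.1, 2.4, 3.0, 4.7, 5.2, 6.9, 8.1]
thinned = p_thin(events, p=0.5, seed=0)
print(thinned)  # a random, order-preserving subsequence of `events`
```

Because thinning shrinks the history that the intensity (and hence the gradient) must be evaluated on, learning on thinned data is cheaper per iteration, which is the speed-up the paper's TSGD algorithm exploits.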