Is Score Matching Suitable for Estimating Point Processes?
Authors: Haoqun Cao, Zizhuo Meng, Tianjun Ke, Feng Zhou
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we validate our proposed (A)WSM on parametric and deep point process models. For parametric models, we focus on verifying whether (A)WSM can accurately recover the ground-truth parameters. For deep point process models, we confirm that our new training method is also applicable to deep neural network models. |
| Researcher Affiliation | Academia | Haoqun Cao1, Zizhuo Meng2, Tianjun Ke1, Feng Zhou1,3 1Center for Applied Statistics and School of Statistics, Renmin University of China 2Data Science Institute, University of Technology Sydney 3Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Codes are publicly available at https://github.com/KenCao2007/WSM_TPP. |
| Open Datasets | Yes | Stack Overflow [7]: This dataset contains two years of user awards on Stack Overflow. Each user received a sequence of badges, and there are K = 22 kinds of badges. [7] Jure Leskovec. SNAP Datasets: Stanford large network dataset collection. http://snap.stanford.edu/data, 2014. Retrieved December 2021. |
| Dataset Splits | Yes | For each dataset, we follow the default training/dev/testing split in the repository. |
| Hardware Specification | Yes | Experiments are performed using an NVIDIA A16 GPU, 15GB memory. |
| Software Dependencies | No | The paper mentions 'Adam [8] as the optimizer' but does not specify version numbers for any software dependencies like Python, PyTorch, or other libraries. |
| Experiment Setup | Yes | We run 500 iterations of gradient descent using Adam [8] as the optimizer for all scenarios. For MLE, the intensity integral is computed through numerical integration, with the number of integration nodes set to 100 to achieve a considerable level of accuracy. (Section 6.2, Training Protocol.) Table 4 provides the specific training hyperparameters, such as EPOCHS, αAWSM, TRUNC, αDSM, and σDSM. |
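To make the MLE baseline in the setup row concrete: the point-process log-likelihood contains an intensity integral (the compensator) that the paper evaluates numerically with 100 nodes. Below is a minimal, hypothetical sketch of that idea for an illustrative parametric intensity λ(t) = exp(a + b·t); the intensity form, learning rate, and plain gradient-descent loop are assumptions for illustration, not the paper's actual model or optimizer settings (the paper uses Adam).

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal rule (avoids NumPy-version differences in np.trapz)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def nll_and_grad(params, events, T, n_nodes=100):
    """Negative log-likelihood of a Poisson process with intensity
    lambda(t) = exp(a + b*t) on [0, T]; the compensator integral is
    approximated numerically with n_nodes integration nodes."""
    a, b = params
    t = np.linspace(0.0, T, n_nodes)          # integration nodes
    lam = np.exp(a + b * t)
    compensator = _trapz(lam, t)              # integral of lambda over [0, T]
    nll = compensator - np.sum(a + b * events)
    # Analytic gradients of the NLL w.r.t. a and b.
    grad_a = _trapz(lam, t) - len(events)
    grad_b = _trapz(t * lam, t) - events.sum()
    return nll, np.array([grad_a, grad_b])

def fit_mle(events, T, lr=0.05, iters=500):
    """Illustrative gradient descent (the paper uses Adam for 500 steps)."""
    params = np.zeros(2)
    for _ in range(iters):
        _, g = nll_and_grad(params, events, T)
        params -= lr * g / max(len(events), 1)  # step scaled by event count
    return params
```

This shows only why the number of integration nodes matters for MLE training: too few nodes bias the compensator term, which (A)WSM avoids by not requiring the integral at all.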