Variational Autoencoders for Highly Multivariate Spatial Point Processes Intensities
Authors: Baichuan Yuan, Xiaowei Wang, Jianxin Ma, Chang Zhou, Andrea L. Bertozzi, Hongxia Yang
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show the method's utility on both synthetic data and real-world data sets. |
| Researcher Affiliation | Collaboration | Baichuan Yuan¹, Xiaowei Wang², Jianxin Ma², Chang Zhou², Andrea L. Bertozzi¹, Hongxia Yang² — ¹Department of Mathematics, University of California, Los Angeles; ²DAMO Academy, Alibaba Group |
| Pseudocode | Yes | Algorithm 1: Training VAE SPP with stochastic gradient descent. |
| Open Source Code | No | The paper states 'We implement our models in Tensorflow based on VAE-CF2', with footnote 2 pointing to 'https://github.com/dawenl/vae_cf'. This link is to a third-party VAE-CF implementation, not the authors' own VAE-SPP code, and there is no explicit statement of a code release. |
| Open Datasets | Yes | We consider the Gowalla data set (Cho et al., 2011) in New York City (NYC) and California (CA). ... MovieLens data sets (ML-100K and ML-1M) include the movie (item) rating by users |
| Dataset Splits | Yes | We split the data into training, validation and testing sets. ... We randomly select 500 users as the validation set and 500 users as the testing set. ... We set the size of both validation and testing sets to 100. |
| Hardware Specification | Yes | We conducted the experiments on a single GTX 1080 TI 11GB GPU. |
| Software Dependencies | No | The paper mentions 'Tensorflow', 'python statsmodel', and 'GPy' but does not provide specific version numbers for any of these software dependencies. |
| Experiment Setup | Yes | For simulation data, we train both models for 200 epochs using the Adam optimizer with β = 0.2, lr = 5 × 10⁻⁵. We use mini-batches of size 20. Our architectures consist of a one-layer MLP with K = 50. For VAE-SPP, σ² = 0.001. |
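The setup row above can be made concrete with a minimal sketch of the β-weighted VAE objective it describes (β = 0.2, K = 50 latent dimensions, one-layer MLP encoder, mini-batches of size 20). This is an illustrative assumption, not the authors' VAE-SPP code: the paper's point-process likelihood is replaced here by a Gaussian reconstruction term using the reported σ² = 0.001, and the weights are random rather than trained.

```python
import numpy as np

# Hedged sketch: beta-weighted ELBO with the hyperparameters reported in the
# table (beta = 0.2, K = 50, batch size 20, sigma^2 = 0.001). The input
# dimension D and all weights are illustrative placeholders.
rng = np.random.default_rng(0)
D, K, BETA, SIGMA2 = 200, 50, 0.2, 0.001

# One-layer MLP encoder producing the mean and log-variance of q(z|x),
# plus a linear decoder -- stand-ins for the paper's architecture.
W_mu = rng.normal(0, 0.01, (D, K))
W_lv = rng.normal(0, 0.01, (D, K))
W_dec = rng.normal(0, 0.01, (K, D))

def elbo(x):
    mu, logvar = x @ W_mu, x @ W_lv
    # Reparameterization trick: z = mu + sigma * eps
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)
    # Gaussian reconstruction log-likelihood (up to an additive constant)
    recon = -0.5 * np.sum((x - z @ W_dec) ** 2) / SIGMA2
    # KL divergence between q(z|x) and the standard normal prior N(0, I)
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    return recon - BETA * kl  # beta-weighted ELBO, maximized during training

x_batch = rng.standard_normal((20, D))  # one mini-batch of size 20
print(np.isfinite(elbo(x_batch)))
```

In a real run this objective would be maximized with Adam at lr = 5 × 10⁻⁵ for 200 epochs, per the quoted setup; the sketch only evaluates the objective once to show its shape.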