Exact, Fast and Expressive Poisson Point Processes via Squared Neural Families

Authors: Russell Tsuchida, Cheng Soon Ong, Dino Sejdinovic

AAAI 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | “We demonstrate SNEPPPs on real and synthetic benchmarks, and provide a software implementation.” “We empirically demonstrate the efficacy and efficiency of our open source method on a large scale case study of wildfire data from NASA with about 100 million events.” |
| Researcher Affiliation | Collaboration | Russell Tsuchida (Data61-CSIRO), Cheng Soon Ong (Data61-CSIRO; Australian National University), Dino Sejdinovic (University of Adelaide) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper describes its method as “open source” and states that it provides a “software implementation”, but it gives no repository link, no explicit code release statement (e.g., “code available at URL”), and no mention of code in supplementary materials. |
| Open Datasets | Yes | “Using NASA wildfire data (right) (NASA FIRMS 2023).” “We use a massive freely available dataset (NASA FIRMS 2023).” “We also perform benchmarks on three real datasets bei, copper and clmfires also considered by Kim, Asami, and Toda (2022).” |
| Dataset Splits | No | The paper mentions splitting the data by subsampling and thinning (“we split the original dataset into disjoint training and test sets by subsampling”; “artificially but exactly obtain multiple independent realisations from a single realisation of data by a process called splitting or thinning”), but it does not give specific percentages, sample counts, or a detailed splitting methodology. A sketch of such thinning appears below the table. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or processor types) used for running its experiments. |
| Software Dependencies | No | The paper mentions using the “TensorFlow library” and “Adam” for optimisation but does not provide version numbers for these or any other software dependencies. |
| Experiment Setup | No | The paper states that it uses “Adam to optimise parameters” and suggests a learning rate based on a theoretical β, but it does not report concrete hyperparameter values (e.g., Adam's learning rate, batch size, number of epochs) or other system-level training settings. A sketch of what such a training loop might look like follows the thinning example below. |
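
On the Dataset Splits point: splitting a single Poisson process realisation into disjoint, independent training and test sets relies on the independent-thinning property, under which each event is kept with some probability p independently of all other events, yielding two independent Poisson processes with intensities p·λ and (1 − p)·λ. The sketch below illustrates the idea in NumPy; the retention probability p = 0.5, the seed, and the (n, 2) array layout are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def thin_split(events, p=0.5, seed=0):
    """Split one Poisson process realisation into two independent ones.

    Each event is assigned to the training set independently with
    probability p and to the test set otherwise. By the independent
    thinning property of Poisson processes, the two resulting point
    patterns are independent Poisson processes with intensities
    p * lambda and (1 - p) * lambda.
    """
    rng = np.random.default_rng(seed)
    keep = rng.random(len(events)) < p
    return events[keep], events[~keep]

# Illustrative usage on synthetic planar events (an (n, 2) array of locations).
events = np.random.default_rng(1).random((1000, 2))
train, test = thin_split(events, p=0.5)
```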
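
On the Experiment Setup point: fitting a Poisson point process by maximum likelihood means maximising Σᵢ log λ(xᵢ) − ∫_X λ(x) dx, and the SNEPPP construction makes the integral term available in closed form. Since the paper reports no concrete settings, the TensorFlow sketch below is only a minimal illustration of such a training loop: the squared feature-map intensity, the Monte Carlo stand-in for the closed-form integral, the learning rate of 1e-3, and the step count are all assumptions, not the authors' model or hyperparameters.

```python
import tensorflow as tf

# Placeholder "squared" intensity lambda(x) = ||f(x) V||^2 on [0, 1]^2 -- a
# generic stand-in for a squared neural family, not the authors' exact model.
class SquaredIntensity(tf.Module):
    def __init__(self, dim=2, width=32, rank=8):
        self.w = tf.Variable(tf.random.normal([dim, width]))
        self.v = tf.Variable(tf.random.normal([width, rank]))

    def __call__(self, x):
        feats = tf.math.cos(x @ self.w)                  # feature map
        return tf.reduce_sum((feats @ self.v) ** 2, -1)  # non-negative intensity

def integrated_intensity(model, n_mc=4096):
    # SNEPPPs evaluate the integral of the intensity in closed form; here a
    # Monte Carlo average over the unit square stands in for illustration.
    u = tf.random.uniform([n_mc, 2])
    return tf.reduce_mean(model(u))  # domain volume is 1

model = SquaredIntensity()
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)  # assumed value
events = tf.random.uniform([500, 2])  # synthetic events, not NASA FIRMS data

for step in range(200):
    with tf.GradientTape() as tape:
        # Negative PPP log-likelihood: integral term minus sum of log-intensities.
        nll = integrated_intensity(model) - tf.reduce_sum(
            tf.math.log(model(events) + 1e-12))
    grads = tape.gradient(nll, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```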