Greedy Poisson Rejection Sampling

Authors: Gergely Flamich

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we empirically verify our theorems, demonstrating that GPRS significantly outperforms the current state-of-the-art method, A* coding. Our code is available at https://github.com/gergely-flamich/greedy-poisson-rejection-sampling."
Researcher Affiliation | Academia | "Gergely Flamich, Department of Engineering, University of Cambridge, gf332@cam.ac.uk"
Pseudocode | Yes (a minimal sketch follows this table) | "Algorithm 1: Generating a (λ, P_X|T)-Poisson process. Algorithm 2: Standard rejection sampler. Algorithm 3: Greedy Poisson rejection sampler. Algorithm 4: Parallel GPRS with J available threads. Algorithm 5: Branch-and-bound GPRS on R with unimodal r. Algorithm 6: Branch-and-bound GPRS with splitting function."
Open Source Code | Yes | "Our code is available at https://github.com/gergely-flamich/greedy-poisson-rejection-sampling."
Open Datasets | No | The paper does not use a named, publicly available dataset with a link or formal citation; it generates data from specified distributions (e.g., Gaussian) for its experiments.
Dataset Splits | No | The experiments run on data simulated from specified distributions, so the paper does not mention traditional train/validation/test splits.
Hardware Specification | No | The paper does not specify hardware details such as GPU models, CPU types, or memory used to run the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | "We use a setup similar to the one used by Theis & Yosri (2022). Concretely, we assume the following model for correlated random variables x, μ: P_μ = N(0, 1), P_x|μ = N(μ, σ²)."
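The Pseudocode row above lists a standard rejection sampler (Algorithm 2) as one of the paper's baselines. For reference, below is a minimal sketch of textbook rejection sampling between two Gaussians; the function name, the Gaussian pair, and the bound M are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np
from scipy import stats

def rejection_sample(target_pdf, proposal, M, rng):
    """Textbook rejection sampler: draw X ~ proposal and accept
    when U * M * proposal.pdf(X) <= target_pdf(X)."""
    while True:
        x = proposal.rvs(random_state=rng)
        u = rng.uniform()
        if u * M * proposal.pdf(x) <= target_pdf(x):
            return x

# Illustrative example (not the paper's setup): target N(1, 0.5^2), proposal N(0, 1).
rng = np.random.default_rng(0)
target = stats.norm(loc=1.0, scale=0.5)
proposal = stats.norm(loc=0.0, scale=1.0)

# M must upper-bound the density ratio target/proposal; here the ratio is
# bounded (the target has lighter tails), and a grid search gives a safe value.
grid = np.linspace(-10.0, 10.0, 10001)
M = (target.pdf(grid) / proposal.pdf(grid)).max() * 1.01

samples = [rejection_sample(target.pdf, proposal, M, rng) for _ in range(1000)]
print(np.mean(samples), np.std(samples))  # should be close to 1.0 and 0.5
```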
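The Experiment Setup row quotes the paper's correlated Gaussian model, μ ~ N(0, 1) and x | μ ~ N(μ, σ²). A minimal sketch of simulating that model is below; the noise scale σ and the sample count are placeholders, not values reported by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5           # placeholder noise scale, not taken from the paper
num_samples = 10_000  # placeholder sample count

# P_mu = N(0, 1): marginal distribution of mu.
mu = rng.normal(loc=0.0, scale=1.0, size=num_samples)
# P_{x|mu} = N(mu, sigma^2): x is a noisy observation of mu.
x = rng.normal(loc=mu, scale=sigma)

# Sanity check: under this model the marginal of x is N(0, 1 + sigma^2).
print(x.mean(), x.var(), 1.0 + sigma**2)
```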