A Continuous Time Framework for Discrete Denoising Models

Authors: Andrew Campbell, Joe Benton, Valentin De Bortoli, Thomas Rainforth, George Deligiannidis, Arnaud Doucet

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate our proposed method on the generative modeling of images from the CIFAR-10 dataset and monophonic music sequences. Notably, we find our tau-leaping with predictor-corrector sampler can provide higher quality CIFAR10 samples than previous discrete time discrete state approaches, further closing the performance gap between when images are modeled as discrete data or as continuous data."
Researcher Affiliation | Academia | "1 Department of Statistics, University of Oxford, UK; 2 CNRS, ENS Ulm, Paris, France"
Pseudocode | Yes | "We refer to our method of using tau-leaping to simulate the reverse CTMC as τLDR (tau-leaping denoising reversal) which we formalize in Algorithm 1 in Appendix F."
Open Source Code | Yes | "The code is available at https://github.com/andrew-cr/tauLDR"
Open Datasets | Yes | "We demonstrate our proposed method on the generative modeling of images from the CIFAR-10 dataset and monophonic music sequences. [...] We model songs from the Lakh pianoroll dataset [29, 30]."
Dataset Splits | No | The paper mentions training epochs and batch sizes, but does not provide specific training/validation/test dataset splits, percentages, or references to predefined splits.
Hardware Specification | Yes | "This project made use of time on Tier 2 HPC facility JADE2, funded by EPSRC (EP/T022205/1)."
Software Dependencies | No | The paper mentions using PyTorch for implementation but does not specify version numbers for PyTorch or other software dependencies.
Experiment Setup | Yes | "The model is trained for 200 epochs using Adam with a learning rate of 0.0001 (0.00005 for the final 50 epochs) with a batch size of 128."
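The τLDR sampler named in the Pseudocode row simulates the reverse CTMC by tau-leaping: over each leap of length τ, the number of times each possible jump fires is drawn from a Poisson distribution with mean rate × τ, and all fired jumps are applied at once rather than one event at a time. A minimal generic sketch of this idea (not the authors' Algorithm 1; the `rate_fn` callable and its per-dimension `(D, S)` rate layout are illustrative assumptions):

```python
import numpy as np

def tau_leap_ctmc(x0, rate_fn, t_end, tau, rng=None):
    """Approximately simulate a factorized CTMC with tau-leaping.

    x0      : initial state, integer array of shape (D,), one entry per dimension
    rate_fn : hypothetical callable (x, t) -> non-negative rates of shape (D, S),
              the jump rate from x[d] into each of S states (rate to stay is 0)
    t_end   : total simulation time
    tau     : leap size; all jumps within each interval of length tau are
              applied simultaneously (exact only in the limit tau -> 0)
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=int)
    t = 0.0
    while t < t_end:
        rates = rate_fn(x, t)                  # (D, S) jump rates at current state
        jumps = rng.poisson(rates * tau)       # Poisson count for every possible jump
        for d in range(len(x)):
            fired = np.flatnonzero(jumps[d])   # target states that fired this leap
            if fired.size:                     # if several fired for one dimension,
                x[d] = rng.choice(fired)       # resolve by picking one at random
        t += tau
    return x
```

In a denoising model, `rate_fn` would come from the learned reverse-rate network; here it is left abstract so the leap mechanics are visible. Smaller `tau` trades compute for fidelity, since the chance of conflicting jumps within one leap shrinks with the leap size.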