Deep Generative Video Compression

Authors: Salvator Lombardo, Jun Han, Christopher Schroers, Stephan Mandt

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Rate-distortion evaluations on small videos from public data sets with varying complexity and diversity show that our model yields competitive results when trained on generic video content.
Researcher Affiliation | Collaboration | Jun Han, Dartmouth College (junhan@cs.dartmouth.edu); Salvator Lombardo, Disney Research LA (salvator.d.lombardo@disney.com); Christopher Schroers, Disney Research|Studios (christopher.schroers@disney.com); Stephan Mandt, University of California, Irvine (mandt@uci.edu)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | Yes | 1) Sprites. The simplest dataset consists of videos of Sprites characters from an open-source video game project, which is used in (Reed et al., 2015; Mathieu et al., 2016; Li & Mandt, 2018). ... 2) BAIR. The BAIR robot pushing dataset (Ebert et al., 2017) consists of a robot pushing objects on a table, which is also used in (Babaeizadeh et al., 2018; Denton & Fergus, 2018; Lee et al., 2018). ... 3) Kinetics600. The last dataset is the Kinetics600 dataset (Kay et al., 2017), which is a diverse set of YouTube videos depicting human actions. (A dataset-loading sketch follows the table.)
Dataset Splits | No | The paper does not provide specific dataset split information for training, validation, or testing.
Hardware Specification | No | The paper does not mention the specific hardware (e.g., GPU/CPU models) used to run the experiments.
Software Dependencies | No | The paper mentions an "open source FFMPEG implementation" but does not provide version numbers for FFMPEG or any other ancillary software dependencies.
Experiment Setup | Yes | "Our curves are generated by varying β (Eq. 7)." (See the rate-distortion sweep sketch after the table.)
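The "Open Datasets" row lists Sprites, BAIR, and Kinetics600 as publicly available. As one hedged illustration, the BAIR robot pushing videos are mirrored in TensorFlow Datasets under the builder name `bair_robot_pushing_small`; this is an assumption about the public mirror rather than a pointer given in the paper (the paper cites Ebert et al., 2017), and Sprites and Kinetics600 are distributed through their own project pages.

```python
# Hedged sketch: loading the BAIR robot pushing videos via TensorFlow Datasets.
# The builder name refers to the public TFDS mirror and is an assumption on our
# part; it is not a download location stated in the paper.
import tensorflow_datasets as tfds

ds = tfds.load("bair_robot_pushing_small", split="train", shuffle_files=True)
for example in ds.take(1):
    # Inspect the available fields and their shapes (video frames, actions, etc.)
    print({key: value.shape for key, value in example.items()})
```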
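The "Experiment Setup" row notes that the paper's rate-distortion curves are produced by varying β in its Eq. 7. Below is a minimal sketch of how such a sweep could be organized, assuming a β-weighted objective of the form L = distortion + β · rate (our reading of the role of β, not the paper's exact equation); the toy model, the Gaussian stand-in rate term, and the sweep values are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumption): train one model per beta and record its
# (rate, distortion) operating point, which traces out a rate-distortion curve.
import torch
import torch.nn as nn

class ToyVideoCoder(nn.Module):
    """Placeholder autoencoder standing in for the actual compression model."""
    def __init__(self, dim=64, latent=8):
        super().__init__()
        self.enc = nn.Linear(dim, latent)
        self.dec = nn.Linear(latent, dim)

    def forward(self, x):
        z = self.enc(x)
        # Standard-normal prior as a stand-in rate model: -log p(z) in nats (up to a constant)
        rate = 0.5 * (z ** 2).sum(dim=-1).mean()
        distortion = ((x - self.dec(z)) ** 2).mean()
        return distortion, rate

def rd_point(beta, steps=200):
    """Train at a fixed beta and return the resulting (rate, distortion) point."""
    torch.manual_seed(0)
    model = ToyVideoCoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    data = torch.randn(256, 64)                # stand-in for video frames
    for _ in range(steps):
        distortion, rate = model(data)
        loss = distortion + beta * rate        # beta trades off rate against distortion
        opt.zero_grad()
        loss.backward()
        opt.step()
    return rate.item(), distortion.item()

if __name__ == "__main__":
    for beta in (0.001, 0.01, 0.1, 1.0):       # illustrative sweep values
        r, d = rd_point(beta)
        print(f"beta={beta:g}  rate={r:.3f} nats  distortion={d:.4f}")
```

Larger β values penalize the rate term more heavily, so the sweep moves the operating point toward lower bitrates at the cost of higher distortion, which is how a single trained family of models yields a full rate-distortion curve.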