Perceptual Learned Video Compression with Recurrent Conditional GAN

Authors: Ren Yang, Radu Timofte, Luc Van Gool

IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results show that the learned PLVC model compresses video with good perceptual quality at low bit-rates, outperforming the official HEVC test model (HM 16.20) and existing learned video compression approaches on several perceptual quality metrics and in user studies.
Researcher Affiliation | Academia | 1ETH Zürich, Switzerland; 2Julius Maximilian University of Würzburg, Germany; 3KU Leuven, Belgium
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks in the main text.
Open Source Code | Yes | The project page is available at https://github.com/RenYang-home/PLVC.
Open Datasets | Yes | Our PLVC model is trained on the Vimeo-90k [Xue et al., 2019] dataset. We use the JCT-VC [Bossen, 2013] (Classes B, C and D) and the UVG [Mercat et al., 2020] datasets as the test sets.
Dataset Splits | No | The paper states training on Vimeo-90k and testing on JCT-VC and UVG, but does not provide specific details on the training/validation/test splits, percentages, or sample counts for the Vimeo-90k dataset.
Hardware Specification | No | The paper does not provide specific details on the hardware used for running experiments, such as GPU models, CPU types, or cloud computing specifications.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., PyTorch 1.9, Python 3.8) for replication.
Experiment Setup | Yes | In (4), α, λ and β are hyper-parameters that control the trade-off between bit-rate, distortion and perceptual quality. We set three target bit-rates RT, and set α = α1 when R(yi) > RT and α = α2 (α2 ≪ α1) when R(yi) < RT to control bit-rates. The hyper-parameters are shown in Table 1.
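The adaptive weighting described in the Experiment Setup row can be sketched as follows. This is a minimal illustration, assuming the loss in Eq. (4) is a weighted sum of rate, distortion and perceptual terms; the function names and the numeric values of alpha1, alpha2, lam, beta and the target rate are hypothetical placeholders (the paper's actual values are in its Table 1).

```python
def rate_weight(rate, target_rate, alpha1=1.0, alpha2=0.01):
    """Adaptive rate penalty: a heavy weight (alpha1) when the estimated
    bit-rate exceeds the target, a much lighter one (alpha2 << alpha1)
    when it is below the target."""
    return alpha1 if rate > target_rate else alpha2


def generator_loss(rate, distortion, perceptual, target_rate,
                   alpha1=1.0, alpha2=0.01, lam=0.05, beta=0.1):
    """Hypothetical weighted-sum form of Eq. (4): bit-rate, distortion
    and perceptual-quality terms traded off by alpha, lambda and beta."""
    alpha = rate_weight(rate, target_rate, alpha1, alpha2)
    return alpha * rate + lam * distortion + beta * perceptual
```

With this scheme the rate term dominates the gradient only while the model overshoots the target bit-rate, which is how the three target bit-rates RT steer training toward three operating points.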