End-to-end Optimized Image Compression

Authors: Johannes Ballé, Valero Laparra, Eero P. Simoncelli

ICLR 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We jointly optimize the entire model for rate-distortion performance over a database of training images... Across an independent set of test images, we find that the optimized method generally exhibits better rate-distortion performance than the standard JPEG and JPEG 2000 compression methods.
Researcher Affiliation | Academia | Johannes Ballé, Center for Neural Science, New York University, New York, NY 10003, USA, johannes.balle@nyu.edu; Valero Laparra, Image Processing Laboratory, Universitat de València, 46980 Paterna, Spain, valero.laparra@uv.es; Eero P. Simoncelli, Center for Neural Science and Courant Institute of Mathematical Sciences, New York University, New York, NY 10003, USA, eero.simoncelli@nyu.edu
Pseudocode | No | The paper describes its transforms using mathematical equations and includes a binarization flowchart (Figure 9), but it does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | No | The paper provides a link to experimental results (compressed images and rate-distortion curves) but does not provide a link or explicit statement about the availability of the source code for the methodology itself.
Open Datasets | Yes | We jointly optimized the full set of parameters φ, θ, and all ψ over a subset of the ImageNet database (Deng et al., 2009) consisting of 6507 images using stochastic descent.
Dataset Splits | No | The paper states it optimized over a subset of the ImageNet database and evaluated on the Kodak dataset, noting that the test images were not included in the training set. However, it does not specify explicit percentages or sample counts for training, validation, and test splits (e.g., 80/10/10), nor does it describe how validation was performed during training.
Hardware Specification | No | The paper does not specify any particular hardware components such as GPU models, CPU types, or cloud computing instances used for the experiments.
Software Dependencies | No | The paper mentions using the Adam optimization algorithm and the CABAC framework, but it does not provide specific version numbers for any software libraries, programming languages, or tools used in the implementation.
Experiment Setup | Yes | We used the Adam optimization algorithm (Kingma and Ba, 2014) to obtain values for the parameters φ and θ, starting with α = 10⁻⁴, and subsequently lowering it by a factor of 10 whenever the improvement of both rate and distortion stagnated, until α = 10⁻⁷. For the grayscale analysis transform, we used 128 filters (size 9 × 9) in the first stage, each subsampled by a factor of 4 vertically and horizontally. ... We represented each of the marginals p_yi as a piecewise linear function (i.e., a linear spline), using 10 sampling points per unit interval.
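The paper's central objective, quoted in the Research Type row above, is joint rate-distortion optimization. The paper releases no code, so as an illustration only, here is a minimal NumPy sketch of a generic rate-distortion loss L = R + λ·D; the function name `rd_loss`, the λ value, and the toy arrays are all hypothetical, not taken from the paper:

```python
import numpy as np

def rd_loss(x, x_hat, code_probs, lam=0.01):
    """Hypothetical sketch of a rate-distortion objective L = R + lambda * D.

    Rate R: average negative log2-likelihood of the quantized codes under
    an entropy model (here, given directly as per-code probabilities).
    Distortion D: mean squared error between input and reconstruction.
    """
    rate = -np.mean(np.log2(code_probs))     # bits per code element
    distortion = np.mean((x - x_hat) ** 2)   # MSE
    return rate + lam * distortion

# Toy example: 4 pixel values and 4 code probabilities (made up).
x = np.array([0.2, 0.5, 0.8, 0.1])
x_hat = np.array([0.25, 0.45, 0.75, 0.15])
probs = np.array([0.5, 0.25, 0.25, 0.5])
loss = rd_loss(x, x_hat, probs, lam=0.01)
```

With these toy values the rate term is 1.5 bits and the distortion term 0.0025, so λ directly controls the trade-off the paper sweeps to trace its rate-distortion curves.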
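The Experiment Setup row describes a step-decay learning-rate schedule: start at α = 10⁻⁴ and divide by 10 each time rate and distortion stagnate, stopping at 10⁻⁷. A minimal sketch of that schedule, assuming stagnation events are detected externally (the function name and signature are hypothetical):

```python
def learning_rate(num_stagnations, schedule=(1e-4, 1e-5, 1e-6, 1e-7)):
    """Hypothetical sketch of the paper's Adam schedule: each stagnation
    of both rate and distortion drops alpha by a factor of 10, clamped
    at the final value 1e-7."""
    return schedule[min(num_stagnations, len(schedule) - 1)]
```

For example, `learning_rate(0)` gives 10⁻⁴ and any count of three or more stagnations gives the floor value 10⁻⁷; how the paper actually detected stagnation is not specified.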