Lossy Image Compression with Compressive Autoencoders
Authors: Lucas Theis, Wenzhe Shi, Andrew Cunningham, Ferenc Huszár
ICLR 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compared our method to JPEG (Wallace, 1991), JPEG 2000 (Skodras et al., 2001), and the RNN-based method of Toderici et al. (2016b). We evaluated the different methods in terms of PSNR, SSIM (Wang et al., 2004a), and multiscale SSIM (MS-SSIM; Wang et al., 2004b). (A minimal PSNR sketch follows the table.) |
| Researcher Affiliation | Industry | Lucas Theis, Wenzhe Shi, Andrew Cunningham & Ferenc Huszár, Twitter, London, UK {ltheis,wshi,acunningham,fhuszar}@twitter.com |
| Pseudocode | No | The paper describes the architecture and method in text and diagrams, but does not include pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide a statement or link to their own open-source code for the methodology described. It only references third-party code used or adapted. |
| Open Datasets | Yes | For testing, we use the commonly used Kodak Photo CD dataset of 24 uncompressed 768 × 512 pixel images (http://r0k.us/graphics/kodak/). |
| Dataset Splits | No | Hyperparameters affecting network architecture and training were evaluated on a small set of held-out Flickr images. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments, only general statements about the computational efficiency of its network. |
| Software Dependencies | Yes | All networks were implemented in Python using Theano (2016) and Lasagne (Dieleman et al., 2015). |
| Experiment Setup | Yes | All models were trained using Adam (Kingma & Ba, 2015) applied to batches of 32 images of 128 × 128 pixels in size. ... the learning rate is reduced from an initial value of 10⁻⁴ to 10⁻⁵. Training was performed for up to 10⁶ updates... Here we used an initial learning rate of 10⁻³ and continuously decreased it by a factor of τ^κ / (τ + t)^κ, where t is the current number of updates performed, κ = 0.8, and τ = 1000. Scales were optimized for 10,000 iterations. (A sketch of this decay schedule follows the table.) |
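The learning-rate decay quoted in the experiment setup row can be written compactly. The sketch below is one reading of that schedule, with the initial rate, κ, and τ taken from the quote; the function name `decayed_learning_rate` is invented here for illustration and does not appear in the paper.

```python
def decayed_learning_rate(t, initial_rate=1e-3, kappa=0.8, tau=1000.0):
    """Learning rate after t updates under the tau^kappa / (tau + t)^kappa decay."""
    # At t = 0 the factor equals 1, so the schedule starts at initial_rate
    # and decays smoothly as the update count t grows.
    return initial_rate * (tau ** kappa) / ((tau + t) ** kappa)

# Example: the rate after 0, 1,000 and 10,000 updates.
for t in (0, 1_000, 10_000):
    print(t, decayed_learning_rate(t))
```

With κ = 0.8 and τ = 1000, the rate halves roughly every time τ + t grows by a factor of 2^(1/0.8), which matches the "continuously decreased" phrasing in the quote.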
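The research-type row lists PSNR among the evaluation metrics. As a concrete example, a minimal PSNR implementation is sketched below; this illustrates the metric's standard definition, is not code from the paper, and assumes 8-bit images with a peak value of 255.

```python
import numpy as np

def psnr(reference, distorted, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two equally shaped images."""
    # Compute the mean squared error in double precision to avoid uint8 overflow.
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10.0 * np.log10(max_value ** 2 / mse)
```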