Deep Decoder: Concise Image Representations from Untrained Non-convolutional Networks

Authors: Reinhard Heckel, Paul Hand

ICLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we demonstrate that the deep decoder, an untrained, non-convolutional neural network, defined in the next section, enables concise representation of an image on par with state-of-the-art wavelet thresholding. We draw 100 images from the ImageNet validation set uniformly at random and crop the center to obtain a 512x512 pixel color image. For each image x, we fit a deep decoder model G(C) by minimizing the loss L(C) = ||G(C) - x||_2^2 with respect to the network parameters C using the Adam optimizer. We then compute for each image the corresponding peak signal-to-noise ratio, defined as 10 log10(1/MSE). The results in Fig. 2 and Table 1 demonstrate that the deep decoder has denoising performance on par with state-of-the-art untrained denoising methods.
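The PSNR definition quoted above can be made concrete with a short sketch. This assumes pixel values scaled to [0, 1] (so the peak value is 1, matching 10 log10(1/MSE)); the function name `psnr` is ours, not from the paper.

```python
import numpy as np

def psnr(x_hat, x):
    """Peak signal-to-noise ratio in dB, for images scaled to [0, 1].

    Follows the definition quoted above: PSNR = 10 * log10(1 / MSE).
    """
    mse = np.mean((x_hat - x) ** 2)
    return 10.0 * np.log10(1.0 / mse)

# Toy check: a reconstruction off by 0.1 at every pixel has MSE = 0.01,
# so PSNR = 10 * log10(100) = 20 dB.
x = np.zeros((4, 4))
print(psnr(x + 0.1, x))  # ≈ 20.0
```

The same value could be obtained from library routines (e.g., scikit-image's `peak_signal_noise_ratio` with `data_range=1`); the inline version just makes the quoted definition explicit.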
Researcher Affiliation | Academia | Reinhard Heckel, Department of Electrical and Computer Engineering, Rice University, rh43@rice.edu; Paul Hand, Department of Mathematics and College of Computer and Information Science, Northeastern University, p.hand@northeastern.edu
Pseudocode | No | The paper describes the network architecture and operations but does not include any pseudocode or algorithm blocks.
Open Source Code | Yes | Code to reproduce the results is available at https://github.com/reinhardh/supplement_deep_decoder
Open Datasets | Yes | We draw 100 images from the ImageNet validation set uniformly at random and crop the center to obtain a 512x512 pixel color image.
Dataset Splits | No | The paper states, 'We draw 100 images from the ImageNet validation set uniformly at random', but does not specify a fixed split for reproducibility (e.g., a random seed or a specific list of images). Nor is there a separate validation set in the traditional machine-learning sense, since the deep decoder is an untrained model.
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., GPU models, CPU types).
Software Dependencies | No | The paper mentions using the Adam optimizer and ReLU activation, but it does not specify version numbers for any software libraries or frameworks (e.g., PyTorch, TensorFlow, scikit-learn).
Experiment Setup | Yes | We choose the number of parameters of the deep decoder, N, such that it is a small fraction of the output dimension of the deep decoder. ... we fit a deep decoder model G(C) by minimizing the loss L(C) = ||G(C) - x||_2^2 with respect to the network parameters C using the Adam optimizer. Throughout, our default architecture is a d = 6 layer network with k_i = k for all i, and we focus on output images of dimensions n_d = 512x512 and number of channels k_out = 3. Note that the number of parameters is given by N = sum_{i=1}^{d} (k_i k_{i+1} + 2 k_i) + k_out k_d, where the term 2 k_i corresponds to the two free parameters associated with the channel normalization. We use the Adam optimizer for minimizing the loss.
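The parameter-count formula in this row can be sanity-checked numerically. The helper below is a sketch: the function name is ours, and the example value k = 64 is an illustrative choice (the quoted text fixes only d = 6 and k_out = 3, not k).

```python
def deep_decoder_params(d=6, k=64, k_out=3):
    """Parameter count N = sum_{i=1}^{d} (k_i * k_{i+1} + 2*k_i) + k_out * k_d,
    with k_i = k for all i (the paper's default architecture).
    The 2*k_i term is the two channel-normalization parameters per channel.
    """
    ks = [k] * (d + 1)  # k_1 ... k_{d+1}, all equal in the default setup
    n = sum(ks[i] * ks[i + 1] + 2 * ks[i] for i in range(d))
    n += k_out * ks[d - 1]  # output layer contributes k_out * k_d weights
    return n

n = deep_decoder_params()        # d=6, k=64: 6*(64*64 + 2*64) + 3*64 = 25536
fraction = n / (512 * 512 * 3)   # ≈ 0.032 of the output dimension
print(n, fraction)
```

With these illustrative settings N is about 3% of the 512x512x3 output, consistent with the claim that N is chosen to be a small fraction of the output dimension.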