Rethinking the CSC Model for Natural Images

Authors: Dror Simon, Michael Elad

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the performance of the models using the BSD68 dataset that was excluded from the training set. Additional experiments and information can be found in the supplementary material and on https://github.com/drorsimon/CSCNet. Table 2 presents the results of our models compared to other leading methods, and Figure 3 shows some of the learned filters, taken from C.
Researcher Affiliation | Academia | Dror Simon, Department of Computer Science, Technion, Israel (dror.simon@cs.technion.ac.il); Michael Elad, Department of Computer Science, Technion, Israel (elad@cs.technion.ac.il)
Pseudocode | No | The paper describes iterative processes and mathematical formulations for algorithms such as ISTA, but it contains no structured pseudocode or clearly labeled algorithm blocks (a generic ISTA sketch is given after the table).
Open Source Code | Yes | Additional experiments and information can be found in the supplementary material and on https://github.com/drorsimon/CSCNet.
Open Datasets | Yes | The clean images are taken from the Waterloo Exploration Dataset [46] and 432 images from BSD [47].
Dataset Splits | No | The paper mentions preparing a training set and evaluating on the BSD68 dataset (which was excluded from training), but it does not give explicit train/validation/test splits (e.g., percentages, sample counts, or use of a validation set).
Hardware Specification | No | The paper does not report the hardware used for its experiments (e.g., GPU/CPU models, processor types, or memory amounts).
Software Dependencies | No | The paper mentions the ADAM optimizer but does not name any software libraries or frameworks with version numbers (e.g., Python, TensorFlow, or PyTorch versions).
Experiment Setup | Yes | To train the proposed model... In each iteration, a random patch of size 128 is cropped from an image and a random realization of noise is sampled. We train 4 models, one for each noise level σ ∈ {15, 25, 50, 75}. For each model we learn 175 filters of size 11×11, use a stride q = 8 and set L = 12. To learn the parameters of the model, we employ the ADAM optimizer [48] and minimize the ℓ2 loss... We use a learning rate of 10⁻⁴ and decrease it by a factor of 0.7 every 50 epochs and iterate over 250 epochs. To avoid divergence, we set the ϵ parameter of the optimizer to 10⁻³. (A training-loop sketch using these hyperparameters follows the table.)
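
Because the pseudocode row notes that ISTA appears in the paper only as prose and equations, a minimal sketch of a generic ISTA iteration may be useful. This solves the standard lasso problem min_x 0.5·||Dx − y||² + λ·||x||₁ with a dense dictionary; it is not the paper's convolutional formulation, and the function names, step-size computation, and iteration count are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam, n_iter=100):
    """Generic ISTA for min_x 0.5*||D x - y||^2 + lam*||x||_1.

    D is a dense dictionary purely for illustration; the paper works
    with a convolutional dictionary instead.
    """
    # 1/step is the Lipschitz constant of the gradient of the smooth
    # term, i.e., the squared spectral norm of D (largest eigenvalue
    # of D^T D).
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)  # gradient of 0.5*||Dx - y||^2
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

The paper's model effectively unfolds a small, fixed number of such iterations (L = 12 in the quoted setup) into a trainable network, with convolutional operators taking the place of the dense D.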
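
The experiment-setup row fixes the optimization schedule precisely enough to sketch it. The PyTorch snippet below reproduces only the quoted hyperparameters (ADAM with learning rate 10⁻⁴ and ε = 10⁻³, decay by a factor of 0.7 every 50 epochs, 250 epochs, ℓ2 loss, a fresh noise realization per iteration); the two-layer model, the synthetic clean_crops tensor, and the single noise level sigma are placeholders standing in for the paper's actual CSCNet architecture and its Waterloo/BSD data pipeline, which live in the linked repository.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# Shape-preserving placeholder model; the paper's network uses 175
# filters of size 11x11 with stride q = 8 in an unfolded-ISTA
# structure (see https://github.com/drorsimon/CSCNet).
model = nn.Sequential(
    nn.Conv2d(1, 175, kernel_size=11, padding=5),
    nn.ReLU(),
    nn.Conv2d(175, 1, kernel_size=11, padding=5),
)

# Synthetic stand-in for 128-pixel crops from the training images,
# assumed to lie in [0, 1].
clean_crops = torch.rand(32, 1, 128, 128)
loader = DataLoader(TensorDataset(clean_crops), batch_size=8, shuffle=True)

sigma = 25  # one of the four trained noise levels {15, 25, 50, 75}

# Quoted settings: ADAM, lr 1e-4 with eps 1e-3, decayed by 0.7 every
# 50 epochs, 250 epochs total, l2 (MSE) loss.
optimizer = optim.Adam(model.parameters(), lr=1e-4, eps=1e-3)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.7)
criterion = nn.MSELoss()

for epoch in range(250):
    for (clean,) in loader:
        # A fresh noise realization is sampled each iteration.
        noisy = clean + (sigma / 255.0) * torch.randn_like(clean)
        optimizer.zero_grad()
        loss = criterion(model(noisy), clean)
        loss.backward()
        optimizer.step()
    scheduler.step()
```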