A Bridging Framework for Model Optimization and Deep Propagation

Authors: Risheng Liu, Shichao Cheng, Xiaokun Liu, Long Ma, Xin Fan, Zhongxuan Luo

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments verify our theoretical results and demonstrate the superiority of PODM against these state-of-the-art approaches.
Researcher Affiliation | Academia | Risheng Liu (1,2), Shichao Cheng (3), Xiaokun Liu (1), Long Ma (1), Xin Fan (1,2), Zhongxuan Luo (2,3); (1) International School of Information Science & Engineering, Dalian University of Technology; (2) Key Laboratory for Ubiquitous Network and Service Software of Liaoning Province; (3) School of Mathematical Science, Dalian University of Technology
Pseudocode | No | The paper describes the iterative processes and model components but does not provide structured pseudocode or an algorithm block.
Open Source Code | No | The paper does not provide any explicit statement or link indicating the availability of open-source code for the described methodology.
Open Datasets | Yes | We plotted the iteration behaviors of PODM on example images from the commonly used Set5 super-resolution benchmark [4] and compared it with the most popular numerical solvers (e.g., FISTA [3]) and the recently proposed representative network-based iteration methods (e.g., IRCNN [33]). We first conducted experiments on the most widely used Levin et al. benchmark [15], with 32 blurry images of size 255 × 255. We also evaluated all these compared methods on the more challenging Sun et al. benchmark [28], which includes 640 blurry images with 1% Gaussian noise, with sizes ranging from 620 × 1024 to 928 × 1024.
Dataset Splits | No | The paper mentions using common benchmark datasets but does not explicitly state training, validation, or test splits, such as percentages, sample counts, or a specific splitting methodology.
Hardware Specification | Yes | All the experiments are conducted on a PC with an Intel Core i7 CPU @ 3.6 GHz, 32 GB RAM, and an NVIDIA GeForce GTX 1060 GPU.
Software Dependencies | No | The paper mentions general architectural components such as a 'CNN architecture' and a 'multilayer perceptron' and references 'ELU [6] activations', but does not name specific software packages with version numbers needed for reproducibility.
Experiment Setup | No | The paper describes the network architecture (e.g., '6 convolutional layers', 'ELU activations') and some settings, such as H = µI/2 with µ = 1e-2 for PODM, but does not provide specific hyperparameter values such as learning rates, batch sizes, or optimizer settings needed for full experimental reproducibility.