HNO: High-Order Numerical Architecture for ODE-Inspired Deep Unfolding Networks

Authors: Lin Kong, Wei Sun, Fanhua Shang, Yuanyuan Liu, Hongying Liu (pp. 7220-7228)

AAAI 2022

Reproducibility Assessment (Variable / Result / LLM Response)

Research Type: Experimental
  "Extensive experiments verify the effectiveness and advantages of our architecture." "In this section, we first verify the effectiveness of our HNO architecture by comparing the performance of different DUNs on synthetic data. Then we evaluate our 2NO-LISTA, 2NO-GLISTA and 2NO-LISTA-CS for natural image inpainting and image CS tasks, respectively."

Researcher Affiliation: Academia
  "1 Key Lab. of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University; 2 Peng Cheng Laboratory"

Pseudocode: No
  No structured pseudocode or algorithm blocks were found in the paper.

Open Source Code: No
  The paper does not provide an explicit statement or link indicating that the source code for the described methodology is publicly available.

Open Datasets: Yes
  "We use BSD500 as the training set, Set 11 as the test set, and randomly extract 100,000 and 5,000 8×8 patches from the images in the BSD500 training set and validation set, respectively, for training." "We also adopt the same BSD500 for the training set, but a different Set 11 as the test set."

Dataset Splits: Yes
  "We use BSD500 as the training set, Set 11 as the test set, and randomly extract 100,000 and 5,000 8×8 patches from the images in the BSD500 training set and validation set, respectively, for training." "...randomly extract 30,000 and 5,000 patches with size 32×32 from the images in the BSD500 training set and validation set, respectively, for training."

Hardware Specification: No
  The paper does not specify the hardware (e.g., GPU/CPU models, memory) used to run its experiments.

Software Dependencies: No
  The paper does not list ancillary software with version numbers (e.g., library or solver names with versions) needed to replicate the experiments.

Experiment Setup: Yes
  "All experimental settings and all training follow the previous work (Chen et al. 2018b; Zhang and Ghanem 2018; Wu et al. 2020; Aberdam, Golts, and Elad 2020). For the learnable parameters, αt, βt, θt and W are initialized as 1.0, -0.5, λ/LΦ respectively."
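For context on what "learnable parameters initialized as λ/LΦ" means in a deep unfolding network, the following is a minimal sketch of a generic LISTA-style unfolded iteration (Gregor and LeCun's learned ISTA, which the paper's 2NO-LISTA variants build on). It is not the paper's HNO update rule; the variable names (`We`, `S`, `theta`) and the toy problem are illustrative assumptions. It shows the classical ISTA-derived initialization, where the threshold θ starts at λ/L with L the Lipschitz constant of ΦᵀΦ, matching the initialization quoted above; in a trained DUN these quantities become per-layer learnable parameters.

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of the l1 norm (the "soft shrinkage" nonlinearity).
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def lista_step(x, y, We, S, theta):
    # One unfolded ISTA/LISTA iteration: x <- soft(We @ y + S @ x, theta).
    return soft_threshold(We @ y + S @ x, theta)

# Toy sparse-coding problem: recover a sparse code x_true from y = Phi @ x_true.
rng = np.random.default_rng(0)
n, m = 16, 8
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[2, 9]] = [1.0, -0.5]
y = Phi @ x_true

# Classical ISTA initialization of the weights (per-layer learnable in a DUN):
#   We = Phi^T / L,  S = I - Phi^T Phi / L,  theta = lam / L,
# where L is the spectral norm (Lipschitz constant) of Phi^T Phi.
L = np.linalg.norm(Phi.T @ Phi, 2)
lam = 0.05
We = Phi.T / L
S = np.eye(n) - Phi.T @ Phi / L
theta = lam / L

# Running the untrained network is exactly ISTA; training would tune We, S, theta.
x = np.zeros(n)
for _ in range(200):
    x = lista_step(x, y, We, S, theta)
```

After enough iterations the estimate concentrates on the true support {2, 9}, with magnitudes slightly shrunk by the l1 penalty; learning the per-layer parameters is what lets unfolded networks reach comparable accuracy in far fewer layers.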