An inner-loop free solution to inverse problems using deep neural networks
Authors: Kai Fan, Qi Wei, Lawrence Carin, Katherine A. Heller
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on both synthetic data and real datasets demonstrate the efficiency and accuracy of the proposed method compared with the conventional ADMM solution using inner loops for solving inverse problems. |
| Researcher Affiliation | Academia | Kai Fan (Duke University, kai.fan@stat.duke.edu); Qi Wei (Duke University, qi.wei@duke.edu); Lawrence Carin (Duke University, lcarin@duke.edu); Katherine Heller (Duke University, kheller@stat.duke.edu) |
| Pseudocode | Yes | Algorithm 1: Inner-loop free ADMM with Auxiliary Deep Neural Nets (Inf-ADMM-ADNN). Training stage: 1) train net K_φ for inverting AᵀA + βI; 2) train cPSDAE for the proximity operator of R(x; y). Testing stage: for t = 1, 2, ... do: update x cf. x^{k+1} = F⁻¹(v); update z cf. (10); update u cf. (5); end for |
| Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing the source code for their methodology, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | We have tested our algorithm on the MNIST dataset [14] and the 11K images of the Caltech-UCSD Birds-200-2011 (CUB-200-2011) dataset [28]. In the first two rows of Fig. 3, high resolution images, as shown in the last column, have been blurred (convolved) using a Gaussian kernel of size 3 × 3 and downsampled every 4 pixels in both vertical and horizontal directions to generate the corresponding low resolution images as shown in the first column. ... on the CelebA dataset [16]. |
| Dataset Splits | No | The paper mentions a '20% held-out test set' but does not explicitly state the use of a separate validation set, nor does it provide detailed train/validation/test splits or cross-validation information. |
| Hardware Specification | No | The paper mentions 'NVIDIA for the GPU donations' in the acknowledgments, indicating that GPUs were used. However, it does not specify any particular GPU model, CPU, memory, or other detailed hardware specifications for the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software components, libraries, or programming languages used in the experiments. |
| Experiment Setup | Yes | It is interesting to note that when β is large, e.g., 0.1 or 0.01, the NMSE of ADMM updates converges to a stable value rapidly in a few iterations (less than 10). Reducing the value of β slows down the decay of NMSE over iterations but reaches a lower stable value. When the value of β is small enough, e.g., β = 0.0001, 0.0005, 0.001, the NMSE converges to the identical value. |
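The testing-stage loop in Algorithm 1 is a standard scaled-form ADMM in which the two expensive inner solves — inverting AᵀA + βI for the x-update and evaluating the proximity operator of R for the z-update — are each replaced by a trained network. The sketch below illustrates that outer loop in NumPy. The callables `invert_net` and `prox_net` are hypothetical stand-ins for the paper's K_φ and cPSDAE networks (not the authors' code, which is not released); the toy usage plugs in an exact matrix inverse and an identity prox so the loop can be run end to end.

```python
import numpy as np

def inf_admm(y, A, invert_net, prox_net, beta=1e-3, n_iters=100):
    """Inner-loop-free ADMM outer loop (Inf-ADMM-ADNN style sketch).

    invert_net : stand-in for the trained net K_phi that approximates
                 multiplication by (A^T A + beta I)^{-1}.
    prox_net   : stand-in for the trained cPSDAE that approximates the
                 proximity operator of the regularizer R.
    Both callables are assumptions for illustration, not the paper's API.
    """
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    for _ in range(n_iters):
        # x-update: x = (A^T A + beta I)^{-1} (A^T y + beta (z - u)),
        # the linear solve replaced by the inversion network
        v = A.T @ y + beta * (z - u)
        x = invert_net(v)
        # z-update: proximity operator of R, replaced by the denoising net
        z = prox_net(x + u)
        # u-update: dual ascent on the scaled dual variable
        u = u + x - z
    return x

# Toy usage with exact stand-ins for the two networks:
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = rng.standard_normal(10)
y = A @ x_true                      # noiseless observation
beta = 1e-3
M = np.linalg.inv(A.T @ A + beta * np.eye(10))
x_hat = inf_admm(y, A,
                 invert_net=lambda v: M @ v,   # exact inversion stand-in
                 prox_net=lambda v: v,         # identity prox (R = 0)
                 beta=beta)
```

With the identity prox and an exact inverse, the loop reduces to a damped least-squares iteration, so `x_hat` recovers `x_true` on this noiseless toy problem; in the paper, both stand-ins are learned networks, which is what removes the per-iteration inner loops of conventional ADMM.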