Improved Imaging by Invex Regularizers with Global Optima Guarantees

Authors: Samuel Pinilla, Tingting Mu, Neil Bourne, Jeyan Thiyagalingam

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To demonstrate the effectiveness of invex regularization, numerical experiments are conducted for various imaging tasks using benchmark datasets.
Researcher Affiliation | Academia | Samuel Pinilla (1,2), Tingting Mu (3), Neil Bourne (2), Jeyan Thiyagalingam (1). (1) Scientific Computing Department, Science and Technology Facilities Council, Harwell, UK; (2) University of Manchester at Harwell, UK; (3) Computer Science, University of Manchester, UK. Emails: {samuel.pinilla,t.jeyan}@stfc.ac.uk, {tingting.mu,neil.bourne}@manchester.ac.uk
Pseudocode | Yes | Algorithm 1: Accelerated Proximal Gradient
Open Source Code | No | The paper points to third-party implementations (e.g., "We used Noise2Void implementation at https://github.com/juglab/n2v" and "We used the implementation from [86] at https://github.com/VITA-Group/LISTA-CPSS") but does not provide access to source code for the methodology developed in this paper itself.
Open Datasets | Yes | Several datasets are merged into a single dataset for training and evaluation: DIV2K super-resolution [88], McMaster [89], Kodak [90], Berkeley Segmentation (BSDS500) [91], Tampere Images (TID2013) [92], and Color BSD68 [93].
Dataset Splits | Yes | When neural network training is involved, a total of 900 images is randomly divided into a training set of 800 images, a validation set of 55 images, and a test set of 45 images.
Hardware Specification | No | The paper does not specify the hardware (e.g., exact GPU/CPU models, processor types, or memory amounts) used to run its experiments.
Software Dependencies | No | The paper mentions software such as the Adam optimization algorithm and Noise2Void, but does not provide version numbers for any software dependency.
Experiment Setup | Yes | For Algorithm 1 and its plug-and-play variant, the parameters λ, α1, and α2 were chosen by cross-validation to be the best for each analyzed function, and the initial point x(0) was the blurred image b. For learning ReconNet, 33 × 33 patches are extracted from the noisy blurred training image set, and the network is trained with the Adam optimizer at a learning rate of 5 × 10−4 for 512 epochs with a batch size of 128.
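The pseudocode identified in the table is Algorithm 1, an accelerated proximal gradient method initialized at the blurred image b. A minimal FISTA-style sketch of such a loop, assuming a gradient `grad_f` of the smooth data-fidelity term, a proximal operator `prox_g` for the regularizer, and a step size `alpha` (all hypothetical names, not taken from the paper):

```python
import numpy as np

def accelerated_proximal_gradient(grad_f, prox_g, x0, alpha, n_iter=100):
    """Generic accelerated (Nesterov/FISTA-style) proximal gradient loop.

    grad_f: gradient of the smooth data-fidelity term
    prox_g: proximal operator of the regularizer, prox_g(v, step)
    x0:     initial point (e.g. the blurred image b, as in the paper)
    alpha:  step size
    """
    x_prev = np.asarray(x0, dtype=float).copy()
    y = x_prev.copy()
    t = 1.0
    for _ in range(n_iter):
        # proximal gradient step at the extrapolated point y
        x = prox_g(y - alpha * grad_f(y), alpha)
        # momentum update
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)
        x_prev, t = x, t_next
    return x_prev
```

As a sanity check, with a quadratic fidelity term and an ℓ1 regularizer the loop recovers the soft-thresholding solution; the paper's invex regularizers would supply a different `prox_g`.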
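The 800/55/45 random split of the 900 merged images described in the Dataset Splits row can be sketched as follows; the seed and the list of image identifiers are illustrative, not specified in the paper:

```python
import random

def split_dataset(images, n_train=800, n_val=55, n_test=45, seed=0):
    """Randomly partition 900 image identifiers into train/val/test sets."""
    assert len(images) == n_train + n_val + n_test
    rng = random.Random(seed)  # seed is an assumption; the paper gives none
    shuffled = list(images)
    rng.shuffle(shuffled)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test
```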
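The Experiment Setup row mentions extracting 33 × 33 patches from the training images for ReconNet. A minimal sketch of non-overlapping patch extraction; the stride is an assumption, since the paper does not state whether patches overlap:

```python
import numpy as np

def extract_patches(image, patch=33, stride=33):
    """Extract patch x patch blocks from a 2-D image.

    stride=33 yields non-overlapping patches; this is an assumption,
    as the paper does not specify the sampling stride.
    """
    h, w = image.shape[:2]
    patches = []
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            patches.append(image[i:i + patch, j:j + patch])
    return np.stack(patches)
```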