Global Optimality for Non-linear Constrained Restoration Problems via Invexity

Authors: Samuel Pinilla, Jeyan Thiyagalingam

ICLR 2024

Reproducibility assessment (variable, result, and LLM response for each item):
Research Type: Experimental
LLM Response: "Moreover, our numerical tests show that the proposed family of invex/quasi-invex functions overcome the performance of convex mappings in terms of reconstruction quality across several relevant metrics and imaging tasks. ... We conduct an effective evaluation of the proposed approach in handling a number of signal restoration problems against state-of-the-art algorithms and baselines."
Researcher Affiliation: Academia
LLM Response: "Samuel Pinilla & Jeyan Thiyagalingam, Scientific Computing Department, Science and Technology Facilities Council, Rutherford Appleton Laboratory, Harwell, UK. samuel.pinilla@stfc.ac.uk"
Pseudocode: No
LLM Response: "The paper presents algorithmic steps for ADMM and APGM using mathematical equations and descriptions, but does not include formal pseudocode blocks or labeled algorithm boxes."
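Since the paper describes its APGM (accelerated proximal gradient method) steps in prose rather than pseudocode, the following is a hedged sketch of a generic FISTA-style accelerated proximal gradient iteration, not the authors' exact update rules; the objective, step size, and proximal operator below are illustrative assumptions.

```python
import math

def soft_threshold(v, t):
    # Proximal operator of t*|x| (soft thresholding), used here as an
    # illustrative non-smooth term; the paper's actual regularizers differ.
    return math.copysign(max(abs(v) - t, 0.0), v)

def apgm(grad_f, prox_g, x0, step, T=1000):
    """Generic accelerated proximal gradient method (FISTA-style sketch).

    grad_f: gradient of the smooth part f
    prox_g: proximal operator of the non-smooth part g, called as prox_g(v, step)
    T:      iteration count (the paper reports T = 1000 for its APGM)
    """
    x_prev, y, t_prev = x0, x0, 1.0
    for _ in range(T):
        # Proximal gradient step at the extrapolated point y.
        x = prox_g(y - step * grad_f(y), step)
        # Nesterov momentum update of the extrapolation weight.
        t = (1.0 + math.sqrt(1.0 + 4.0 * t_prev ** 2)) / 2.0
        y = x + ((t_prev - 1.0) / t) * (x - x_prev)
        x_prev, t_prev = x, t
    return x_prev

# Toy problem: minimize 0.5*(x - 3)^2 + |x|, whose closed-form minimizer is x = 2.
sol = apgm(grad_f=lambda x: x - 3.0, prox_g=soft_threshold,
           x0=0.0, step=1.0, T=200)
```

This is scalar for brevity; the same iteration applies coordinate-wise to arrays in the imaging setting.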
Open Source Code: No
LLM Response: "The paper provides links to third-party libraries (CuPy, PyTorch) used for implementation, but no direct link to the authors' own open-source code for the described methodology or an explicit statement of its release."
Open Datasets: Yes
LLM Response: "The image dataset is from (Arad et al., 2022), which contains 1,000 RGB-spectral pairs. ... Kodak image dataset. http://r0k.us/graphics/kodak/. Accessed: 2023-08-08."
Dataset Splits: Yes
LLM Response: "The validation set consists of 409 images and 330 slices for the test set. ... This dataset is split into the train, valid, and test subsets in the ratio of 18:1:1."
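For readers checking the quoted 18:1:1 ratio, a small sketch of how such a ratio partitions a dataset follows; the 1,000-item size is taken from the Arad et al. figure quoted above, but pairing it with the 18:1:1 ratio (which the paper quotes for a different dataset) is purely illustrative.

```python
def ratio_split(n, ratios=(18, 1, 1)):
    # Partition n items according to integer ratios;
    # any rounding remainder is assigned to the first (train) split.
    total = sum(ratios)
    sizes = [n * r // total for r in ratios]
    sizes[0] += n - sum(sizes)
    return sizes

train, valid, test = ratio_split(1000)  # 18:1:1 of 1,000 -> 900/50/50
```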
Hardware Specification: No
LLM Response: "The paper mentions using 'GPU' for computing proximal solutions ('we use the Python library CuPy (Lib). The reason is that this library allows the creation of GPU kernels...'), but does not specify the exact model or type of GPU, CPU, or other hardware used for the experiments."
Software Dependencies: No
LLM Response: "The paper mentions using Python libraries like CuPy and PyTorch, but does not provide specific version numbers for these or other software dependencies required to replicate the experiment."
Experiment Setup: Yes
LLM Response: "To train FISTA-Net... we use a batch size of 64, optimized using Adam, and with seven hidden layers. ... To train MST++... we use a batch size of 20, optimized using Adam with parameters β1 = 0.9, and β2 = 0.999, and the cosine annealing scheme is adopted for 300 epochs. ... The number of iterations T of the APGM is fixed to T = 1000. The positive constants α1, α2 of APGM, and c, α, p of equations (7), (10) are chosen to be the best for each analyzed combination of functions by cross-validation."
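The quoted setup adopts Adam with a cosine annealing schedule over 300 epochs but does not quote the initial or minimum learning rates, so the values below are assumptions. A minimal sketch of the standard cosine annealing formula (the same one PyTorch's CosineAnnealingLR implements):

```python
import math

def cosine_annealing_lr(epoch, total_epochs=300, lr_max=4e-4, lr_min=1e-6):
    # Standard cosine annealing: decays lr_max -> lr_min over total_epochs.
    # lr_max and lr_min are assumed values, not quoted from the paper.
    return lr_min + 0.5 * (lr_max - lr_min) * (
        1.0 + math.cos(math.pi * epoch / total_epochs))

# Learning rate at every epoch of the 300-epoch schedule.
lrs = [cosine_annealing_lr(e) for e in range(301)]
```

In practice this schedule would be attached to an Adam optimizer (β1 = 0.9, β2 = 0.999 as quoted) via the framework's scheduler API rather than computed by hand.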