Learning the optimal Tikhonov regularizer for inverse problems
Authors: Giovanni S. Alberti, Ernesto De Vito, Matti Lassas, Luca Ratti, Matteo Santacesaria
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The results are validated through numerical simulations. |
| Researcher Affiliation | Academia | Giovanni S. Alberti, MaLGa Center, Department of Mathematics, University of Genoa, Italy (giovanni.alberti@unige.it); Ernesto De Vito, MaLGa Center, Department of Mathematics, University of Genoa, Italy (ernesto.devito@unige.it); Matti Lassas, Department of Mathematics and Statistics, University of Helsinki, Finland (matti.lassas@helsinki.fi); Luca Ratti, MaLGa Center, Department of Mathematics, University of Genoa, Italy (luca.ratti@unige.it); Matteo Santacesaria, MaLGa Center, Department of Mathematics, University of Genoa, Italy (matteo.santacesaria@unige.it) |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | All the codes are available at https://github.com/LearnTikhonov/Code |
| Open Datasets | No | The paper describes how synthetic data is generated based on statistical models ('We define a statistical model both for ε and for x, which we use for the generation of the training data.'), but it does not provide a direct link, DOI, or specific repository name for accessing a pre-existing, publicly available dataset that was used. The data is generated for the experiments. |
| Dataset Splits | No | The paper does not provide specific percentages or sample counts for train/validation/test splits. It describes generating samples for different sample sizes m for training, but not a fixed split from a dataset. The paper states in Section 5.2: 'We compute the mean squared errors L(θ), L(θ̂_S) and L(θ̂_U) according to the definition of L, thus avoiding the use of a test set.' |
| Hardware Specification | Yes | All computations were implemented with Matlab R2019a, running on a laptop with 16GB of RAM and 2.2 GHz Intel Core i7 CPU. |
| Software Dependencies | Yes | All computations were implemented with Matlab R2019a... |
| Experiment Setup | Yes | We consider a noise level of 5%, namely, the standard deviation σ is set to 5% of the peak value of the average signal. In different tests, we employ different white noise processes with different distributions, including the Gaussian (cf. Example 2.7) and the uniform distributions. ... In order to discretize the described problem, we fix N > 0 and approximate the space X by means of the N-dimensional space generated by a 1D-pixel basis. ... The sample size ranges between 3·10³ and 3·10⁵. ... Fix the discretization size N, define the optimal regularizer R_θ. ... We repeat the same experiment 30 times, with different training samples for each size m, averaging over the repetitions. ... N = 64 and N = 256 |
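The setup quoted in the table (synthetic 1D signals on an N-pixel basis, noise standard deviation at 5% of the peak of the average signal, statistics estimated from m training samples) can be sketched roughly as follows. This is a hedged illustration in Python, not the authors' Matlab code: the bump-shaped signal model, the Gaussian blur standing in for the forward operator, and the plug-in of empirical mean and covariance into a quadratic (generalized Tikhonov) regularizer are all placeholder assumptions chosen to make the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64     # discretization size (the paper reports N = 64 and N = 256)
m = 3000   # training sample size (the paper ranges from 3*10^3 to 3*10^5)

t = np.linspace(0.0, 1.0, N)

def draw_signals(k):
    """Placeholder signal model: Gaussian bumps with random center and width."""
    c = rng.uniform(0.3, 0.7, size=k)
    w = rng.uniform(0.05, 0.15, size=k)
    return np.exp(-((t[None, :] - c[:, None]) ** 2) / (2.0 * w[:, None] ** 2))

X_train = draw_signals(m)

# Noise level: sigma set to 5% of the peak value of the average signal.
sigma = 0.05 * np.abs(X_train.mean(axis=0)).max()

# Placeholder forward operator: a row-normalized Gaussian blur matrix.
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2.0 * 0.02 ** 2))
A /= A.sum(axis=1, keepdims=True)

# "Learn" a quadratic regularizer from training data via empirical statistics.
mu = X_train.mean(axis=0)
Sigma = np.cov(X_train, rowvar=False) + 1e-6 * np.eye(N)  # ridge for invertibility
Sigma_inv = np.linalg.inv(Sigma)

def reconstruct(y):
    """Tikhonov reconstruction with the learned affine-quadratic regularizer:
       x_hat = argmin_x ||A x - y||^2 / sigma^2 + (x - mu)^T Sigma^{-1} (x - mu)."""
    lhs = A.T @ A / sigma**2 + Sigma_inv
    rhs = A.T @ y / sigma**2 + Sigma_inv @ mu
    return np.linalg.solve(lhs, rhs)

# Evaluate on a fresh signal corrupted by 5% Gaussian white noise.
x_true = draw_signals(1)[0]
y = A @ x_true + sigma * rng.standard_normal(N)
x_hat = reconstruct(y)
mse = np.mean((x_hat - x_true) ** 2)
print(f"MSE of learned-Tikhonov reconstruction: {mse:.4e}")
```

In the paper the experiment is repeated 30 times with fresh training samples for each size m and the errors are averaged; wrapping the script above in such a loop, and varying the noise distribution (Gaussian vs. uniform), reproduces the structure of that protocol.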