Non-Asymptotic Uncertainty Quantification in High-Dimensional Learning
Authors: Frederik Hoppe, Claudio Mayrink Verdun, Hannah Laus, Felix Krahmer, Holger Rauhut
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the performance of our non-asymptotic confidence intervals through extensive numerical experiments across two settings: (i) the classical debiased LASSO framework, to contrast our non-asymptotic confidence intervals against the asymptotic ones; (ii) the learned framework, where we employ learned estimators, specifically the U-Net [73] as well as the It-Net [18], to reconstruct real-world MR images and quantify uncertainty. |
| Researcher Affiliation | Academia | Frederik Hoppe (RWTH Aachen University, hoppe@mathc.rwth-aachen.de); Claudio Mayrink Verdun (Harvard University, claudioverdun@seas.harvard.edu); Hannah Laus (TU Munich & MCML, hannah.laus@tum.de); Felix Krahmer (TU Munich & MCML, felix.krahmer@tum.de); Holger Rauhut (LMU Munich & MCML, rauhut@math.lmu.de) |
| Pseudocode | Yes | Algorithm 1 Estimation of Confidence Radius |
| Open Source Code | Yes | The code for our findings is available on GitHub: https://github.com/frederikhoppe/UQ_high_dim_learning |
| Open Datasets | Yes | We extend the debiasing approach to model-based deep learning for MRI reconstruction using the U-Net and It-Net on single-coil knee images from the NYU fastMRI dataset [74, 75]. We obtained the data, which we used for conducting the experiments in this paper, from the NYU fastMRI Initiative database (fastmri.med.nyu.edu) [74, 75]. |
| Dataset Splits | Yes | The data is split into training (33370 slices), validation (5346 slices), estimation (1372 slices), and test (100 slices) datasets. |
| Hardware Specification | Yes | The experiments were conducted using PyTorch 1.9 on a desktop with AMD EPYC 7F52 16-Core CPUs and NVIDIA A100 PCIe 40GB GPUs. |
| Software Dependencies | Yes | The experiments were conducted using PyTorch 1.9 on a desktop with AMD EPYC 7F52 16-Core CPUs and NVIDIA A100 PCIe 40GB GPUs. |
| Experiment Setup | Yes | We then train an It-Net [18] with 8 layers, a combination of MS-SSIM [76] and ℓ1-losses, and the Adam optimizer with learning rate 5e-5 for 15 epochs to obtain our reconstruction function X̂. (...) The It-Nets were trained for 15 epochs, and the U-Nets were trained for 20 epochs, both with batch size 40. (...) All It-Nets and U-Nets are trained with a combination of the MS-SSIM-loss [76], the ℓ1-loss, and the Adam optimizer with a learning rate of 5e-5, epsilon of 1e-4, and weight decay parameter 1e-5. |
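The optimizer settings quoted in the setup row can be sketched as follows. This is a minimal illustration assuming PyTorch; the small convolutional model is a hypothetical placeholder, not the actual It-Net/U-Net architectures, which are available in the authors' GitHub repository:

```python
import torch

# Placeholder model standing in for the It-Net/U-Net reconstruction networks.
model = torch.nn.Sequential(
    torch.nn.Conv2d(1, 8, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(8, 1, kernel_size=3, padding=1),
)

# Adam with learning rate 5e-5, epsilon 1e-4, and weight decay 1e-5,
# as reported in the experiment setup.
optimizer = torch.optim.Adam(
    model.parameters(), lr=5e-5, eps=1e-4, weight_decay=1e-5
)

print(optimizer.defaults["lr"])  # 5e-05
```

The reported loss would combine MS-SSIM and ℓ1 terms on top of this optimizer; that combination is not reproduced here since the weighting between the two losses is not quoted in the table.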