Image-to-Image Regression with Distribution-Free Uncertainty Quantification and Applications in Imaging
Authors: Anastasios N. Angelopoulos, Amit Pal Kohli, Stephen Bates, Michael I. Jordan, Jitendra Malik, Thayer Alshaabi, Srigokul Upadhyayula, Yaniv Romano
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our procedure on three image-to-image regression tasks: quantitative phase microscopy, accelerated magnetic resonance imaging, and super-resolution transmission electron microscopy of a Drosophila melanogaster brain, and provide accompanying open source code. [...] The following sequence of experiments applies our methods to several challenging settings in biological imaging. |
| Researcher Affiliation | Collaboration | 1Department of Electrical Engineering and Computer Science, University of California, Berkeley 2Advanced Bioimaging Center, Department of Molecular and Cell Biology, University of California, Berkeley 3Chan Zuckerberg Biohub, San Francisco, CA 4Departments of Electrical and Computer Engineering and of Computer Science, Technion Israel Institute of Technology. |
| Pseudocode | Yes | Algorithm 1 summarizes this process. [...] Algorithm 2: Pseudocode for computing λ̂ |
| Open Source Code | Yes | Our accompanying codebase allows for easy application of these methods to any imaging problem, and the exact reproduction of the aforementioned examples. See the code at this Github link: https://github.com/aangelopoulos/im2im-uq. |
| Open Datasets | Yes | We use the Berkeley Single Cell Computational Microscopy (BSCCM) dataset (Pinkard, 2021) [...] We use the Fast MRI dataset for this example (Zbontar et al., 2018b) [...] We use the Janelia Transmission Electron Microscopy Camera Array (TEMCA2) dataset of the Full Adult Fly Brain (Zheng et al., 2018). |
| Dataset Splits | Yes | We use 1800 randomly selected data points with a batch size of 64 to train the model, 100 points for calibration, and 100 points for validation. (Section 3.3) [...] 27,993 randomly selected 320x320 pixel coronal knee slices for training the model, 3,474 for the RCPS calibration, and 3,474 for validation. (Section 3.4) [...] roughly 2M images of size 320x320 for training, 25K images for calibration, and 25K images for validation. (Section 3.5) |
| Hardware Specification | No | The paper does not specify the hardware used for running experiments, such as GPU models, CPU types, or memory. |
| Software Dependencies | No | The paper mentions using a 'U-Net' architecture and 'Adam optimizer' but does not provide specific version numbers for any software dependencies like programming languages (e.g., Python), deep learning frameworks (e.g., PyTorch, TensorFlow), or other libraries. |
| Experiment Setup | Yes | In all experiments, we fit the predictor f̂ and the heuristic notions of uncertainty u and l jointly. [...] an 8-layer U-Net (Ronneberger et al., 2015) is used as the base model architecture and trained with an Adam optimizer for 10 epochs. We swept over two learning rates, {0.001, 0.0001}, and chose the learning rate that minimized the point prediction's MSE for each method in each experiment. All images get normalized to the interval [0, 1]. For the softmax heuristic, we discretized the prediction space with K = 50... We choose α = δ = 0.1 for the RCPS procedure in all cases, and adaptively select a grid of 1000 values of λ for each experiment. |
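The RCPS calibration described in the experiment setup (choosing λ̂ with α = δ = 0.1 over a grid of λ values) can be sketched in a few lines. The sketch below is an illustrative simplification, not the paper's implementation: it uses a plain Hoeffding upper confidence bound, a per-image miscoverage risk, and synthetic Gaussian data; the function names (`image_risk`, `calibrate_lambda`) and all numeric settings besides α and δ are hypothetical.

```python
import numpy as np

def image_risk(lower, upper, truth):
    # Per-image risk: fraction of pixels whose true value falls
    # outside the interval [lower, upper].
    return np.mean((truth < lower) | (truth > upper))

def hoeffding_ucb(mean_risk, n, delta):
    # Hoeffding upper confidence bound for a [0, 1]-bounded risk
    # estimated from n calibration images.
    return mean_risk + np.sqrt(np.log(1.0 / delta) / (2.0 * n))

def calibrate_lambda(preds, l_heur, u_heur, truths, lambdas,
                     alpha=0.1, delta=0.1):
    # Scan lambda from largest (widest intervals) to smallest and keep
    # the last value whose risk UCB stays below alpha. The intervals
    # are [pred - lam * l_heur, pred + lam * u_heur], so risk grows as
    # lambda shrinks.
    n = len(preds)
    lam_hat = max(lambdas)
    for lam in sorted(lambdas, reverse=True):
        risks = [image_risk(p - lam * lo, p + lam * hi, t)
                 for p, lo, hi, t in zip(preds, l_heur, u_heur, truths)]
        if hoeffding_ucb(np.mean(risks), n, delta) >= alpha:
            break
        lam_hat = lam
    return lam_hat

# Synthetic example: 500 calibration "images" of 8x8 pixels with
# Gaussian prediction error (std 0.1) and constant unit heuristics.
rng = np.random.default_rng(0)
truths = rng.normal(size=(500, 8, 8))
preds = truths + 0.1 * rng.normal(size=truths.shape)
ones = np.ones_like(truths)
lam_hat = calibrate_lambda(preds, ones, ones, truths,
                           np.linspace(0.05, 1.0, 20))
```

With this setup the selected λ̂ lands near 0.2, the point where the miscoverage risk plus the Hoeffding margin first approaches α; the monotone scan mirrors the fact that shrinking λ only tightens the intervals and therefore only raises the risk.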