Active Deep Probabilistic Subsampling
Authors: Hans Van Gorp, Iris Huijben, Bastiaan S Veeling, Nicola Pezzotti, Ruud J. G. Van Sloun
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate that A-DPS improves over DPS for MNIST classification at high subsampling rates. Moreover, we demonstrate strong performance in active acquisition Magnetic Resonance Image (MRI) reconstruction, outperforming DPS and other deep learning methods. |
| Researcher Affiliation | Collaboration | 1Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands. 2Department of Computer Science, University of Amsterdam, Amsterdam, The Netherlands. 3Department of Computer Science, Eindhoven University of Technology, Eindhoven, The Netherlands. 4Philips Research, Eindhoven, The Netherlands. |
| Pseudocode | Yes | Algorithm 1 (A-DPS). Input: number of acquisition steps I. Data: input signal x and associated task s. Initialize c_0; d, l, i = 0. while i < I do: φ_i = g_κ(c_i); d += DPS(φ_i); y_i = d ⊙ x; ŝ_i, c_{i+1} = f_θ(y_i); l += L(ŝ_i, s); i += 1; end while |
| Open Source Code | Yes | Our code is publicly available.1 1https://github.com/IamHuijben/Deep-Probabilistic-Subsampling |
| Open Datasets | Yes | MNIST database (LeCun et al., 1998), consisting of 70,000 grayscale images of 28×28 pixels of handwritten digits between 0 and 9. We split the original 60,000 training images into 50,000 training and 10,000 validation images. We keep the original 10,000 testing examples. We train both DPS top-M and A-DPS to take partial measurements in the pixel-domain at different sampling rates. ... We make use of the NYU fastMRI database of knee MRI volumes (Zbontar et al., 2018). Only the single-coil measurements were selected, from which the outer slices were removed. The resulting data was split into 8,000 training, 2,000 validation, and 3,000 testing MRI slices. |
| Dataset Splits | Yes | We split the original 60,000 training images into 50,000 training and 10,000 validation images. We keep the original 10,000 testing examples. ... The resulting data was split into 8,000 training, 2,000 validation, and 3,000 testing MRI slices. |
| Hardware Specification | No | No specific hardware details (e.g., GPU models, CPU models, memory) used for running experiments are provided in the paper. |
| Software Dependencies | No | The paper mentions 'Adam solver (Kingma & Ba, 2015)' for optimization but does not provide specific software versions for libraries like PyTorch, TensorFlow, or Python itself, which would be necessary for full reproducibility. |
| Experiment Setup | Yes | We employ SGD with the Adam solver (lr = 2e-4, β1 = 0.9, β2 = 0.999, and ϵ = 1e-7) to minimize the loss function. Training was performed on batches of 256 examples for 100 epochs. ... The temperature parameter was fixed to 2. |
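The acquisition loop in the Pseudocode row above can be sketched in plain NumPy. This is a minimal illustration of Algorithm 1's control flow, not the paper's implementation: `g_kappa`, `dps_sample`, `f_theta`, and `task_loss` are hypothetical stand-ins for the learned networks g_κ, the DPS sampling layer, f_θ, and the task loss L.

```python
import numpy as np

N = 8  # signal length (illustrative)
rng = np.random.default_rng(0)

def g_kappa(c):
    # Stand-in for g_kappa: context vector c_i -> sampling logits phi_i.
    return c + rng.normal(0.0, 1e-3, size=N)

def dps_sample(phi, mask):
    # Stand-in for the DPS layer: greedily pick the highest-logit
    # index that has not been sampled yet, returned as a one-hot step.
    phi = np.where(mask > 0, -np.inf, phi)
    step = np.zeros(N)
    step[np.argmax(phi)] = 1.0
    return step

def f_theta(y):
    # Stand-in for f_theta: partial measurement y_i -> (estimate, next context).
    return y.mean(), y

def task_loss(s_hat, s):
    # Stand-in for L: squared error on the task target.
    return (s_hat - s) ** 2

def a_dps(x, s, num_steps):
    """Active acquisition loop of Algorithm 1 (A-DPS), sketched."""
    c = np.zeros(N)                  # c_0: initial context
    d = np.zeros(N)                  # d: accumulated sampling mask
    loss = 0.0
    for _ in range(num_steps):       # while i < I
        phi = g_kappa(c)             # phi_i = g_kappa(c_i)
        d += dps_sample(phi, d)      # d += DPS(phi_i)
        y = d * x                    # y_i = d ⊙ x
        s_hat, c = f_theta(y)        # s_hat_i, c_{i+1} = f_theta(y_i)
        loss += task_loss(s_hat, s)  # l += L(s_hat_i, s)
    return d, loss

x = rng.normal(size=N)
mask, total_loss = a_dps(x, s=0.0, num_steps=3)
print(int(mask.sum()))  # 3 distinct indices acquired after 3 steps
```

Each iteration conditions the next measurement on the context produced from all previous measurements, which is what distinguishes A-DPS from the fixed sampling scheme of DPS.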