Experimental design for MRI by greedy policy search
Authors: Tim Bakker, Herke van Hoof, Max Welling
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Unexpectedly, our experiments show that a simple greedy approximation of the objective leads to solutions nearly on-par with the more general non-greedy approach. We offer a partial explanation for this phenomenon rooted in greater variance in the non-greedy objective’s gradient estimates, and experimentally verify that this variance hampers non-greedy models in adapting their policies to individual MR images. We empirically show that this adaptivity is key to improving subsampling designs. (A minimal sketch of this greedy acquisition objective follows the table.) |
| Researcher Affiliation | Academia | Tim Bakker, University of Amsterdam, t.b.bakker@uva.nl; Herke van Hoof, University of Amsterdam, h.c.vanhoof@uva.nl; Max Welling, University of Amsterdam, CIFAR, m.welling@uva.nl |
| Pseudocode | No | The paper describes procedures but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at: https://github.com/Timsey/pg_mri. |
| Open Datasets | Yes | Datasets: We leverage the NYU fastMRI open database containing a large number of knee and brain volumes for our experiments [47]. |
| Dataset Splits | Yes | This leads to a dataset of 6959 train slices, 1779 validation slices, and 1715 test slices. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware used for experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions software like the U-Net baseline from the fast MRI repository and Weights&Biases tracking software, but does not provide specific version numbers for any key software components or libraries. |
| Experiment Setup | Yes | Models are trained for 50 epochs using a batch size of 16. |
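
The "Research Type" row above summarizes the paper's central finding: a greedy policy, trained on the immediate reconstruction-quality gain of each acquired k-space column, performs nearly on par with a non-greedy policy trained on full-trajectory returns. The sketch below is only a toy illustration of that greedy REINFORCE objective, not the authors' implementation (see the linked pg_mri repository for the real PyTorch code): the linear softmax policy, the 32-column "scan", the quality proxy, and all constants are hypothetical stand-ins.

```python
# Toy sketch of greedy policy search for subsampling design.
# At each acquisition step the policy scores the unmeasured k-space columns,
# one column is sampled, and the policy is updated with REINFORCE on the
# *immediate* quality gain only (the greedy objective), rather than on the
# return of the full acquisition trajectory. All quantities below are
# hypothetical stand-ins for a real reconstruction pipeline.
import numpy as np

rng = np.random.default_rng(0)

NUM_COLUMNS = 32          # candidate k-space columns (toy resolution)
INITIAL_MEASURED = 4      # columns measured before the policy acts
ACQUISITION_STEPS = 8     # columns added by the policy per "scan"
LEARNING_RATE = 0.05

# Toy "importance" of each column; a real system would instead compute
# reconstruction quality (e.g. SSIM) from a reconstruction network.
column_value = np.sort(rng.random(NUM_COLUMNS))[::-1]

def quality(mask: np.ndarray) -> float:
    """Proxy for reconstruction quality: total value of measured columns."""
    return float(column_value[mask].sum())

# Linear policy: one logit per column, masked so measured columns
# cannot be selected again.
logits = np.zeros(NUM_COLUMNS)

def policy_probs(mask: np.ndarray) -> np.ndarray:
    scores = np.where(mask, -np.inf, logits)
    scores = scores - scores.max()
    p = np.exp(scores)
    return p / p.sum()

for episode in range(2000):
    mask = np.zeros(NUM_COLUMNS, dtype=bool)
    mask[rng.choice(NUM_COLUMNS, size=INITIAL_MEASURED, replace=False)] = True

    for _ in range(ACQUISITION_STEPS):
        probs = policy_probs(mask)
        action = rng.choice(NUM_COLUMNS, p=probs)

        # Greedy reward: immediate gain in the quality proxy from this column.
        before = quality(mask)
        mask[action] = True
        reward = quality(mask) - before

        # REINFORCE update on the immediate reward only; a non-greedy variant
        # would instead weight the gradient by the remaining trajectory return.
        grad = -probs
        grad[action] += 1.0
        logits += LEARNING_RATE * reward * grad

probs = policy_probs(np.zeros(NUM_COLUMNS, dtype=bool))
print("highest-probability columns:", np.argsort(probs)[::-1][:5])
```

Because the update uses only the one-step reward, its gradient estimate avoids the extra variance of a full-trajectory return, which is the mechanism the paper points to when explaining why the greedy model keeps pace with the non-greedy one.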