Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Experimental design for MRI by greedy policy search

Authors: Tim Bakker, Herke van Hoof, Max Welling

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Unexpectedly, our experiments show that a simple greedy approximation of the objective leads to solutions nearly on-par with the more general non-greedy approach. We offer a partial explanation for this phenomenon rooted in greater variance in the non-greedy objective's gradient estimates, and experimentally verify that this variance hampers non-greedy models in adapting their policies to individual MR images. We empirically show that this adaptivity is key to improving subsampling designs.
Researcher Affiliation | Academia | Tim Bakker, University of Amsterdam, EMAIL; Herke van Hoof, University of Amsterdam, EMAIL; Max Welling, University of Amsterdam, CIFAR, EMAIL
Pseudocode | No | The paper describes procedures but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at: https://github.com/Timsey/pg_mri.
Open Datasets | Yes | Datasets: We leverage the NYU fast MRI open database containing a large number of knee and brain volumes for our experiments [47]
Dataset Splits | Yes | This leads to a dataset of 6959 train slices, 1779 validation slices, and 1715 test slices.
Hardware Specification | No | The paper does not provide specific details regarding the hardware used for experiments, such as GPU or CPU models.
Software Dependencies | No | The paper mentions software like the U-Net baseline from the fast MRI repository and Weights&Biases tracking software, but does not provide specific version numbers for any key software components or libraries.
Experiment Setup | Yes | Models are trained for 50 epochs using a batch size of 16.