Adaptive Sampling of k-Space in Magnetic Resonance for Rapid Pathology Prediction
Authors: Chen-Yu Yen, Raghav Singhal, Umang Sharma, Rajesh Ranganath, Sumit Chopra, Lerrel Pinto
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To understand the performance of ASMR, we run experiments on the FASTMRI dataset (Mathieu et al., 2013; Zhao et al., 2021), consisting of volumetric brain, knee, and abdominal scans. Our experiments reveal the following findings: 1. ASMR outperforms state-of-the-art non-adaptive sampling patterns such as EMRT (Singhal et al., 2023) on 6 out of 8 classification tasks. Compared to learned probabilistic sampling patterns like LOUPE (Bahadir et al., 2019) and DPS, ASMR achieves an improvement of at least 2.5% in the AUC metric on 7 out of 8 tasks (see Figure 5). |
| Researcher Affiliation | Academia | Chen-Yu Yen*, Raghav Singhal*, Umang Sharma, Rajesh Ranganath, Sumit Chopra, Lerrel Pinto (New York University). Correspondence to: Chen-Yu Yen <chenyu.yen@nyu.edu>, Raghav Singhal <rsinghal@nyu.edu>. |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code can be found at adaptive-sampling-mr.github.io. |
| Open Datasets | Yes | To understand the performance of ASMR, we run experiments on the FASTMRI dataset (Mathieu et al., 2013; Zhao et al., 2021), consisting of volumetric brain, knee, and abdominal scans. Knee scans: For the knee dataset, we use the FASTMRI knee dataset (Zbontar et al., 2018) with slice-level labels provided by Zhao et al. (2021). A hedged data-loading sketch for these volumes appears below the table. |
| Dataset Splits | Yes | Slice-level splits along with positivity rates are provided in Table 1. Table 1. Number of slices in the training, validation, and test splits for each task. The number in brackets is the percentage of slices with a pathology. (The table provides train, validation, and test slice counts for each dataset.) |
| Hardware Specification | No | The computational requirements for this work were supported in part by the resources and personnel of the NYU Langone High Performance Computing (HPC) Core. The paper credits this HPC core but does not specify any particular hardware, such as CPU/GPU models, memory, or configurations. |
| Software Dependencies | No | For this work, we build our agent on top of an open-source implementation of PPO (Huang et al., 2022). The paper mentions using PPO and lists hyperparameters but does not provide version numbers for any software, libraries, or frameworks used. |
| Experiment Setup | Yes | Hyperparameters used to train the policy are provided in Table 5. Table 5. Hyperparameters of our agent (optimizer: AdamW; learning rate: 1e-4; weight decay: 1e-4; discount factor: 0.99; GAE lambda: 0.95; clip ratio: 0.2; entropy cost: 0.01; gradient norm clipping: 0.5; value function coefficient: 0.5; parallelized rollouts: 128). These values are restated as a configuration sketch below the table. |
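
For the Open Datasets row: the public fastMRI volumes are distributed as HDF5 files, so reading slices for a downstream classifier can look like the sketch below. This is an illustration only, assuming the public fastMRI file layout (a `kspace` dataset per volume); the file name is hypothetical, and the slice-level pathology labels of Zhao et al. (2021) ship separately from these files.

```python
# Minimal sketch: read one fastMRI HDF5 volume and return its k-space slices.
# Assumes the public fastMRI layout, where each file holds a "kspace" dataset
# shaped (num_slices, num_coils, H, W) for multi-coil scans (no coil axis for
# single-coil files). The file name below is hypothetical.
import h5py
import numpy as np

def load_kspace(h5_path: str) -> np.ndarray:
    """Return the complex k-space volume, one entry per 2D slice."""
    with h5py.File(h5_path, "r") as f:
        return f["kspace"][()]

kspace = load_kspace("file1000001.h5")  # hypothetical volume name
print(kspace.shape, kspace.dtype)       # e.g. (num_slices, coils, H, W), complex64
```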
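
For the Experiment Setup row: a minimal sketch that collects the Table 5 values into a CleanRL-style (Huang et al., 2022) PPO configuration object. The numeric values come directly from the paper's Table 5; the dataclass and field names are assumptions for illustration, not the authors' code.

```python
# Sketch of the agent's optimization settings from Table 5. Values are from
# the paper; the names below are assumed, CleanRL-style conventions.
from dataclasses import dataclass

@dataclass
class PPOConfig:
    learning_rate: float = 1e-4   # Table 5: learning rate (AdamW)
    weight_decay: float = 1e-4    # Table 5: weight decay
    gamma: float = 0.99           # Table 5: discount factor
    gae_lambda: float = 0.95     # Table 5: GAE lambda
    clip_coef: float = 0.2        # Table 5: clip ratio
    ent_coef: float = 0.01        # Table 5: entropy cost
    max_grad_norm: float = 0.5    # Table 5: gradient norm clipping
    vf_coef: float = 0.5          # Table 5: value function coefficient
    num_envs: int = 128           # Table 5: parallelized rollouts

cfg = PPOConfig()
# The optimizer row maps to, e.g., torch.optim.AdamW(agent.parameters(),
# lr=cfg.learning_rate, weight_decay=cfg.weight_decay).
```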