Spatio-Angular Convolutions for Super-resolution in Diffusion MRI
Authors: Matthew Lyon, Paul Armitage, Mauricio A Álvarez
Venue: NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the PCCNN performs competitively while using significantly fewer parameters. Moreover, we show that this formulation generalises well to clinically relevant downstream analyses such as fixel-based analysis, and neurite orientation dispersion and density imaging. ... 4 Experiments and Results |
| Researcher Affiliation | Academia | Matthew Lyon University of Manchester matthew.s.lyon@manchester.ac.uk Paul Armitage University of Sheffield p.armitage@sheffield.ac.uk Mauricio A Álvarez University of Manchester mauricio.alvarezlopez@manchester.ac.uk |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code for this work is available at github.com/m-lyon/dmri-pcconv. |
| Open Datasets | Yes | dMRI data from the WU-Minn Human Connectome Project (HCP) [38] were used for training, validation and testing. |
| Dataset Splits | Yes | dMRI data from the WU-Minn Human Connectome Project (HCP) [38] were used for training, validation and testing. ... Models were trained on twenty-seven subjects from the HCP dataset, while three subjects were used for validation during development. |
| Hardware Specification | Yes | Models were trained using 4 NVIDIA A100s with a batch size of 16, and an ℓ1 loss function, for 200,000 iterations using AdamW [23]. |
| Software Dependencies | No | The paper mentions software like 'Ray Tune [20]', 'MRtrix3 [37]', and 'cuDIMOT [17]' but does not provide specific version numbers for them. |
| Experiment Setup | Yes | Models were trained using 4 NVIDIA A100s with a batch size of 16, and an ℓ1 loss function, for 200,000 iterations using AdamW [23]. ... Hyperparameters for the PCCNN were selected through a random grid search with Ray Tune [20]. ... Each PCConv layer or residual PCConv block is followed by a rectified linear unit (ReLU), excluding the final layer. ... Each hypernetwork is composed of two dense layers, each followed by a leaky ReLU with a negative slope of 0.1, and a final dense layer with output size 1 and no subsequent activation. ... The input angular dimension size was determined via q_in ∼ U(q_sample), q_sample = {6, 7, ..., 19, 20}. |
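
For reference, the sketch below wires together the training hyperparameters quoted in the table (batch size 16, ℓ1 loss, AdamW, 200,000 iterations). It is a minimal stand-in, not the authors' implementation: the model, data, and learning rate are placeholders, and the real PCCNN code lives at github.com/m-lyon/dmri-pcconv.

```python
# Hedged sketch of the reported training configuration. The model and
# dataset below are dummies for illustration only; they are NOT the
# authors' PCCNN or the HCP dMRI data.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder network; the actual model uses parametric continuous
# convolution (PCConv) layers with hypernetwork-generated weights.
model = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv3d(16, 1, kernel_size=3, padding=1),
).to(device)

# Dummy low-/high-resolution 3D patches purely so the loop runs.
dataset = TensorDataset(
    torch.randn(64, 1, 16, 16, 16),
    torch.randn(64, 1, 16, 16, 16),
)
loader = DataLoader(dataset, batch_size=16, shuffle=True)  # batch size from the paper

criterion = nn.L1Loss()                                    # ℓ1 loss, as reported
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4) # AdamW; lr is an assumption

max_iters = 200_000                                        # iteration budget from the paper
step = 0
while step < max_iters:
    for lowres, highres in loader:
        if step >= max_iters:
            break
        lowres, highres = lowres.to(device), highres.to(device)
        optimizer.zero_grad()
        loss = criterion(model(lowres), highres)
        loss.backward()
        optimizer.step()
        step += 1
```

The paper additionally reports training on 4 NVIDIA A100s and selecting hyperparameters via random grid search with Ray Tune [20]; neither the multi-GPU setup nor the search is reproduced in this single-device sketch.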