Experimental Design for Multi-Channel Imaging via Task-Driven Feature Selection

Authors: Stefano B. Blumberg, Paddy J. Slator, Daniel C. Alexander

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments demonstrate the potential of TADRED in diverse imaging applications: several clinically-relevant tasks in magnetic resonance imaging; and remote sensing and physiological applications of hyperspectral imaging. Results show substantial improvement over classical experimental design, two recent application-specific methods within the new paradigm, and state-of-the-art approaches in supervised feature selection.
Researcher Affiliation | Academia | 1) Centre for Artificial Intelligence, Department of Computer Science, University College London; 2) Centre for Medical Image Computing, Department of Computer Science, University College London; 3) Cardiff University Brain Research Imaging Centre and School of Computer Science, Cardiff University
Pseudocode | Yes | Algorithm 1: TADRED Forward & Backward Pass (FBP) in Step t; Algorithm 2: TADRED Optimization
Open Source Code | Yes | Code is available: Code Link. We provide the code: Code Link, which contains the entire source code for our algorithm TADRED.
Open Datasets | Yes | Data used in tables 2, 9 are images from five in-vivo human subjects, and are publicly available, MUDI Organizers (2022); WU-Minn Human Connectome Project (HCP) diffusion data, which is publicly available at www.humanconnectome.org (Test Retest Data Release, release date: Mar 01, 2017), Essen et al. (2013); this is publicly available, Baumgardner et al. (2022).
Dataset Splits | Yes | For every experiment comparing TADRED with baselines, we split the data into training, validation/development, and test sets. This is described in detail in section F. 90%/10% of the training/validation set voxels are used for training and validation, respectively.
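The 90/10 voxel-level split quoted above can be sketched as follows; the helper name, seeding, and shuffling scheme are illustrative assumptions, not the authors' published code:

```python
import numpy as np

def train_val_split(voxels, val_frac=0.1, seed=0):
    """Randomly assign voxels to training and validation sets.

    `voxels` is an (N, C) array of per-voxel feature vectors. The default
    val_frac=0.1 mirrors the 90%/10% split described in the review, but
    this function itself is a hypothetical sketch.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(voxels.shape[0])  # random voxel ordering
    n_val = int(round(val_frac * voxels.shape[0]))
    val_idx, train_idx = perm[:n_val], perm[n_val:]
    return voxels[train_idx], voxels[val_idx]
```

A held-out test set would be separated beforehand in the same way, as the paper reports a three-way training/validation/test split.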
Hardware Specification | Yes | Exploratory analysis and development was conducted on a mid-range (as of 2023) machine with an AMD Ryzen Threadripper 2950X CPU and a single Titan V GPU. All experimental results reported in this paper were computed on low-to-mid range (as of 2023) graphics processing units (GPUs): GTX 1080 Ti, Titan Xp, Titan X, Titan V, RTX 2080 Ti.
Software Dependencies | No | The paper describes implementation details and refers to official repositories for baselines but does not list specific version numbers for software libraries or dependencies (e.g., Python, PyTorch, TensorFlow).
Experiment Setup | Yes | Other general hyperparameters are: batch size 1500, learning rate 10^-4 (10^-5 for the experiment in figure 2), ADAM optimizer, and default network weight initialization. The default option for early stopping used 20 epochs for patience (i.e., training stops if validation performance does not improve in 20 epochs). We set the numbers of epochs in the four-phase inner loop training procedure as E1 = 25, E2 = E1 + 10, E3 = E2 + 10.
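The early-stopping rule quoted above (stop when validation performance has not improved for 20 consecutive epochs) can be sketched as a small helper; the class and its interface are illustrative assumptions, not the authors' code:

```python
class EarlyStopper:
    """Stop training once validation loss stalls for `patience` epochs.

    patience=20 matches the default quoted in the experiment setup;
    the implementation is a minimal sketch of that rule.
    """

    def __init__(self, patience=20):
        self.patience = patience
        self.best = float("inf")
        self.epochs_since_improvement = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True to stop training."""
        if val_loss < self.best:
            self.best = val_loss
            self.epochs_since_improvement = 0
        else:
            self.epochs_since_improvement += 1
        return self.epochs_since_improvement >= self.patience
```

In a training loop this would be called once per epoch after evaluating on the validation set, alongside the quoted fixed schedule E1 = 25, E2 = 35, E3 = 45 epochs for the inner-loop phases.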