Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

OADAT: Experimental and Synthetic Clinical Optoacoustic Data for Standardized Image Processing

Authors: Firat Ozdemir, Berkan Lafci, Xose Luis Dean-Ben, Daniel Razansky, Fernando Perez-Cruz

TMLR 2023 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Here, we provide experimental data and simulations of forearm datasets as well as benchmark networks aiming at facilitating the development of new image processing algorithms and benchmarking. These Experimental and Synthetic Clinical Optoacoustic Data (OADAT) include: (i) large and varied clinical and simulated forearm datasets with paired subsampled or limited view image reconstruction counterparts, (ii) raw signal acquisition data of each such image reconstruction, (iii) definition of 44 experiments with gold standards focusing on the aforementioned OA challenges, (iv) pretrained model weights of the networks used for each task, and (v) user-friendly scripts to load and evaluate the networks on our datasets.
Researcher Affiliation | Academia | (1) Swiss Data Science Center, ETH Zurich and EPFL, Zurich, Switzerland; (2) Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Switzerland; (3) Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland; (4) Institute for Machine Learning, Department of Computer Science, ETH Zurich, Switzerland
Pseudocode | No | The paper describes its methods and model architecture (e.g., a modified UNet) but does not provide any structured pseudocode or algorithm blocks in the main text. Details are deferred to appendices not included in the provided input text.
Open Source Code | Yes | Pretrained model weights and various scripts to train and evaluate mod UNet are available at https://renkulab.io/gitlab/firat.ozdemir/oadat-evaluate.
Open Datasets | Yes | Link to the datasets: hdl.handle.net/20.500.11850/551512. Repository for accessing and reading the datasets: github.com/berkanlafci/oadat.
Dataset Splits | Yes | Out of the nine volunteers in MSFD, we use five for training (IDs: 2, 5, 6, 7, 9), one for validation (ID: 10), and three for testing (IDs: 11, 14, 15). Out of the 14 volunteers in SWFD, we use eight for training (IDs: 1, 2, 3, 4, 5, 6, 7, 8), one for validation (ID: 9), and five for testing (IDs: 10, 11, 12, 13, 14). Out of the 20k slices in SCD, we use the first 14k for training, the following 1k for validation, and the last 5k for testing. For each experiment conducted on OADAT-mini, we use the first 75 samples for training, the next 5 for validation, and the last 20 for quantitative evaluation.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for the experiments.
Software Dependencies | No | The paper mentions a 'Python module for OA reconstruction' and a 'Python module for acoustic map simulation', and that the architecture is 'based on UNet', but it does not specify any software names with version numbers (e.g., Python 3.8, PyTorch 1.9, TensorFlow 2.x).
Experiment Setup | No | The paper describes the 'mod UNet' architecture with some design choices such as 'attention gates', 'residual convolutional blocks with batch normalization', '2D bilinear upsampling', and starting with '32 convolutional filters'. However, it lacks specific training hyperparameters such as learning rate, batch size, number of epochs, optimizer type, or loss function in the main text. It states 'Full schematic as well as other implementation details are discussed in Appendix C', which is not provided.
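The Dataset Splits row above gives fully deterministic splits, so they can be reproduced without the authors' scripts. The sketch below encodes the quoted volunteer IDs and slice ranges; the dictionary names and the `scd_split_indices` helper are illustrative and not part of the official oadat repository.

```python
# Volunteer-level splits quoted in the Dataset Splits row (IDs from the excerpt).
MSFD_SPLITS = {
    "train": [2, 5, 6, 7, 9],
    "val": [10],
    "test": [11, 14, 15],
}
SWFD_SPLITS = {
    "train": [1, 2, 3, 4, 5, 6, 7, 8],
    "val": [9],
    "test": [10, 11, 12, 13, 14],
}

def scd_split_indices(n_slices=20_000):
    """SCD is split by slice order: first 14k train, next 1k val, last 5k test."""
    return {
        "train": range(0, 14_000),
        "val": range(14_000, 15_000),
        "test": range(15_000, n_slices),
    }
```

A quick sanity check confirms the splits are disjoint and cover the stated counts, e.g. `len(scd_split_indices()["test"]) == 5000`.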
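The Experiment Setup row quotes '2D bilinear upsampling' as one of the few documented mod UNet design choices. The pure-Python sketch below shows what that decoder step computes; it is a stand-in for a framework layer, not the authors' implementation, and the corner-aligned sampling convention is an assumption.

```python
def bilinear_upsample_2x(img):
    """2x bilinear upsampling of a 2D grid (list of lists of floats).

    Illustrative only: mimics a corner-aligned bilinear interpolation layer,
    as might be used in the mod UNet decoder described in the row above.
    """
    h, w = len(img), len(img[0])
    H, W = 2 * h, 2 * w
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        # Map output row back to input space (corner-aligned convention).
        y = i * (h - 1) / (H - 1) if H > 1 else 0.0
        y0 = min(int(y), h - 2) if h > 1 else 0
        dy = y - y0
        for j in range(W):
            x = j * (w - 1) / (W - 1) if W > 1 else 0.0
            x0 = min(int(x), w - 2) if w > 1 else 0
            dx = x - x0
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            # Weighted average of the four nearest input pixels.
            out[i][j] = ((1 - dy) * (1 - dx) * img[y0][x0]
                         + (1 - dy) * dx * img[y0][x1]
                         + dy * (1 - dx) * img[y1][x0]
                         + dy * dx * img[y1][x1])
    return out
```

Upsampling `[[0, 1], [2, 3]]` yields a 4x4 grid whose corners preserve the original values, which is the defining property of corner-aligned bilinear interpolation.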