Capturing implicit hierarchical structure in 3D biomedical images with self-supervised hyperbolic representations

Authors: Joy Hsu, Jeffrey Gu, Gong-Her Wu, Wah Chiu, Serena Yeung

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present experiments on both synthetic data and biomedical data to validate our hypothesis. We evaluate our method quantitatively on both synthetic 3D datasets simulating biological image data as well as the real-world Brain Tumor Segmentation (BraTS) tumor segmentation dataset.
Researcher Affiliation | Academia | Joy Hsu, Department of Computer Science, Stanford University (joycj@stanford.edu); Jeffrey Gu, ICME, Stanford University (jeffgu@stanford.edu); Gong-Her Wu, Department of Bioengineering, Stanford University (wukon@stanford.edu); Wah Chiu, Department of Bioengineering, Stanford University (wahc@stanford.edu); Serena Yeung, Department of Biomedical Data Science, Stanford University (syyeung@stanford.edu)
Pseudocode | No | The paper describes the algorithms and framework in text and with diagrams, but does not include any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not contain an explicit statement or link to open-source code for the described methodology.
Open Datasets | Yes | We evaluate our method quantitatively on both synthetic 3D datasets simulating biological image data as well as the real-world Brain Tumor Segmentation (BraTS) tumor segmentation dataset. The BraTS 2019 dataset is a public, well-established benchmark dataset containing 3D MRI scans of brain tumors and voxel-level ground truth annotations of tumor segmentation masks [Menze et al., 2014, Bakas et al., 2017, 2018].
Dataset Splits | Yes | For the synthetic data: "Our dataset consists of 120 total volumes, which we split into 80 training, 20 validation, and 20 test examples." For BraTS: "There are 259 high grade glioma (HGG) labelled training examples, which we split into 180 train, 39 validation, and 40 test examples." (A split sketch appears after this table.)
Hardware Specification | No | The paper does not provide specific hardware specifications (e.g., GPU models, CPU models, memory details) used for running the experiments.
Software Dependencies | No | The paper mentions the Adam optimizer [Kingma and Ba, 2014] but does not list the software libraries, frameworks, or version numbers needed to reproduce the experiments.
Experiment Setup | Yes | For all models, the encoder of our variational autoencoder is comprised of four 3D convolutional layers with kernel size 5 of increasing filter depth {16, 32, 64, 128}. The decoder has the same structure, except with decreasing filter depth and a gyroplane convolutional layer as the initial layer. We use β = 1e3 as the weighting factor between L_ELBO and L_triplet and α = 0.2 as the triplet margin. In all experiments, we fix the representation dimension to be d = 2, and show latent dimension ablations in the Appendix. We train our model using the Adam optimizer [Kingma and Ba, 2014]. (An architecture sketch appears after this table.)
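
As a quick illustration of the reported splits, here is a minimal sketch. The ID lists, the fixed seed, and the shuffling strategy are assumptions for illustration; the paper does not describe how the splits were drawn.

```python
# Hedged sketch of the reported dataset splits; the seed and the shuffling
# strategy are assumptions, not details from the paper.
import random

def split_ids(volume_ids, n_train, n_val, n_test, seed=0):
    """Partition volume IDs into train/val/test subsets of the given sizes."""
    ids = list(volume_ids)
    assert len(ids) == n_train + n_val + n_test
    random.Random(seed).shuffle(ids)
    return (ids[:n_train],
            ids[n_train:n_train + n_val],
            ids[n_train + n_val:])

# Synthetic dataset: 120 volumes -> 80 train / 20 val / 20 test
syn_train, syn_val, syn_test = split_ids(range(120), 80, 20, 20)

# BraTS HGG cases: 259 volumes -> 180 train / 39 val / 40 test
brats_train, brats_val, brats_test = split_ids(range(259), 180, 39, 40)
```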
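
The experiment setup also maps naturally onto code. The PyTorch sketch below reflects only the stated encoder shape and loss weighting; the stride, padding, input channel count, and the Euclidean triplet distance are assumptions (the paper computes distances in hyperbolic space and uses a gyroplane convolutional layer that is not sketched here).

```python
# Minimal PyTorch sketch of the stated setup; assumptions are noted inline.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Four 3D conv layers, kernel size 5, filter depths {16, 32, 64, 128}.
    Stride, padding, and the single input channel are assumptions."""
    def __init__(self, in_channels=1, latent_dim=2):
        super().__init__()
        layers, prev = [], in_channels
        for depth in (16, 32, 64, 128):
            layers += [nn.Conv3d(prev, depth, kernel_size=5, stride=2, padding=2),
                       nn.ReLU(inplace=True)]
            prev = depth
        self.conv = nn.Sequential(*layers)
        self.fc_mu = nn.LazyLinear(latent_dim)      # posterior mean
        self.fc_logvar = nn.LazyLinear(latent_dim)  # posterior log-variance

    def forward(self, x):
        h = self.conv(x).flatten(1)
        return self.fc_mu(h), self.fc_logvar(h)

# The decoder mirrors the encoder with decreasing filter depths and the
# paper's gyroplane convolutional layer as its first layer (not sketched).

def total_loss(elbo_loss, anchor, positive, negative, beta=1e3, alpha=0.2):
    """Assumed combination L = L_ELBO + beta * L_triplet with margin alpha.
    A Euclidean triplet loss stands in for the paper's hyperbolic distances."""
    triplet = F.triplet_margin_loss(anchor, positive, negative, margin=alpha)
    return elbo_loss + beta * triplet

# Per the paper, training uses the Adam optimizer, e.g.:
# optimizer = torch.optim.Adam(model.parameters())
```

With the representation dimension fixed at d = 2, the latent space stays directly visualizable, which matches the paper's stated choice (with latent dimension ablations deferred to its Appendix).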