Bidirectional Recurrence for Cardiac Motion Tracking with Gaussian Process Latent Coding

Authors: Jiewen Yang, Yiqun Lin, Bin Pu, Xiaomeng Li

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To evaluate the performance of GPTrack in cardiac motion tracking, we conduct experiments on 3D echocardiogram videos [17, 18] and 4D temporal MRI images [19]. Results in Tables 1, 2 and 3 show that GPTrack improves motion tracking accuracy by a clear margin, without substantially increasing the computational cost in comparison to other state-of-the-art methods.
Researcher Affiliation | Academia | Jiewen Yang, Yiqun Lin, Bin Pu, Xiaomeng Li (The Hong Kong University of Science and Technology). Emails: {jyangcu, ylindw}@connect.ust.hk, {eebinpu, eexmli}@ust.hk
Pseudocode | No | The paper describes its methodology using mathematical formulations and architectural diagrams (Figures 2 and 3), but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code is available at https://github.com/xmed-lab/GPTrack.
Open Datasets | Yes | Cardiac UDA [17]: collected from two medical centers, the dataset consists of 314 echocardiogram videos from patients. CAMUS [18]: provides pixel-level annotations for the left ventricle, myocardium, and left atrium in the apical two-chamber view... ACDC [19]: consists of 100 4D temporal cardiac MRI cases.
Dataset Splits | Yes | For Cardiac UDA, we split the dataset 8:2 for training and validation; during testing, we report results on 10 fully annotated videos. For CAMUS [18], videos without annotation are used only for training, while the remaining 450 annotated videos are randomly split 300/50/100 for training, validation, and testing (see the split sketch after this table).
Hardware Specification | Yes | For all experiments, we use an Intel(R) Xeon(R) Platinum 8375C CPU with one RTX 3090 GPU for both training and inference.
Software Dependencies | No | No library or framework versions are stated; the paper only reports optimizer settings, e.g. "We trained the model using the Adam optimizer with betas equal to 0.9 and 0.99."
Experiment Setup | Yes | We trained the model using the Adam optimizer with betas equal to 0.9 and 0.99. The training batch size was set to 1. We trained for a total of 1000 epochs with an initial learning rate of 5e-4, decayed by a factor of 0.5 every 50 epochs. During training, for Cardiac UDA [17] and CAMUS [18], we resized each frame to 384×384 and then randomly cropped it to 256×256; all frames were normalized to [0, 1]. For temporal augmentation of datasets [17, 18], we randomly selected 32 frames from an echocardiogram video with a sampling ratio of either 1 or 2. For ACDC [19], we resampled all scans to a voxel spacing of 1.5×1.5×3.15 mm, cropped them to 128×128×32, and normalized image intensities to [-1, 1]. For spatial data augmentation on all datasets, we randomly applied flipping, rotation, and Gaussian blurring (see the training sketch after this table).
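
The split ratios in the Dataset Splits row could be mirrored with short helpers. A minimal sketch, assuming a plain seeded shuffle; the function names and the seed are hypothetical, since the paper reports only the ratios (8:2 for Cardiac UDA, 300/50/100 for the 450 annotated CAMUS videos), not the actual split indices:

```python
# Hypothetical split helpers mirroring the reported ratios; the seed and
# shuffling policy are assumptions, only the ratios come from the paper.
import random

def split_cardiac_uda(video_ids, seed=0):
    """8:2 train/validation split of Cardiac UDA video IDs."""
    ids = list(video_ids)
    random.Random(seed).shuffle(ids)
    cut = int(0.8 * len(ids))
    return ids[:cut], ids[cut:]  # (train, val); 10 fully annotated videos are reserved for testing

def split_camus(annotated_ids, seed=0):
    """300/50/100 train/val/test split of the 450 annotated CAMUS videos."""
    ids = list(annotated_ids)
    random.Random(seed).shuffle(ids)
    return ids[:300], ids[300:350], ids[350:450]
```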
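
The Experiment Setup row can likewise be read as a concrete training configuration. A minimal PyTorch sketch under stated assumptions: the stand-in convolutional model, random tensors, and MSE loss are placeholders, not the authors' GPTrack network or objective; only the hyperparameters (Adam with betas 0.9/0.99, initial lr 5e-4, halving every 50 epochs, batch size 1, 1000 epochs, 256×256 crops normalized to [0, 1]) come from the text above:

```python
# Sketch of the reported optimizer, learning-rate schedule, and loop shape.
# The tiny model and random data are placeholders, NOT GPTrack itself.
import torch
import torch.nn as nn
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR

model = nn.Conv2d(1, 2, kernel_size=3, padding=1)       # placeholder for GPTrack
optimizer = Adam(model.parameters(), lr=5e-4, betas=(0.9, 0.99))
scheduler = StepLR(optimizer, step_size=50, gamma=0.5)  # halve the LR every 50 epochs

for epoch in range(1000):                  # 1000 epochs, batch size 1
    frames = torch.rand(1, 1, 256, 256)    # one 256x256 crop, intensities in [0, 1]
    target = torch.rand(1, 2, 256, 256)    # stand-in for a 2-channel motion field
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(frames), target)  # placeholder loss
    loss.backward()
    optimizer.step()
    scheduler.step()
```

In a faithful reproduction, the random tensors would be replaced by the described pipeline: resize to 384×384, random-crop to 256×256, sample 32 frames at stride 1 or 2, and apply random flipping, rotation, and Gaussian blurring.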