CCQ: Cross-Class Query Network for Partially Labeled Organ Segmentation
Authors: Xuyang Liu, Bingbing Wen, Sibei Yang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiment results demonstrate that CCQ outperforms all the state-of-the-art models on the MOTS dataset, which consists of seven organ and tumor segmentation tasks. |
| Researcher Affiliation | Academia | School of Information Science and Technology, ShanghaiTech University; Information School, University of Washington; Shanghai Engineering Research Center of Intelligent Vision and Imaging. liuxy15@shanghaitech.edu.cn, bingbw@uw.edu, yangsb@shanghaitech.edu.cn |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/Yang-007/CCQ.git |
| Open Datasets | Yes | We evaluate the proposed CCQ on the large-scale, partially-labeled MOTS (Zhang et al. 2021) dataset, which consists of seven 3D medical image segmentation datasets of abdominal organs and tumors, including LiTS (Bilic et al. 2019), KiTS (Heller et al. 2019), Colon (Simpson et al. 2019), Lung (Simpson et al. 2019), Spleen (Simpson et al. 2019), Pancreas (Simpson et al. 2019) and Hepatic Vessel (Simpson et al. 2019) from the Medical Segmentation Decathlon (MSD) (Simpson et al. 2019). |
| Dataset Splits | No | For the main experiment, the paper only specifies a training and testing split (920 scans for training, 235 for testing) and does not mention a validation split for these main results. A validation split is only mentioned for the ablation study. |
| Hardware Specification | Yes | All models are trained in a workstation with 4 Tesla V100 GPUs. |
| Software Dependencies | No | The paper mentions using SGD as an optimizer but does not specify software versions for libraries like PyTorch, TensorFlow, or Python itself, which are necessary for reproducibility. |
| Experiment Setup | Yes | The optimizer of stochastic gradient descent (SGD) with a momentum of 0.99 is used to optimize the network. The learning rate is set to 0.01 with 0.9 decay. We randomly obtain a sub-volume of size 64 × 192 × 192 from every input image in the training stage and use a sliding window of the same size in the prediction stage. (See the sketch below the table.) |
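The Experiment Setup row quotes SGD with momentum 0.99, an initial learning rate of 0.01 with 0.9 decay, and 64 × 192 × 192 sub-volumes. Below is a minimal sketch of how such a configuration is commonly wired up in PyTorch. The model stand-in, the iteration budget `max_iters`, and the reading of "0.9 decay" as a polynomial ("poly") learning-rate schedule are assumptions, not details confirmed by the paper.

```python
# Hedged sketch of the quoted training setup, assuming PyTorch.
# Only SGD with momentum 0.99, base lr 0.01, a 0.9 decay factor, and the
# 64 x 192 x 192 crop/window size come from the paper; everything else
# (placeholder model, iteration budget, poly-schedule reading) is assumed.
import torch

model = torch.nn.Conv3d(1, 8, kernel_size=3, padding=1)  # placeholder for the CCQ network

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.99)

max_iters = 1000  # hypothetical; the paper does not state the iteration budget

def poly_lr(it: int, base_lr: float = 0.01, power: float = 0.9) -> float:
    """Polynomial ('poly') decay, one common reading of 'lr 0.01 with 0.9 decay'."""
    return base_lr * (1.0 - it / max_iters) ** power

for it in range(max_iters):
    for group in optimizer.param_groups:
        group["lr"] = poly_lr(it)  # update lr before each step
    # Training step on a random 64 x 192 x 192 sub-volume would go here;
    # at test time, prediction uses a sliding window of the same size.
    # loss = criterion(model(subvolume), target)
    # loss.backward(); optimizer.step(); optimizer.zero_grad()
```

The poly schedule is shown because it is the decay rule most often paired with this exact optimizer setting in 3D medical segmentation pipelines; if the authors meant a different rule (e.g., step decay by 0.9), only `poly_lr` would need to change.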