Class-Aware Adversarial Transformers for Medical Image Segmentation
Authors: Chenyu You, Ruihan Zhao, Fenglin Liu, Siyuan Dong, Sandeep Chinchali, Ufuk Topcu, Lawrence Staib, James Duncan
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments demonstrate that CASTformer dramatically outperforms previous state-of-the-art transformer-based approaches on three benchmarks, obtaining 2.54%-5.88% absolute improvements in Dice over previous models. |
| Researcher Affiliation | Academia | Chenyu You¹ Ruihan Zhao² Fenglin Liu³ Siyuan Dong¹ Sandeep Chinchali² Ufuk Topcu² Lawrence Staib¹ James S. Duncan¹ — ¹Yale University, ²UT Austin, ³University of Oxford |
| Pseudocode | No | No pseudocode or algorithm block was found. |
| Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See Sections 4 and supplemental material. |
| Open Datasets | Yes | Datasets. We experiment on multiple challenging benchmark datasets: Synapse¹, LiTS, and MP-MRI. More details can be found in Appendix ??. ¹https://www.synapse.org/#!Synapse:syn3193805/wiki/217789 |
| Dataset Splits | Yes | Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Sections 4 and supplemental material. |
| Hardware Specification | Yes | We train all models on a single NVIDIA GeForce RTX 3090 GPU with 24GB of memory. |
| Software Dependencies | Yes | All our experiments are implemented in PyTorch 1.7.0. |
| Experiment Setup | Yes | We utilize the AdamW optimizer [90] in all our experiments. For training our generator and discriminator, we use a learning rate of 5e-4 with a batch size of 6, and train each model for 300 epochs for all datasets. We set the sampling number n on each feature map and the total iterative number M as 16 and 4, respectively. We also adopt the input resolution and patch size P as 224×224 and 14, respectively. We set λ1 = 0.5, λ2 = 0.5, and λ3 = 0.1 in these experiments. |
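The training details quoted in the Experiment Setup row can be collected into a single configuration sketch. This is an illustrative reconstruction, not the authors' code: the dictionary keys and the derived patch count are assumptions based on the hyperparameters reported above.

```python
# Hedged sketch of the reported CASTformer training configuration.
# All values come from the quoted Experiment Setup row; key names are illustrative.
config = {
    "optimizer": "AdamW",        # optimizer cited as [90] in the paper
    "learning_rate": 5e-4,       # shared by generator and discriminator
    "batch_size": 6,
    "epochs": 300,               # same budget for all datasets
    "sampling_number_n": 16,     # samples per feature map
    "iterations_M": 4,           # total iterative number
    "input_resolution": (224, 224),
    "patch_size_P": 14,
    "lambda1": 0.5,
    "lambda2": 0.5,
    "lambda3": 0.1,
}

# With a 224x224 input and 14x14 patches, the transformer sees a
# (224 / 14) x (224 / 14) = 16 x 16 grid, i.e. 256 patch tokens.
h, w = config["input_resolution"]
p = config["patch_size_P"]
num_patches = (h // p) * (w // p)
print(num_patches)  # 256
```

In a PyTorch training loop, these values would typically be passed to `torch.optim.AdamW(model.parameters(), lr=config["learning_rate"])` and to the data loader's `batch_size`; that wiring is assumed here, since the released code is the authoritative source.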