ACL-Net: Semi-supervised Polyp Segmentation via Affinity Contrastive Learning

Authors: Huisi Wu, Wende Xie, Jingyin Lin, Xinrong Guo

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on five benchmark datasets, including Kvasir-SEG, CVC-ClinicDB, CVC-300, ColonDB and ETIS, demonstrate the effectiveness and superiority of our method.
Researcher Affiliation | Academia | College of Computer Science and Software Engineering, Shenzhen University (hswu@szu.edu.cn).
Pseudocode | No | The paper describes the proposed method with textual explanations and mathematical equations (Equations 1 to 14) but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Codes are available at https://github.com/xiewende/ACL-Net.
Open Datasets | Yes | Extensive experiments on five benchmark datasets, including Kvasir-SEG (Jha et al. 2020), CVC-ClinicDB (Bernal et al. 2015), CVC-300 (Vázquez et al. 2017), ColonDB (Bernal, Sánchez, and Vilariño 2012) and ETIS (Silva et al. 2014).
Dataset Splits | No | The paper states that 'a total of 1450 images... are divided into different labeled partition protocols (1/2, 1/4, 1/8) as our semi-supervised training datasets, and all above five datasets will be used in the inference phase'. While this describes the training and inference data, it does not specify a distinct validation split, or its size or proportion, for hyperparameter tuning or early stopping. (A sketch of such a partition protocol follows the table.)
Hardware Specification | Yes | Our proposed method is implemented with the PyTorch framework on a single NVIDIA GeForce RTX 3090 Ti.
Software Dependencies | No | The paper states 'Our proposed method is implemented with the PyTorch framework' but does not give a version number for PyTorch or any other dependency (e.g., CUDA or specific library versions) required for replication.
Experiment Setup | Yes | The initial learning rate is set to 0.001, while the batch size is set to 8. We used a stochastic gradient descent (SGD) optimizer for training with a weight decay of 0.0001. All image resolutions are unified to 384 × 384. ... Both loss-function weights λtra and λcon are experimentally set to 0.5. The EMA weight is set to 0.999. We set the background scores βl = 0.45 and βh = 0.75 in Equation 2. The temperature parameter τ is 0.5 in Equation 4. In Equations 9 and 10, we set the weight factors (w1, w2, w3) to (0.2, 0.2, 0.5). The weights (α1, α2) in Equation 11 are set to (0.3, 0.7). (See the training-configuration sketches after the table.)
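
The 1/2, 1/4, 1/8 labeled-partition protocols referenced under Dataset Splits can be illustrated with a short Python sketch. This is a minimal illustration assuming a seeded random split of the 1450 training images; the paper does not state how its labeled subsets are drawn, and make_partition is a hypothetical helper, not the authors' code.

```python
import random

def make_partition(image_ids, labeled_fraction, seed=0):
    """Split a pool of training images into labeled and unlabeled subsets.

    Hypothetical helper: a seeded random split is assumed here, since the
    paper does not describe how the 1/2, 1/4, 1/8 partitions are sampled.
    """
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    n_labeled = int(len(ids) * labeled_fraction)
    return ids[:n_labeled], ids[n_labeled:]

# 1450 training images, as reported in the paper.
all_ids = [f"img_{i:04d}" for i in range(1450)]
for frac in (1/2, 1/4, 1/8):
    labeled, unlabeled = make_partition(all_ids, frac)
    print(f"{frac:.3f}: {len(labeled)} labeled / {len(unlabeled)} unlabeled")
```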
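The Experiment Setup row lists enough hyperparameters for a hedged reconstruction of the training configuration. The sketch below collects the quoted values in one place and shows a standard mean-teacher EMA update with decay 0.999. The student and teacher modules are placeholders rather than the ACL-Net architecture, and the SGD momentum is left at PyTorch's default because the excerpt does not state it.

```python
import torch

# Hyperparameters as quoted in the paper's experiment setup.
cfg = dict(
    lr=1e-3, batch_size=8, weight_decay=1e-4, img_size=384,
    lambda_tra=0.5, lambda_con=0.5, ema_decay=0.999,
    beta_l=0.45, beta_h=0.75, tau=0.5,
    w=(0.2, 0.2, 0.5), alpha=(0.3, 0.7),
)

# Placeholder networks standing in for ACL-Net's student and teacher.
student = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)
teacher = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)
teacher.load_state_dict(student.state_dict())

optimizer = torch.optim.SGD(student.parameters(),
                            lr=cfg["lr"], weight_decay=cfg["weight_decay"])

@torch.no_grad()
def ema_update(teacher, student, decay=cfg["ema_decay"]):
    # Standard mean-teacher EMA: theta_t <- decay*theta_t + (1-decay)*theta_s
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(decay).add_(s, alpha=1 - decay)
```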
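The temperature τ = 0.5 in Equation 4 suggests an InfoNCE-style contrastive term. The following is a generic sketch of how such a temperature scales cosine similarities, not the paper's exact affinity contrastive formulation; anchor, positive, and negatives are assumed to be feature vectors extracted by the network.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, negatives, tau=0.5):
    """InfoNCE-style loss with temperature tau (assumed form of Equation 4)."""
    anchor = F.normalize(anchor, dim=-1)        # (D,)
    positive = F.normalize(positive, dim=-1)    # (D,)
    negatives = F.normalize(negatives, dim=-1)  # (N, D)
    pos = torch.exp(anchor @ positive / tau)    # similarity to the positive
    neg = torch.exp(negatives @ anchor / tau).sum()  # similarities to negatives
    return -torch.log(pos / (pos + neg))

# Usage with random 128-d features and 16 negatives.
d = 128
loss = contrastive_loss(torch.randn(d), torch.randn(d), torch.randn(16, d))
```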