Boundary-Guided Camouflaged Object Detection

Authors: Yujia Sun, Shuo Wang, Chenglizhao Chen, Tian-Zhu Xiang

IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on three challenging benchmark datasets demonstrate that our BGNet significantly outperforms the existing 18 state-of-the-art methods under four widely-used evaluation metrics. Our code is publicly available at: https://github.com/thograce/BGNet." (Evidence sections: 4 Experiments; 4.1 Implementation Details; 4.2 Datasets; 4.3 Evaluation Metrics; 4.4 Comparison with State-of-the-arts; 4.5 Ablation Study)
Researcher Affiliation | Collaboration | Yujia Sun (1: School of Computer Science, Inner Mongolia University, China); Shuo Wang (2: ETH Zurich, Switzerland); Chenglizhao Chen (3: College of Computer Science and Technology, China University of Petroleum, China); Tian-Zhu Xiang (4: Inception Institute of Artificial Intelligence, UAE)
Pseudocode | No | The paper contains no structured pseudocode or algorithm blocks. Module architectures are illustrated in figures, but no step-by-step, code-like algorithm is provided.
Open Source Code | Yes | "Our code is publicly available at: https://github.com/thograce/BGNet."
Open Datasets | Yes | "We evaluate our method on three public benchmark datasets: CAMO [Le et al., 2019], COD10K [Fan et al., 2020a] and NC4K [Lv et al., 2021]. We follow the previous works [Fan et al., 2020a], which use the training set of CAMO and COD10K as our training set, and use their testing set and NC4K as our testing sets." (A dataset-assembly sketch follows the table.)
Dataset Splits | No | The paper describes only training and testing sets: "We follow the previous works [Fan et al., 2020a], which use the training set of CAMO and COD10K as our training set, and use their testing set and NC4K as our testing sets." It does not, however, specify a validation split or its size/percentage. (See the hold-out sketch after the table.)
Hardware Specification | Yes | "Accelerated by an NVIDIA Tesla P40 GPU, the whole training takes about 2 hours with 25 epochs."
Software Dependencies | No | The paper states "We implement our model with PyTorch" but specifies no version for PyTorch or any other software dependency needed for replication.
Experiment Setup | Yes | "We resize all the input images to 416 x 416 and augment them by randomly horizontal flipping. During the training stage, the batch size is set to 16, and the Adam optimizer [Kingma and Ba, 2014] is adopted. The learning rate is initialized to 1e-4 and adjusted by poly strategy with the power of 0.9. Accelerated by an NVIDIA Tesla P40 GPU, the whole training takes about 2 hours with 25 epochs." (A training-recipe sketch follows the table.)
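
Open Datasets: the quoted split protocol is mechanical enough to express in code. Below is a minimal sketch of that protocol, assuming a hypothetical `dataset_cls` loader and directory layout; neither is specified by the paper or the BGNet repository.

```python
import os
from torch.utils.data import ConcatDataset

# Sketch of the quoted split protocol: train on CAMO-train + COD10K-train;
# test on CAMO-test, COD10K-test, and all of NC4K. The directory layout and
# the `dataset_cls` constructor are assumptions for illustration only.
def make_splits(root, dataset_cls):
    train = ConcatDataset([
        dataset_cls(os.path.join(root, "CAMO", "train")),
        dataset_cls(os.path.join(root, "COD10K", "train")),
    ])
    test = {
        "CAMO":   dataset_cls(os.path.join(root, "CAMO", "test")),
        "COD10K": dataset_cls(os.path.join(root, "COD10K", "test")),
        "NC4K":   dataset_cls(os.path.join(root, "NC4K")),  # NC4K is test-only
    }
    return train, test
```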
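Dataset Splits: the paper reports no validation set, so a replicator must invent one. Purely as an illustration of that decision, the sketch below carves a seeded hold-out from the combined training set; the 10% fraction and the seed are assumptions, not the authors' protocol.

```python
import torch
from torch.utils.data import random_split

# Hypothetical validation hold-out (not described in the paper): reserve a
# fixed, seeded fraction of the combined CAMO + COD10K training set.
def carve_val_split(train_set, val_fraction=0.1, seed=0):
    n_val = int(len(train_set) * val_fraction)
    generator = torch.Generator().manual_seed(seed)
    return random_split(train_set, [len(train_set) - n_val, n_val],
                        generator=generator)
```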
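Experiment Setup: the quoted hyperparameters map directly onto a short PyTorch training skeleton. The following is a minimal sketch under two assumptions: the placeholder model returns its own loss, and the poly schedule is stepped per epoch (the paper does not say whether decay is applied per iteration or per epoch).

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import transforms

# Sketch of the stated recipe: 416x416 inputs, random horizontal flipping,
# batch size 16, Adam at lr 1e-4, and poly decay
# lr(e) = 1e-4 * (1 - e / 25) ** 0.9 over 25 epochs.
# This is our reading of the paper's text, not the authors' code.
transform = transforms.Compose([
    transforms.Resize((416, 416)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

def train(model: nn.Module, train_set, epochs: int = 25, base_lr: float = 1e-4):
    loader = DataLoader(train_set, batch_size=16, shuffle=True)
    optimizer = optim.Adam(model.parameters(), lr=base_lr)
    scheduler = optim.lr_scheduler.LambdaLR(
        optimizer, lr_lambda=lambda e: (1.0 - e / epochs) ** 0.9)
    for _ in range(epochs):
        for images, masks in loader:
            optimizer.zero_grad()
            loss = model(images, masks)  # placeholder: assumes model returns its loss
            loss.backward()
            optimizer.step()
        scheduler.step()  # poly decay, stepped once per epoch (assumption)
```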