CamoDiffusion: Camouflaged Object Detection via Conditional Diffusion Models

Authors: Zhongxi Chen, Ke Sun, Xianming Lin

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on three COD datasets attest to the superior performance of our model compared to existing state-of-the-art methods, particularly on the most challenging COD10K dataset, where our approach achieves 0.019 in terms of MAE.
Researcher Affiliation | Academia | Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, 361005, P.R. China. {chenzhongxi, skjack}@stu.xmu.edu.cn, linxm@xmu.edu.cn
Pseudocode | No | The paper describes the model architecture and processes in text and diagrams (Fig. 2, Fig. 3) but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Codes and models are available at https://github.com/Rapisurazurite/CamoDiffusion.
Open Datasets | Yes | CamoDiffusion is evaluated on three widely used COD datasets: CAMO (Le et al. 2019), COD10K (Fan et al. 2021a), and NC4K (Lv et al. 2021).
Dataset Splits | No | The paper mentions training and evaluating on CAMO, COD10K, and NC4K, but it does not provide explicit percentages or sample counts for train, validation, or test splits; it only states total image counts for some datasets (e.g., "COD10K contains 5,066 camouflaged, 3,000 background, and 1,934 non-camouflaged images").
Hardware Specification | Yes | We implemented our model based on PyTorch using an NVIDIA A100 for both training and inference.
Software Dependencies | No | The paper names PyTorch as the implementation framework but does not specify its version or the versions of any other software dependencies.
Experiment Setup | Yes | Input images are resized to 384 × 384. For optimization, AdamW was used with a batch size of 32. The learning rate follows a cosine schedule with an initial value of 0.001 for 170 epochs.
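The reported optimization setup (AdamW, batch size 32, 384 × 384 inputs, cosine learning-rate schedule starting at 0.001 over 170 epochs) can be sketched in PyTorch. This is a minimal sketch, not the authors' code: the model is a stand-in placeholder, and the exact cosine variant, warmup, and weight decay are not specified in the paper, so the choices below are assumptions.

```python
import torch
from torch import nn

# Placeholder standing in for the actual CamoDiffusion network (hypothetical).
model = nn.Linear(4, 1)

epochs = 170  # reported training length
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)  # reported initial LR
# Assumed cosine variant: standard CosineAnnealingLR stepped once per epoch.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)

lrs = []
for epoch in range(epochs):
    # ... one pass over the training set would go here
    #     (batches of 32 images resized to 384 x 384) ...
    optimizer.step()  # normally called per mini-batch after loss.backward();
                      # a no-op here since no gradients have been computed
    lrs.append(optimizer.param_groups[0]["lr"])
    scheduler.step()  # decay the LR along the cosine curve once per epoch

# lrs starts at 1e-3 and decays monotonically toward ~0 over 170 epochs.
```

The schedule decays the learning rate from 0.001 to near zero by the final epoch; if the authors used warm restarts or a warmup phase instead, `CosineAnnealingWarmRestarts` or a chained scheduler would replace the line above.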