Inferring Camouflaged Objects by Texture-Aware Interactive Guidance Network

Authors: Jinchao Zhu, Xiaoyu Zhang, Shuo Zhang, Junnan Liu (pp. 3599-3607)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Sufficient experiments conducted on the COD and SOD datasets demonstrate that the proposed method performs favorably against 23 state-of-the-art methods.
Researcher Affiliation | Academia | 1) Institute of Robotics and Automatic Information System, College of Artificial Intelligence, Nankai University, Tianjin, China; 2) Tianjin Key Laboratory of Intelligent Robotics, Nankai University, Tianjin, China; 3) College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, China
Pseudocode | No | The paper describes its methods in text and equations but does not include an explicitly labeled pseudocode or algorithm block.
Open Source Code | No | The paper does not provide any statement or link indicating the release of open-source code for the described methodology.
Open Datasets | Yes | We combine the training datasets of CAMO-Train, CPD1K-Train, COD10K-Train and take them as the COD training dataset, which follows SINet (Fan et al. 2020a). We use DUTS-TR (Wang et al. 2017) as the training dataset for SOD.
Dataset Splits | No | The paper extensively discusses training and testing datasets but does not explicitly mention a distinct validation split or its use for model tuning or evaluation during training.
Hardware Specification | Yes | We train the model on a PC with 16GB RAM and an RTX 2080Ti GPU.
Software Dependencies | No | The paper states that "ResNet-50 (He et al. 2016) are adopted as backbone" but does not specify versioned software dependencies such as a deep learning framework (e.g., PyTorch, TensorFlow) or Python.
Experiment Setup | Yes | Warm-up and linear decay strategies are used. The maximum learning rate is 5e-3 for the backbone and 0.05 for other parts. Stochastic gradient descent is adopted to train the network with a momentum of 0.9 and a weight decay of 5e-4. Batch size and maximum epoch are set to 32 and 45 respectively. We resize images to 352 x 352 in the inference stage.
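The reported optimizer settings (SGD, momentum 0.9, weight decay 5e-4, peak LR 5e-3 for the backbone, 45 epochs) can be sketched as a warm-up plus linear-decay learning-rate rule. This is a minimal illustration, not the authors' code: the warm-up length (here 5 epochs) is an assumption, since the paper does not state it.

```python
def lr_at_epoch(epoch, max_epoch=45, max_lr=5e-3, warmup_epochs=5):
    """Warm-up then linear decay, peaking at max_lr.

    max_lr is 5e-3 for the backbone and 0.05 for other parts, per the paper.
    warmup_epochs=5 is a hypothetical choice; the paper does not specify it.
    """
    if epoch < warmup_epochs:
        # linear warm-up from max_lr / warmup_epochs up to max_lr
        return max_lr * (epoch + 1) / warmup_epochs
    # linear decay from max_lr down to 0 over the remaining epochs
    return max_lr * (max_epoch - epoch) / (max_epoch - warmup_epochs)


# The optimizer itself would then be configured roughly as (PyTorch shown
# for illustration only; the paper names no framework):
#   torch.optim.SGD(params, lr=lr_at_epoch(0), momentum=0.9, weight_decay=5e-4)
```

In a training loop, the learning rate would be recomputed from `lr_at_epoch` at the start of each epoch and written into the optimizer's parameter groups.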