Deep Camouflage Images
Authors: Qing Zhang, Gelin Yin, Yongwei Nie, Wei-Shi Zheng
AAAI 2020, pp. 12845–12852 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show the advantages of our approach over existing camouflage methods and state-of-the-art neural style transfer algorithms. In this section, we perform various experiments to validate the effectiveness of the proposed approach. We first compare our approach against existing methods. Then we conduct ablation studies to evaluate the effectiveness of the loss components and the attention in our algorithm. |
| Researcher Affiliation | Academia | Qing Zhang (1), Gelin Yin (1), Yongwei Nie (2), Wei-Shi Zheng (1,3,4). (1) School of Data and Computer Science, Sun Yat-sen University, China; (2) School of Computer Science and Engineering, South China University of Technology, China; (3) Peng Cheng Laboratory, Shenzhen 518005, China; (4) The Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, China |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | Our code will be made publicly available at http://zhangqing-home.net/. |
| Open Datasets | No | No concrete access information (link, DOI, repository, formal citation with author/year, or reference to established benchmark datasets) for a publicly available or open dataset was found. The paper mentions collecting 28 background images and 17 foreground objects but provides no access details for this dataset. |
| Dataset Splits | No | No specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning was found. |
| Hardware Specification | Yes | All our experiments were conducted on a PC with an NVIDIA 1080Ti GPU. |
| Software Dependencies | No | The paper states 'Our algorithm was implemented in PyTorch (Paszke et al. 2017)' but does not provide a specific version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | Similar to (Gatys, Ecker, and Bethge 2016), our results are generated based on the pre-trained VGG-19 (Simonyan and Zisserman 2014). conv4_1 is used in the camouflage loss, while conv1_1, conv2_1, conv3_1 and conv4_1 are chosen for the style loss. α_ℓ^1 = 1, α_ℓ^2 = 1 and β_ℓ = 1.5 are set for the selected convolutional layers in the respective losses, and are set to zero for the other, unselected layers. The parameters λ_cam = 10^6, λ_reg = 10^9 and λ_tv = 10^3 are used to produce all our results, which works well for most cases. We employ the L-BFGS solver (Liu and Nocedal 1989) for image reconstruction. (A hedged PyTorch sketch of this setup is shown after the table.) |
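
Putting the reported pieces together, the sketch below shows how this setup might be wired up in PyTorch: a frozen pre-trained VGG-19, Gram-matrix style losses on conv1_1 through conv4_1, a term at conv4_1 standing in for the camouflage loss, total-variation smoothing, the reported weights (β_ℓ = 1.5, λ_cam = 10^6, λ_reg = 10^9, λ_tv = 10^3), and direct L-BFGS optimisation of the output image. This is a minimal illustration, not the authors' implementation: the camouflage and regularisation terms are simplified placeholders (the paper's exact L_cam and L_reg are not reproduced in this table), the torchvision layer indices for the named conv layers are assumed, and the inputs are random stand-ins.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pre-trained VGG-19 feature extractor, as in Gatys, Ecker, and Bethge (2016).
vgg = models.vgg19(pretrained=True).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

# Indices of conv1_1, conv2_1, conv3_1, conv4_1 in torchvision's VGG-19 `features`.
STYLE_LAYERS = {0: "conv1_1", 5: "conv2_1", 10: "conv3_1", 19: "conv4_1"}

def extract_features(x):
    feats = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            feats[STYLE_LAYERS[i]] = x
        if i >= max(STYLE_LAYERS):
            break
    return feats

def gram(feat):
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def total_variation(x):
    return (x[..., :, 1:] - x[..., :, :-1]).abs().mean() + \
           (x[..., 1:, :] - x[..., :-1, :]).abs().mean()

# Loss weights reported in the paper; beta is the per-layer style weight.
lam_cam, lam_reg, lam_tv, beta = 1e6, 1e9, 1e3, 1.5

def objective(out, bg_img, bg_feats, fg_feats):
    feats = extract_features(out)
    # Style loss over conv1_1..conv4_1 against the background (Gram matrices).
    style = sum(beta * F.mse_loss(gram(feats[k]), gram(bg_feats[k])) for k in feats)
    # Placeholder camouflage term at conv4_1: keep the foreground structure
    # recognisable. (The paper's actual L_cam differs; this is an assumption.)
    cam = F.mse_loss(feats["conv4_1"], fg_feats["conv4_1"])
    # Placeholder regularisation pulling pixels toward the background (assumption).
    reg = F.mse_loss(out, bg_img)
    return style + lam_cam * cam + lam_reg * reg + lam_tv * total_variation(out)

# Stand-in inputs; in practice these are the normalised background photo and the
# foreground object embedded at its target location.
background = torch.rand(1, 3, 256, 256, device=device)
foreground = torch.rand(1, 3, 256, 256, device=device)
with torch.no_grad():
    bg_feats = extract_features(background)
    fg_feats = extract_features(foreground)

# Optimise the output image directly with L-BFGS (Liu and Nocedal 1989).
output = background.clone().requires_grad_(True)
optimizer = torch.optim.LBFGS([output], max_iter=300)

def closure():
    optimizer.zero_grad()
    loss = objective(output, background, bg_feats, fg_feats)
    loss.backward()
    return loss

optimizer.step(closure)
```

As in standard neural style transfer, L-BFGS is given a closure that re-evaluates the loss, since it performs multiple function evaluations per optimisation step.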