GenSeg: On Generating Unified Adversary for Segmentation

Authors: Yuxuan Zhang, Zhenbo Shi, Wei Yang, Shuchang Wang, Shaowei Wang, Yinxing Xue

IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate the superiority of GenSeg in black-box attacks compared with state-of-the-art attacks. To evaluate the effectiveness of GenSeg, we conduct comprehensive experiments on SS, IS, and PS, respectively. We employ 9 datasets and 15 models in total to validate our method.
Researcher Affiliation | Academia | 1 School of Computer Science and Technology, University of Science and Technology of China; 2 Suzhou Institute for Advanced Research, University of Science and Technology of China; 3 Hefei National Laboratory, University of Science and Technology of China; 4 Institute of Artificial Intelligence and Blockchain, Guangzhou University
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. Procedures are described using mathematical formulas and descriptive text.
Open Source Code | Yes | Code is available at: https://github.com/YXZhang979/GenSeg
Open Datasets | Yes | To evaluate the attack effectiveness of GenSeg, we employ the commonly used Pascal VOC (20 classes) [Everingham et al., 2010], Cityscapes (19 classes) [Cordts et al., 2016], and ADE20K (150 classes) [Zhou et al., 2017] for SS. For IS, we use the widely used Cityscapes (8 things), COCO (80 things) [Lin et al., 2014], and ADE20K (100 things). As to PS, we adopt COCO (80 things and 53 stuff), Cityscapes (8 things and 11 stuff), and ADE20K (100 things and 50 stuff).
Dataset Splits | No | The paper mentions using well-known datasets and training models, but it does not explicitly provide the percentages, sample counts, or references to predefined training/validation/test splits needed for reproduction.
Hardware Specification | No | The paper does not specify the hardware used to run its experiments (exact GPU/CPU models, clock speeds, memory amounts, or other machine specifications).
Software Dependencies | No | The paper mentions using specific models and optimizers (e.g., Adam optimizer, ResNet-based model) but does not provide specific software dependencies with version numbers (e.g., 'PyTorch 1.9', 'CUDA 11.1').
Experiment Setup | Yes | We use the Adam optimizer with a learning rate of 5e-3 (β1 = 0.5, β2 = 0.999) for 100 epochs. We set the perturbation budget ϵ to the typical value of 8/255. Besides, we set attack iteration to 5, striking a balance between attack capability and efficiency.
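
The hyperparameters quoted in the Experiment Setup row can be expressed as a short training/attack configuration. The sketch below is a minimal illustration of those reported values only (Adam with learning rate 5e-3 and betas (0.5, 0.999), 100 epochs, ϵ = 8/255, 5 attack iterations); the generator, victim model, loss, and data are placeholders and are not GenSeg's actual architecture or objective.

```python
import torch
import torch.nn as nn

# Placeholder perturbation generator -- NOT GenSeg's architecture.
generator = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
)

# Hyperparameters quoted in the paper's experiment setup.
optimizer = torch.optim.Adam(generator.parameters(), lr=5e-3, betas=(0.5, 0.999))
epsilon = 8 / 255      # perturbation budget
epochs = 100           # training epochs
attack_iters = 5       # attack iterations

# Stand-in victim model and data so the sketch runs end to end
# (21 channels assume Pascal VOC's 20 classes plus background).
victim = nn.Conv2d(3, 21, 1)
criterion = nn.CrossEntropyLoss()

for epoch in range(epochs):
    images = torch.rand(4, 3, 64, 64)               # dummy batch in place of a real loader
    labels = torch.randint(0, 21, (4, 64, 64))
    adv = images
    for _ in range(attack_iters):                   # iterative refinement of the adversary
        delta = epsilon * generator(adv)            # Tanh output scaled into the eps-ball
        adv = torch.clamp(images + delta, 0.0, 1.0)
    loss = -criterion(victim(adv), labels)          # maximize the victim's segmentation loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```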
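
Similarly, for the benchmarks listed in the Open Datasets row, two of them (Pascal VOC 2012 and Cityscapes) ship with ready-made loaders in torchvision. The snippet below is a hedged illustration of pulling their validation splits with placeholder paths; Cityscapes must be downloaded manually, and the COCO/ADE20K instance- and panoptic-segmentation splits need their own loaders that are not shown here.

```python
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

# Pascal VOC 2012 semantic-segmentation validation split (downloaded automatically).
voc_val = datasets.VOCSegmentation(
    root="data/voc", year="2012", image_set="val",
    download=True, transform=to_tensor)

# Cityscapes fine-annotation validation split (requires a manual download into root).
cityscapes_val = datasets.Cityscapes(
    root="data/cityscapes", split="val", mode="fine",
    target_type="semantic", transform=to_tensor)
```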