Certified Defences Against Adversarial Patch Attacks on Semantic Segmentation

Authors: Maksym Yatsura, Kaspar Sakmann, N. Grace Hua, Matthias Hein, Jan Hendrik Metzen

ICLR 2023

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "In extensive experiments we show that DEMASKED SMOOTHING can on average certify 63% of the pixel predictions for a 1% patch in the detection task and 46% against a 0.5% patch for the recovery task on the ADE20K dataset." |
| Researcher Affiliation | Collaboration | Maksym Yatsura (1,2), Kaspar Sakmann (1), N. Grace Hua (1), Matthias Hein (2,3), Jan Hendrik Metzen (1) — (1) Bosch Center for Artificial Intelligence, Robert Bosch GmbH, (2) University of Tübingen, (3) Tübingen AI Center |
| Pseudocode | Yes | "We present the Demasked Smoothing procedure in Algorithm 1." |
| Open Source Code | No | The paper uses and cites third-party model implementations and a checkpoint for an inpainting model, but it does not release its own implementation of the DEMASKED SMOOTHING method it describes. |
| Open Datasets | Yes | "We evaluate DEMASKED SMOOTHING on two challenging semantic segmentation datasets: ADE20K (Zhou et al., 2017) ... and COCO-Stuff-10K (Caesar et al., 2018)." |
| Dataset Splits | Yes | "ADE20K (Zhou et al., 2017) (150 classes, 2000 validation images) and COCO-Stuff-10K (Caesar et al., 2018) (171 classes, 1000 validation images)." |
| Hardware Specification | Yes | "We run the evaluation in parallel on 5 Nvidia Tesla V100-32GB GPUs." |
| Software Dependencies | No | The paper refers to specific models such as ZITS (Dong et al., 2022) and to the mmsegmentation framework (Contributors, 2020), but it does not state explicit version numbers for programming languages or libraries (e.g., Python or PyTorch versions). |
| Experiment Setup | Yes | "We set the patch size to 1% of the image surface. ... As an optimizer we use projected gradient descent (PGD) with 1000 steps and initial step size of 0.01. We use cosine step size schedule and momentum for the gradient with the rate of 0.9." |
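The pseudocode row refers to the Demasked Smoothing procedure (Algorithm 1): mask image regions, reconstruct them with an inpainting model, segment each reconstructed copy, and aggregate per-pixel votes. A rough sketch of that aggregation step, assuming hypothetical `inpaint` and `segment` callables standing in for the paper's inpainting network (ZITS) and base segmentation model — this illustrates the voting pipeline only, not the paper's certification logic:

```python
import numpy as np

def demasked_smoothing(image, masks, inpaint, segment, num_classes):
    """Per-pixel majority vote over demasked (inpainted) copies of an image.

    image: float array of shape (h, w, c).
    masks: iterable of boolean (h, w) arrays; True marks the erased region.
    inpaint(masked_image, mask) -> image  (hypothetical inpainting model)
    segment(image) -> (h, w) int labels   (hypothetical segmentation model)
    """
    h, w = image.shape[:2]
    votes = np.zeros((num_classes, h, w), dtype=np.int64)
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    for mask in masks:
        masked = image.copy()
        masked[mask] = 0.0                # erase the masked region
        demasked = inpaint(masked, mask)  # fill it back in via inpainting
        labels = segment(demasked)        # per-pixel class predictions
        votes[labels, rows, cols] += 1    # one vote per pixel per mask
    return votes.argmax(axis=0), votes
```

The returned vote counts are what a certification rule would inspect per pixel; in this sketch only the plain majority label is computed.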
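The experiment-setup row quotes a PGD attack with 1000 steps, initial step size 0.01, a cosine step-size schedule, and gradient momentum 0.9. A minimal sketch of such an attack loop, assuming a hypothetical `loss_grad` callback that returns the attack-loss gradient with respect to the patch (the paper optimizes against a segmentation model; here the callback is abstract):

```python
import numpy as np

def cosine_step_size(step, num_steps, initial=0.01):
    # Cosine-decayed step size over the attack iterations.
    return initial * 0.5 * (1.0 + np.cos(np.pi * step / num_steps))

def pgd_patch_attack(loss_grad, patch, num_steps=1000,
                     initial_step=0.01, momentum=0.9):
    """Sign-based PGD on a patch with gradient momentum.

    loss_grad(patch) -> gradient of the attack loss w.r.t. the patch
    (hypothetical callback). The patch is projected back to [0, 1]
    after every step.
    """
    velocity = np.zeros_like(patch)
    for step in range(num_steps):
        g = loss_grad(patch)
        velocity = momentum * velocity + g        # momentum accumulation
        alpha = cosine_step_size(step, num_steps, initial_step)
        patch = patch + alpha * np.sign(velocity) # ascent step
        patch = np.clip(patch, 0.0, 1.0)          # projection step
    return patch
```

Whether the paper's attack uses the gradient sign or a normalized gradient is an assumption of this sketch; the quoted hyperparameters (step count, initial step size, cosine schedule, momentum rate) are taken from the table.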