Towards Adversarially Robust Deep Image Denoising

Authors: Hanshu Yan, Jingfeng Zhang, Jiashi Feng, Masashi Sugiyama, Vincent Y. F. Tan

IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on benchmark datasets, including Set68, PolyU, and SIDD, corroborate the effectiveness of OBSATK and HAT.
Researcher Affiliation | Collaboration | Hanshu Yan (1), Jingfeng Zhang (2), Jiashi Feng (3), Masashi Sugiyama (2,4), and Vincent Y. F. Tan (5,1). Affiliations: 1 ECE, NUS; 2 RIKEN-AIP; 3 ByteDance Inc.; 4 GSFS, UTokyo; 5 Math, NUS.
Pseudocode | Yes | The paper provides Algorithm 1 (OBSATK), reconstructed below; a runnable sketch of this loop appears after the table.

Algorithm 1 OBSATK
Input: denoiser f_θ(·), ground truth x, noisy observation y, adversarial budget ρ, number of iterations T, step size η, minimum pixel value p_min, maximum pixel value p_max
Output: adversarial perturbation δ
1: δ ← 0
2: for t = 1 to T do
3:   δ ← δ + η ∇_δ ‖f_θ(y + δ) − x‖₂²
4:   δ ← δ − (δᵀn / ‖n‖₂²) n, where n is defined in (4a)
5:   δ ← min(ρ / ‖δ‖₂, 1) · δ
6: end for
7: δ ← Clip(y + δ, p_min, p_max) − y
Open Source Code | No | "Please refer to the full-length paper [Yan et al., 2022] for the appendices, proofs, and codes." This statement defers to the full-length paper for code, but the provided PDF contains neither an explicit code listing nor a direct link to a repository for the described methodology.
Open Datasets | Yes | Extensive experiments on benchmark datasets, including Set68, PolyU, and SIDD, corroborate the effectiveness of OBSATK and HAT. For gray-scale image denoising, we use Train400 to train a DnCNN-B [Zhang et al., 2017] model... For RGB image denoising, we use BSD432 (BSD500 excluding images in BSD68) to train a DnCNN-C model... PolyU [Xu et al., 2018], CC [Xu et al., 2017], and SIDD [Abdelhamed et al., 2018].
Dataset Splits | No | The paper describes training datasets (e.g., Train400, BSD432, SIDD-small) and test datasets (e.g., Set12, Set68, PolyU, CC, SIDD-val) but does not explicitly provide details about a separate validation set or its split (percentages, sample counts, or specific pre-defined splits).
Hardware Specification | No | The paper does not provide specific hardware details, such as GPU/CPU models, processor types, or memory amounts, used for running the experiments.
Software Dependencies | No | The paper does not specify ancillary software with version numbers, such as programming language or library versions (e.g., Python 3.8, PyTorch 1.9).
Experiment Setup | Yes | The noise levels σ are uniformly randomly selected from [0, ϵ] with ϵ = 25/255. For gray-scale image denoising, we use Train400 to train a DnCNN-B [Zhang et al., 2017] model, which consists of 20 convolutional layers. We follow the training setting in Zhang et al. [2017] and randomly crop 128 × 3,000 patches of size 50 × 50. The number of iterations of PGD in OBSATK is set to five. We train deep denoisers with the HAT strategy and set α to be 1, and use one-step Atk-5/255 to generate adversarially noisy images for training. In practice, we set the value of ρ of OBSATK to be 5/255 × √m, where m denotes the size of the image patches. The value of α of HAT is kept unchanged at 2. For DIP and N2S... the numbers of iterations for each image are set to 2,000 and 1,000, respectively. (A sketch of the corresponding training step follows the table.)
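Read as a PGD-style attack, Algorithm 1 alternates a gradient-ascent step on the reconstruction error (step 3) with two projections: onto the zero-mean subspace (step 4) and onto the ℓ2 ball of radius ρ (step 5). Below is a minimal PyTorch sketch of that loop. The function name obs_atk, the (N, C, H, W) tensor layout, the step size default, and the reading of n in (4a) as the all-ones vector (so step 4 simply removes each image's mean perturbation) are assumptions, not taken from the quoted text.

```python
import torch


def obs_atk(denoiser, x, y, rho, T=5, eta=0.01, p_min=0.0, p_max=1.0):
    """PGD-style sketch of Algorithm 1 (OBSATK) for (N, C, H, W) tensors.

    Maximizes the l2 reconstruction error of the denoiser under a zero-mean,
    l2-bounded perturbation of the noisy observation y. The step size eta and
    the reading of n in (4a) as the all-ones vector are assumptions.
    """
    delta = torch.zeros_like(y)
    for _ in range(T):
        delta = delta.detach().requires_grad_(True)
        # step 3: gradient ascent on ||f_theta(y + delta) - x||_2^2
        loss = (denoiser(y + delta) - x).pow(2).sum()
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta = delta + eta * grad
            # step 4: zero-mean projection (assumes n is the all-ones
            # vector, so this removes each image's mean perturbation)
            delta = delta - delta.mean(dim=(1, 2, 3), keepdim=True)
            # step 5: project each perturbation onto the l2 ball of radius rho
            norms = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
            delta = delta * torch.clamp(rho / norms, max=1.0)
    # step 7: keep the perturbed observation inside the valid pixel range
    return (y + delta).clamp(p_min, p_max) - y
```

With T = 5 this matches the quoted five-iteration PGD setting; with T = 1 it gives a one-step attack of the kind used during training.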
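On the training side, the quoted setup combines normally noisy observations with adversarially noisy ones generated by a one-step Atk-5/255, weighted by α. The sketch below shows one HAT training step under that reading; the exact form of the hybrid loss, the helper name hat_step, and the reuse of obs_atk above with T = 1 are assumptions.

```python
def hat_step(denoiser, optimizer, x, eps=25 / 255, atk_eps=5 / 255, alpha=1.0):
    """One HAT training step on a batch of clean patches x (sketch).

    Combines the ordinary denoising loss on a normally noisy observation
    with the loss on an adversarially noisy observation, weighted by alpha.
    The exact form of the hybrid loss is an assumption.
    """
    # sample sigma uniformly from [0, eps] per image, as in the quoted setup
    sigma = torch.rand(x.size(0), 1, 1, 1, device=x.device) * eps
    y = x + sigma * torch.randn_like(x)

    # one-step Atk-5/255 with l2 budget rho = (5/255) * sqrt(m),
    # where m is the number of pixels per patch (e.g., 2,500 for 50 x 50 gray)
    m = x[0].numel()
    y_adv = y + obs_atk(denoiser, x, y, rho=atk_eps * m ** 0.5, T=1)

    loss = ((denoiser(y) - x) ** 2).mean() \
        + alpha * ((denoiser(y_adv) - x) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```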