Perceptual Attacks of No-Reference Image Quality Models with Human-in-the-Loop
Authors: Weixia Zhang, Dingquan Li, Xiongkuo Min, Guangtao Zhai, Guodong Guo, Xiaokang Yang, Kede Ma
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We test one knowledge-driven and three data-driven NR-IQA methods under four full-reference IQA models (as approximations to human perception of just-noticeable differences). Through carefully designed psychophysical experiments, we find that all four NR-IQA models are vulnerable to the proposed perceptual attack. More interestingly, we observe that the generated counterexamples are not transferable, manifesting themselves as distinct design flaws of respective NR-IQA methods. Source code is available at https://github.com/zwx8981/PerceptualAttack_BIQA. We conduct an extensive experiment to examine four NR-IQA models, the knowledge-driven BRISQUE [17], the shallow learning-based CORNIA [21], as well as the deep learning-based Ma19 [22] and UNIQUE [6], under four FR-IQA models, the Chebyshev distance (i.e., the ℓ∞-norm induced metric), SSIM, LPIPS, and DISTS (as approximations to human perception of JNDs). |
| Researcher Affiliation | Academia | 1 MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University; 2 Network Intelligence Research Department, Peng Cheng Laboratory; 3 Department of Computer Science and Electrical Engineering, West Virginia University; 4 Department of Computer Science, City University of Hong Kong. Emails: {zwx8981, minxiongkuo, zhaiguangtao, xkyang}@sjtu.edu.cn, lidq01@pcl.ac.cn, guodong.guo@mail.wvu.edu, kede.ma@cityu.edu.hk |
| Pseudocode | Yes | Algorithm 1: Perceptually Imperceptible Counterexample Generation (a hedged sketch of this procedure is given after the table). |
| Open Source Code | Yes | Source code is available at https://github.com/zwx8981/PerceptualAttack_BIQA. |
| Open Datasets | Yes | We use the training codes provided by the original authors to re-train BRISQUE and CORNIA on LIVE [7], Ma19 [22] on our own collected dataset, and UNIQUE on six human-rated IQA databases [7, 63, 64, 9, 5, 65]. We collect twelve images as initializations from the publicly available LIVE IQA database [7] (see Fig. 3)... |
| Dataset Splits | No | The paper states which datasets were used for training different models (e.g., "re-train BRISQUE and CORNIA on LIVE [7]"), but it does not specify the train/validation/test splits (e.g., percentages or sample counts) used for this training or for their own experiments beyond selecting "twelve initial images" from LIVE. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependency details with version numbers (e.g., "Python 3.8, PyTorch 1.9"). It mentions "training codes provided by the original authors" but no versions. |
| Experiment Setup | Yes | For each of sixteen combinations of NR-IQA and FR-IQA models, and each of the twelve initial images, we set λ to 32 values and optimize the objective in Eq. (2) to generate 32 perturbed images (see the λ-sweep sketch after the table)... We set the step size γ to 10⁻³ and the maximum number of iterations to 200, respectively. As suggested by the BT.500 recommendations [69], we carry out the experiments in an indoor office environment under normal lighting (approximately 200 lux) and without reflective ceilings, walls, or floors. The peak luminance of the displayed images is mapped to 200 cd/m². We recruit fifteen human subjects (with normal or corrected-to-normal vision) to participate in the psychophysical experiment, viewing the image pairs from a fixed distance of twice the screen height. |
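The table quotes Algorithm 1 (Perceptually Imperceptible Counterexample Generation) and the Eq. (2) objective only by name. Below is a minimal PyTorch sketch of the underlying idea: push a differentiable NR-IQA prediction away from its initial value while a differentiable FR-IQA distance to the original image, weighted by the Lagrangian multiplier λ, keeps the perturbation perceptually invisible. The function name `perceptual_attack`, the normalized-gradient update, and the clamping to [0, 1] are illustrative assumptions; only the step size γ = 10⁻³ and the 200-iteration budget are taken from the quoted setup, and the paper's Algorithm 1 should be consulted for the exact update rule.

```python
import torch


def perceptual_attack(x0, nr_model, fr_metric, lam, gamma=1e-3, max_iters=200):
    """Sketch of perceptually imperceptible counterexample generation.

    Starting from an initial image x0, gradient ascent pushes the NR-IQA
    prediction upward while the FR-IQA distance to x0, weighted by lam,
    penalizes perceptually visible changes.

    x0        -- image tensor of shape (1, 3, H, W) with values in [0, 1]
    nr_model  -- differentiable NR-IQA model: image -> quality score
    fr_metric -- differentiable FR-IQA measure: (image, reference) -> distance
    lam       -- Lagrangian weight trading score change against visual fidelity
    """
    x = x0.clone().detach().requires_grad_(True)
    for _ in range(max_iters):
        score = nr_model(x)                    # predicted quality of the perturbed image
        distance = fr_metric(x, x0)            # perceptual distance to the original
        objective = (score - lam * distance).sum()  # Lagrangian stand-in for Eq. (2)
        grad, = torch.autograd.grad(objective, x)
        with torch.no_grad():
            # Normalized-gradient ascent step of size gamma; the paper's
            # Algorithm 1 defines its own update, so this is a placeholder.
            x += gamma * grad / (grad.norm() + 1e-12)
            x.clamp_(0.0, 1.0)                 # stay in the valid pixel range
    return x.detach()
```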
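The quoted setup sweeps 32 values of λ per combination of NR-IQA model, FR-IQA model, and initial image, producing 32 perturbed images each. A hypothetical usage sketch of the function above follows, with toy stand-ins for the real models (BRISQUE/CORNIA/Ma19/UNIQUE and the Chebyshev distance/SSIM/LPIPS/DISTS) and a log-spaced λ schedule assumed purely for illustration; the actual λ values are not quoted here.

```python
import torch
import torch.nn.functional as F

# Toy stand-ins so the sketch above can be exercised end to end; the real
# experiments plug in BRISQUE / CORNIA / Ma19 / UNIQUE as nr_model and the
# Chebyshev distance / SSIM / LPIPS / DISTS as fr_metric.
def toy_nr_model(x):
    return x.mean()                    # dummy "quality score"


def toy_fr_metric(x, ref):
    return F.mse_loss(x, ref)          # dummy "perceptual distance"


x0 = torch.rand(1, 3, 64, 64)          # stand-in for one of the twelve initial images

# 32 Lagrangian weights per (NR-IQA, FR-IQA, image) combination, as quoted;
# the log spacing itself is an assumption made only for illustration.
lambdas = torch.logspace(-2, 4, steps=32)
perturbed = [perceptual_attack(x0, toy_nr_model, toy_fr_metric, lam.item())
             for lam in lambdas]
```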