Causal-IQA: Towards the Generalization of Image Quality Assessment Based on Causal Inference

Authors: Yan Zhong, Xingyu Wu, Li Zhang, Chenxi Yang, Tingting Jiang

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments illustrate the superiority of Causal-IQA. We conduct extensive experiments on both authentically and synthetically distorted image databases to validate the effectiveness and generalization capabilities of the proposed method.
Researcher Affiliation | Academia | (1) School of Mathematical Sciences, Peking University, Beijing, China; (2) National Engineering Research Center of Visual Technology, National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, China; (3) Department of Computing, The Hong Kong Polytechnic University, Hong Kong SAR, China; (4) Hefei Institute of Physical Science, Chinese Academy of Sciences, University of Science and Technology of China, Hefei, China; (5) National Biomedical Imaging Center, Peking University, Beijing, China.
Pseudocode | Yes | Eventually, according to the updating mode of the median parameters ϕs, we propose two versions of the optimization algorithm for the Causal-IQA method, which are summarized in Algorithm 1 and Algorithm 2 and termed Causal-IQA-S (CIS for short) and Causal-IQA-P (CIP for short), respectively.
Open Source Code | No | The information is insufficient. The paper links to a third-party CNN visualization tool ('The code is available from https://github.com/sar-gupta/convisualize_nb.') but does not release code for the Causal-IQA method itself.
Open Datasets | Yes | In this paper, we perform experiments on the following five representative IQA databases: TID2013 (Ponomarenko et al., 2015), KADID-10K (Lin et al., 2019), KonIQ-10K (Hosu et al., 2020), LIVE-C (Ghadiyaram & Bovik, 2015), CID2013 (Virtanen et al., 2014).
Dataset Splits | Yes | In this part, we train Causal-IQA methods on two synthetically distorted datasets and fine-tune the trained model on three authentically distorted datasets, 80% of which are allocated for fine-tuning and 20% for testing. To assess how well our Causal-IQA generalizes to unfamiliar distortion types, we evaluate our approach by employing Leave-One-Distortion-Out (Zhu et al., 2020) cross-validation on the TID2013 and KADID-10K databases.
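The Leave-One-Distortion-Out protocol quoted above can be sketched in plain Python. The sample records and distortion labels below are hypothetical placeholders, not data from the paper; the sketch only illustrates the splitting rule, in which each fold tests on exactly one distortion type that was never seen during training:

```python
from collections import defaultdict

def leave_one_distortion_out(samples):
    """Yield (held_out, train, test) splits where the test set contains
    exactly one distortion type that is absent from the training set."""
    by_distortion = defaultdict(list)
    for image_id, distortion in samples:
        by_distortion[distortion].append(image_id)
    for held_out in by_distortion:
        test = by_distortion[held_out]
        train = [img for d, imgs in by_distortion.items()
                 if d != held_out for img in imgs]
        yield held_out, train, test

# Toy example with three hypothetical distortion types.
samples = [("img1", "blur"), ("img2", "blur"),
           ("img3", "noise"), ("img4", "jpeg")]
splits = list(leave_one_distortion_out(samples))
# One split per distortion type; the held-out type never appears in training.
```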
Hardware Specification | Yes | The training network is constructed following MetaIQA and is trained with the PyTorch library (Paszke et al., 2017) on two Intel Xeon E5-2609 v4 CPUs and four NVIDIA RTX 2080Ti GPUs.
Software Dependencies | No | The information is insufficient. The paper mentions the 'PyTorch library (Paszke et al., 2017)' but does not specify a PyTorch version or any other relevant software dependencies.
Experiment Setup | Yes | We use the Adam optimizer with β1 = 0.9 and β2 = 0.9999 for both task updating and meta updating, and the learning rates for task updating and meta updating are set as 1e-4 and 1e-2 respectively. The learning rate for task updating decays by a factor of 0.8 after every 10 epochs, with the total epoch number set as 100. In the process of fine-tuning, the learning rate of the Adam optimizer is also set as 1e-4 with total epoch number 30. The batch sizes B in TID2013 and KADID-10K are set as 32 and 102 for combined training, and B = 64 during the test process.
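The quoted schedule (task-update learning rate 1e-4, decayed by a factor of 0.8 every 10 epochs over 100 epochs) can be sanity-checked with a small sketch. The helper below is illustrative only; it mirrors what PyTorch's `torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.8)` would produce, under the assumption that the decay applies at each 10-epoch boundary:

```python
def task_lr(epoch, base_lr=1e-4, gamma=0.8, step=10):
    """Learning rate at a given epoch under step decay
    (equivalent to StepLR with step_size=10, gamma=0.8)."""
    return base_lr * gamma ** (epoch // step)

# First decay takes effect entering epoch 10; by epoch 99 (the last of
# 100 epochs) the rate has been decayed nine times.
lr_start = task_lr(0)     # 1e-4
lr_epoch10 = task_lr(10)  # 8e-5
lr_final = task_lr(99)    # 1e-4 * 0.8**9, roughly 1.34e-5
```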