Semi-Supervised Blind Image Quality Assessment through Knowledge Distillation and Incremental Learning

Authors: Wensheng Pan, Timin Gao, Yan Zhang, Xiawu Zheng, Yunhang Shen, Ke Li, Runze Hu, Yutao Liu, Pingyang Dai

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that the proposed approach achieves state-of-the-art performance across various benchmark datasets.
Researcher Affiliation | Collaboration | 1 Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, Xiamen 361005, China; 2 Peng Cheng Laboratory, Shenzhen 518066, China; 3 Tencent Youtu Lab, Shanghai 200233, China; 4 School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China; 5 School of Computer Science and Technology, Ocean University of China, Qingdao 266100, China
Pseudocode | No | The paper describes the methods in prose and with a diagram (Figure 2), but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository.
Open Datasets | Yes | We evaluate the performance of our proposed model on eight typical image quality evaluation datasets, including four synthetic datasets and four authentic datasets. The synthetic datasets we used are LIVE (Sheikh, Sabir, and Bovik 2006), CSIQ (Larson and Chandler 2010), TID2013 (Ponomarenko et al. 2015), and KADID (Lin, Hosu, and Saupe 2019). The authentic datasets we used are LIVEC (Ghadiyaram and Bovik 2015), KonIQ (Hosu et al. 2020), LIVEFB (Ying et al. 2020b), and SPAQ (Fang et al. 2020).
Dataset Splits | Yes | For each dataset, we use 80% of the data for training and 20% for testing. Finally, we evaluate the performance of the student model ΘN on the validation sets of both datasets DA and DB. (A minimal split sketch is given after the table.)
Hardware Specification | No | The paper does not specify the hardware used to run the experiments (e.g., GPU models, CPU types).
Software Dependencies | No | The paper mentions using the 'Adam optimizer' and 'smooth L1 loss' but does not provide specific version numbers for any software dependencies (e.g., Python, PyTorch, TensorFlow).
Experiment Setup | Yes | During training, we use the Adam optimizer and use the smooth L1 loss as the loss function. In the initial phase, we train for 9 epochs on the dataset DA, with 3 warmup epochs and a learning rate of 2 × 10⁻⁴, which is decreased by a factor of 0.1 every 3 epochs. In the incremental phase, we train for 6 epochs on blocks of the dataset DB, with an initial learning rate of 2 × 10⁻⁵, which is decreased by a factor of 0.1 every 2 epochs. The dataset DB is divided into 3 blocks. (A training-schedule sketch is given after the table.)
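
For concreteness, the 80/20 protocol quoted in the Dataset Splits row can be sketched as below. This is a minimal illustration only, assuming each dataset is available as a list of (image path, MOS score) pairs; the function name and seed handling are hypothetical and not taken from the paper.

```python
import random

def split_dataset(samples, train_ratio=0.8, seed=0):
    """Randomly partition one IQA dataset into train/test subsets.

    `samples` is assumed to be a list of (image_path, mos_score) pairs;
    the 80/20 ratio follows the protocol quoted in the Dataset Splits row.
    """
    rng = random.Random(seed)
    indices = list(range(len(samples)))
    rng.shuffle(indices)
    cut = int(train_ratio * len(indices))
    train = [samples[i] for i in indices[:cut]]
    test = [samples[i] for i in indices[cut:]]
    return train, test

# The same split would be applied independently to each of the eight datasets, e.g.:
# train_A, test_A = split_dataset(live_samples)
```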
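
The Experiment Setup row amounts to a standard two-phase training configuration. Below is a minimal PyTorch sketch under that reading: Adam, smooth L1 loss, and linear warmup followed by step decay by a factor of 0.1. The model, data loaders, and the exact interplay of warmup and decay are assumptions for illustration, not details confirmed by the paper.

```python
import torch
from torch import nn

def make_optimizer_and_scheduler(model, base_lr, warmup_epochs, decay_every):
    """Adam with linear warmup, then x0.1 step decay (one plausible reading of the setup)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=base_lr)

    def lr_lambda(epoch):
        if warmup_epochs and epoch < warmup_epochs:
            return (epoch + 1) / warmup_epochs                    # linear warmup
        return 0.1 ** ((epoch - warmup_epochs) // decay_every)    # decay by 0.1

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler


def train_phase(model, loader, epochs, base_lr, warmup_epochs, decay_every, device="cuda"):
    """Generic phase reused for both the initial (DA) and incremental (DB) stages."""
    criterion = nn.SmoothL1Loss()                                 # smooth L1 quality-regression loss
    optimizer, scheduler = make_optimizer_and_scheduler(
        model, base_lr, warmup_epochs, decay_every)
    model.to(device).train()
    for _ in range(epochs):
        for images, mos in loader:                                # (image batch, quality scores)
            optimizer.zero_grad()
            pred = model(images.to(device)).squeeze(-1)
            loss = criterion(pred, mos.to(device))
            loss.backward()
            optimizer.step()
        scheduler.step()


# Initial phase: 9 epochs on DA, lr 2e-4, 3 warmup epochs, x0.1 decay every 3 epochs.
# train_phase(student, loader_DA, epochs=9, base_lr=2e-4, warmup_epochs=3, decay_every=3)
#
# Incremental phase: DB is divided into 3 blocks; 6 epochs per block,
# lr 2e-5, x0.1 decay every 2 epochs (no warmup stated in the paper).
# for loader_block in db_block_loaders:
#     train_phase(student, loader_block, epochs=6, base_lr=2e-5, warmup_epochs=0, decay_every=2)
```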