Knowledge Guided Semi-supervised Learning for Quality Assessment of User Generated Videos

Authors: Shankhanil Mitra, Rajiv Soundararajan

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct several experiments on multiple cross and intra datasets to validate the performance of our proposed framework. In this section, we describe the implementation details, experimental setup, and comparisons with other methods.
Researcher Affiliation | Academia | Visual Information Processing Lab, Indian Institute of Science, Bengaluru. {shankhanilm, rajivs}@iisc.ac.in
Pseudocode | No | The paper describes algorithms and processes in narrative text and with the aid of figures, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Source codes and checkpoints are available at https://github.com/Shankhanil006/SSL-VQA.
Open Datasets | Yes | We learn our self-supervised ST-VQRL model on a set of synthetically distorted UGC videos. We randomly sample 200 videos out of 28056 training videos of the LIVE-FB Large-Scale Social Video Quality (LSVQ) (Ying et al. 2021) database. ... we use a set of 60 pristine videos from the LIVE-VQA (Seshadrinathan et al. 2010), LIVE Mobile (Moorthy et al. 2012), CSIQ VQD (Vu and Chandler 2014), EPFL-PoliMI (De Simone et al. 2010), and ECVQ and EVVQ (Rimac-Drlje, Vranješ, and Žagar 2010) databases. ... We evaluate SSL-VQA by finetuning it for specific VQA tasks. In general, we adapt our model on four smaller VQA datasets, viz. KoNViD-1k (Hosu et al. 2017), LIVE VQC (Sinno and Bovik 2019), YouTube-UGC (Wang, Inguva, and Adsumilli 2019), and LIVE Qualcomm (Ghadiyaram et al. 2018). (A hedged sketch of the 200-video sampling step follows this table.)
Dataset Splits | No | The paper mentions training on a subset of LSVQ and using other datasets for testing and finetuning, but it does not explicitly specify a separate validation dataset or split percentages for hyperparameter tuning. Although it states that λc and λu are "chosen to be 1 based on training loss convergence", it does not define a validation set used to monitor that convergence.
Hardware Specification | Yes | SSL-VQA and all other benchmarking methods were trained in Python 3.8 using PyTorch 2.0 on a 24 GB NVIDIA RTX 3090 GPU.
Software Dependencies | Yes | SSL-VQA and all other benchmarking methods were trained in Python 3.8 using PyTorch 2.0 on a 24 GB NVIDIA RTX 3090 GPU. (An environment-check sketch follows this table.)
Experiment Setup | Yes | We train ST-VQRL using AdamW (Loshchilov and Hutter 2019) with a learning rate of 10⁻⁴ and a weight decay of 0.05 for 30 epochs. The temperature coefficient τ mentioned in Equation (2) is 10. In both cases we train the model for 30 epochs using AdamW (Loshchilov and Hutter 2019) with a learning rate of 10⁻⁴ and a weight decay of 0.05. λc and λu are chosen to be 1 based on training loss convergence. (A training-setup sketch follows this table.)
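
The 200-video LSVQ sampling quoted in the Open Datasets entry can be illustrated with a short script. This is a minimal sketch under stated assumptions, not the authors' code: the metadata path `LSVQ/labels_train.csv`, the column name `name`, and the fixed seed are all hypothetical, since the paper does not specify them.

```python
# Hedged sketch: randomly pick 200 of the 28056 LSVQ training videos for
# self-supervised ST-VQRL training. File layout and column name are guesses.
import random

import pandas as pd

LSVQ_TRAIN_CSV = "LSVQ/labels_train.csv"  # hypothetical metadata file

df = pd.read_csv(LSVQ_TRAIN_CSV)
assert len(df) == 28056, "expected the full LSVQ training split"

rng = random.Random(0)  # seed is our choice; the paper states none
subset = rng.sample(df["name"].tolist(), 200)
print(f"selected {len(subset)} videos, e.g. {subset[:3]}")
```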
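
Because the Hardware Specification and Software Dependencies entries both rest on one sentence (Python 3.8, PyTorch 2.0, a 24 GB RTX 3090), a quick check like the one below can confirm that a reproduction machine matches it. This is a generic sanity check, not part of the SSL-VQA repository.

```python
# Sanity-check the reported environment: Python 3.8, PyTorch 2.0, 24 GB RTX 3090.
import sys

import torch

print("python :", sys.version.split()[0])  # expect 3.8.x
print("torch  :", torch.__version__)       # expect 2.0.x
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("gpu    :", props.name)          # expect an NVIDIA GeForce RTX 3090
    print("vram   : %.1f GB" % (props.total_memory / 1024**3))  # roughly 24 GB
```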
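
The Experiment Setup entry pins down the optimizer, schedule length, and loss hyperparameters. The sketch below wires those numbers into a PyTorch training skeleton; the model, data loader, and loss composition are placeholders, and treating λc and λu as loss weights is our reading of the report, not code from the authors.

```python
# Hedged training-setup sketch using only the reported hyperparameters:
# AdamW, lr 1e-4, weight decay 0.05, 30 epochs, tau = 10, lambda_c = lambda_u = 1.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(128, 1)  # stand-in for the ST-VQRL network

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)

TAU = 10.0        # temperature coefficient from Equation (2) of the paper
LAMBDA_C = 1.0    # loss weight, "chosen to be 1 based on training loss convergence"
LAMBDA_U = 1.0    # loss weight, same criterion

for epoch in range(30):              # "for 30 epochs"
    for features, target in []:      # placeholder for the real data loader
        optimizer.zero_grad()
        pred = model(features)
        loss = F.mse_loss(pred, target)  # placeholder supervised term
        # The full objective also carries lambda_c- and lambda_u-weighted terms;
        # their exact form is not quoted in this report, so they are omitted here.
        loss.backward()
        optimizer.step()
```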