An Unsupervised Information-Theoretic Perceptual Quality Metric
Authors: Sangnie Bhardwaj, Ian Fischer, Johannes Ballé, Troy Chinen
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that PIM is competitive with supervised metrics on the recent and challenging BAPPS image quality assessment dataset and outperforms them in predicting the ranking of image compression methods in CLIC 2020. |
| Researcher Affiliation | Industry | Sangnie Bhardwaj, Google Research, sangnie@google.com; Ian Fischer, Google Research, iansf@google.com; Johannes Ballé, Google Research, jballe@google.com; Troy Chinen, Google Research, tchinen@google.com |
| Pseudocode | No | The paper includes system diagrams in Figures 1 and 2, but it does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code available at https://github.com/google-research/perceptual-quality. |
| Open Datasets | Yes | We show that PIM is competitive with supervised metrics on the recent and challenging BAPPS image quality assessment dataset... We also perform qualitative experiments using the ImageNet-C dataset... We evaluate the performance of PIM on BAPPS, a dataset of human perceptual similarity judgements (R. Zhang et al., 2018)... CLIC 2020... ImageNet-C dataset (Hendrycks and Dietterich, 2019). |
| Dataset Splits | No | The paper evaluates PIM on external datasets like BAPPS and CLIC 2020, but it does not specify the train/validation/test splits for the unsupervised training of PIM itself. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'Adam' as an optimization algorithm, but it does not specify any software dependencies with version numbers (e.g., Python, TensorFlow, PyTorch versions). |
| Experiment Setup | No | The paper states, 'Note that implementation details regarding the optimization (e.g. optimization algorithm, learning rate) and pre-processing of the training dataset can be found in the appendix,' indicating that these specific experimental setup details are not in the provided main text. |
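
For readers who want to try the released metric, the repository referenced in the Open Source Code row provides pretrained PIM models. The sketch below is a minimal, hedged example: the module path `perceptual_quality.pim`, the loader `load_trained`, and the model name `"pim-5"` are assumptions based on the repository's documented usage pattern, so consult the README for the authoritative API.

```python
# Minimal sketch, assuming the repository's documented entry points.
# The names `perceptual_quality.pim`, `load_trained`, and "pim-5" are
# assumptions here; the repository README is authoritative.
import tensorflow as tf
from perceptual_quality import pim  # assumed module path

model = pim.load_trained("pim-5")  # assumed loader for a pretrained PIM variant

# Placeholder image batches in [0, 1]; real use would load actual images.
img_a = tf.random.uniform((1, 64, 64, 3))
img_b = tf.random.uniform((1, 64, 64, 3))

distance = model(img_a, img_b)  # assumed call signature; lower = perceptually closer
print(distance)
```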
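
The BAPPS evaluation mentioned in the Open Datasets row is a two-alternative forced-choice (2AFC) protocol: a metric is scored by how often its choice between two distorted versions of a reference agrees with human raters (R. Zhang et al., 2018). The sketch below illustrates that scoring rule with plain MSE standing in for PIM; PIM itself is not reproduced here, and the synthetic images and `human_pref` value are illustrative assumptions.

```python
# Sketch of BAPPS-style 2AFC scoring, with plain MSE standing in for PIM.
import numpy as np

def mse_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Stand-in distance; PIM would replace this with its learned metric."""
    return float(np.mean((a - b) ** 2))

def two_afc_score(ref, dist0, dist1, human_pref, metric=mse_distance):
    """Partial-credit 2AFC score against human judgments.

    `human_pref` is the fraction of raters who judged `dist1` closer to
    `ref` than `dist0`. The metric earns credit equal to the fraction of
    raters that agree with its choice.
    """
    prefers_1 = metric(ref, dist1) < metric(ref, dist0)
    return human_pref if prefers_1 else 1.0 - human_pref

# Illustrative synthetic triplet: dist1 is mildly noisy, dist0 heavily noisy.
rng = np.random.default_rng(0)
ref = rng.random((64, 64, 3))
dist0 = np.clip(ref + rng.normal(0.0, 0.20, ref.shape), 0.0, 1.0)
dist1 = np.clip(ref + rng.normal(0.0, 0.05, ref.shape), 0.0, 1.0)

# With 90% of (hypothetical) raters preferring dist1, MSE agrees and scores 0.9.
print(two_afc_score(ref, dist0, dist1, human_pref=0.9))
```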