Understanding Image Impressiveness Inspired by Instantaneous Human Perceptual Cues

Authors: Jufeng Yang, Yan Sun, Jie Liang, Yong-Liang Yang, Ming-Ming Cheng

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this paper, we propose a novel image property, called impressiveness, that measures how images impress people with a short-term contact. This is based on an impression-driven model inspired by a number of important human perceptual cues. To achieve this, we first collect three datasets in various domains, which are labeled according to the instantaneous sensation of the annotators. Then we investigate the impressiveness property via six established human perceptual cues as well as the corresponding features from pixel to semantic levels. Sequentially, we verify the consistency of the impressiveness which can be quantitatively measured by multiple visual representations, and evaluate their latent relationships. Finally, we apply the proposed impressiveness property to rank the images for an efficient image recommendation system.
Researcher Affiliation | Academia | Jufeng Yang (1), Yan Sun (1), Jie Liang (1), Yong-Liang Yang (2), Ming-Ming Cheng (1); (1) College of Computer and Control Engineering, Nankai University, No.38 Tongyan Road, Tianjin, China; (2) Department of Computer Science, University of Bath, Claverton Down, Bath, United Kingdom
Pseudocode | No | The paper describes its methods textually but does not include any explicit pseudocode blocks or algorithm listings.
Open Source Code | No | The paper contains no statement or link indicating that source code for the described methodology is publicly available.
Open Datasets | No | The authors created their own datasets (Flickr Imp, News Imp, Trip Imp) and mention that 'The example images from the three datasets are shown in the supplemental material.' However, there is no link, DOI, or explicit statement indicating that the full datasets are publicly available for reproduction.
Dataset Splits | No | The paper mentions 'We train classifiers on 10 random splits for each of the three datasets' but does not give the percentages or counts for the training, validation, and test portions, nor a random seed that would make the splits reproducible.
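The paper's split protocol cannot be reconstructed exactly from the text, but a reproducible version of "10 random splits" would look something like the sketch below. The 80/20 train/test ratio and the per-split seeding scheme are illustrative assumptions, not details from the paper.

```python
import random

def make_splits(n_items, n_splits=10, train_frac=0.8, base_seed=0):
    """Generate reproducible random train/test splits.

    The 80/20 ratio and the base_seed scheme are assumptions for
    illustration; the paper specifies neither.
    """
    splits = []
    for i in range(n_splits):
        rng = random.Random(base_seed + i)  # one fixed seed per split
        idx = list(range(n_items))
        rng.shuffle(idx)
        cut = int(train_frac * n_items)
        splits.append((idx[:cut], idx[cut:]))
    return splits
```

Publishing the `base_seed` (or the index lists themselves) alongside the datasets would make the reported numbers directly verifiable.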
Hardware Specification | No | The paper does not include any specific details about the hardware (e.g., CPU, GPU models, memory, or cloud instances) used for running the experiments.
Software Dependencies | No | The paper mentions various techniques and tools (e.g., LBP, HOG, Senti Bank, Object Bank, Caffe Net, Emo Net, MKL), but it does not specify version numbers for any of these software components, which would be needed for a fully reproducible description.
Experiment Setup | Yes | We examine the Gaussian kernel with variances [1 3 5 7 10 12 15 17 20] and the polynomial kernel with degrees [1 2 3 4] for efficient performance.