An Integrated Model for Effective Saliency Prediction

Authors: Xiaoshuai Sun, Zi Huang, Hongzhi Yin, Heng Tao Shen

AAAI 2017

Reproducibility Variable Result LLM Response
Research Type Experimental Experimental results on artificial images and several benchmark dataset demonstrate the superior performance and better plausibility of the proposed model over both classic approaches and recent deep models.
Researcher Affiliation Academia Xiaoshuai Sun,1,2 Zi Huang,1 Hongzhi Yin,1 Heng Tao Shen1,3 1The University of Queensland, Brisbane 4067, Australia. 2Harbin Institute of Technology, Heilongjiang 150001, China. 3University of Electronic Science and Technology of China, Chengdu 611731, China.
Pseudocode Yes
Algorithm 1 Maxima Normalization Nmax(S, t)
Input: 2D intensity map S, threshold of local maxima t = 0.1
Output: Normalized Saliency Map SN
1: Set the number of maxima NM = 0
2: Set the sum of the maxima VM = 0
3: Set Global Maxima GM = max(S)
4: for all pixels (x, y) of S do
5:   if S(x, y) > t then
6:     R = {S(i, j) | i = x-1, x+1; j = y-1, y+1}
7:     if S(x, y) > max(R) then
8:       VM = VM + S(x, y)
9:       NM = NM + 1
10:    end if
11:  end if
12: end for
13: SN = S * (GM - VM/NM)^2 / GM
14: return Normalized map SN
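Algorithm 1 can be sketched in NumPy as follows. This is a minimal reading of the pseudocode, not the authors' code: it treats R as the 8-neighbourhood of (x, y), scans interior pixels only (the pseudocode does not specify border handling), and falls back to returning the map unchanged when no local maximum exceeds the threshold.

```python
import numpy as np

def maxima_normalization(S, t=0.1):
    """Nmax(S, t): scale a 2D saliency map by the squared gap between
    its global maximum and the mean of its local maxima (Algorithm 1)."""
    S = np.asarray(S, dtype=float)
    GM = S.max()       # global maximum
    VM = 0.0           # sum of local maxima above threshold t
    NM = 0             # number of such maxima
    H, W = S.shape
    # Interior pixels only, so the 8-neighbourhood is always in bounds.
    for x in range(1, H - 1):
        for y in range(1, W - 1):
            if S[x, y] > t:
                R = S[x - 1:x + 2, y - 1:y + 2].copy()
                R[1, 1] = -np.inf  # exclude the centre pixel itself
                if S[x, y] > R.max():
                    VM += S[x, y]
                    NM += 1
    if NM == 0:
        return S.copy()  # assumption: no maxima found, leave map unchanged
    return S * (GM - VM / NM) ** 2 / GM
```

A map dominated by one strong peak keeps most of its mass (the gap GM - VM/NM is large), while a map with many comparable peaks is suppressed, which is the intended promotion of unique maxima.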
Open Source Code No The paper mentions that the DPN model (a baseline) is open source, but it does not provide any statement or link for the open-source code of the proposed SCA model.
Open Datasets Yes We used the data from SALICON dataset to train our SAS net. SALICON is currently the largest dataset available for saliency prediction task, which provides 10K images for training and 5K images for testing. [...] SALICON (Jiang et al. 2015).
Dataset Splits Yes We train our network based on 9K images from the training set, and use the rest 1K for validation.
Hardware Specification No The paper does not provide any specific details about the hardware used for running the experiments.
Software Dependencies No The paper mentions using VGG and Fast ICA but does not provide specific version numbers for software dependencies used in its implementation, nor does it list programming language or library versions.
Experiment Setup Yes For training, we adopted stochastic gradient descent with Euclidean loss using a batch size of 2 images for 24K iterations. L2 weight regularizer was used for weight decay and the learning rate was halved after every 100 iterations.
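The stated optimization setup (SGD with Euclidean loss, batch size 2, L2 weight decay, learning rate halved every 100 iterations) can be sketched as below. The paper does not name a framework, the base learning rate, the weight-decay coefficient, or the network internals, so the PyTorch choice, the `1e-3` / `5e-4` values, and the one-layer stand-in model are all assumptions for illustration; the run is also truncated from the paper's 24K iterations.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the SAS net; the paper's VGG-based
# architecture is not reproduced here.
model = nn.Conv2d(3, 1, kernel_size=3, padding=1)

# Euclidean loss == mean squared error.
criterion = nn.MSELoss()
base_lr = 1e-3  # assumption: initial learning rate is not stated in the paper
optimizer = torch.optim.SGD(model.parameters(), lr=base_lr,
                            weight_decay=5e-4)  # L2 weight regularizer
# Learning rate halved after every 100 iterations.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.5)

for it in range(200):  # 24K iterations in the paper; shortened here
    images = torch.randn(2, 3, 64, 64)   # batch size of 2 images
    targets = torch.rand(2, 1, 64, 64)   # ground-truth saliency maps
    optimizer.zero_grad()
    loss = criterion(model(images), targets)
    loss.backward()
    optimizer.step()
    scheduler.step()  # per-iteration step, matching the 100-iteration halving
```

Calling `scheduler.step()` once per iteration (rather than per epoch) is what makes `step_size=100` correspond to the paper's "halved after every 100 iterations".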