Beyond Universal Saliency: Personalized Saliency Prediction with Multi-task CNN

Authors: Yanyu Xu, Nianyi Li, Junru Wu, Jingyi Yu, Shenghua Gao

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Supporting quotes: "Comprehensive experiments demonstrate that our new PSM model and prediction scheme are effective and reliable."; "5 Experiments, 5.1 Experimental Setup, Parameters: We implement our solution on the CAFFE framework [Jia et al., 2014]."; "The performance of all methods are listed in Table 2."
Researcher Affiliation | Collaboration | Yanyu Xu1, Nianyi Li2,3, Junru Wu1, Jingyi Yu1,3, and Shenghua Gao1. 1ShanghaiTech University, Shanghai, China; 2University of Delaware, Newark, DE, USA; 3Plex-VR Digital Technology Co., Ltd.
Pseudocode | No | The paper describes the Multi-task CNN architecture in text and via Figure 4, but it does not include any labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any link or statement indicating that the source code for the method is publicly available.
Open Datasets | Yes | "To do so, we first analyze existing datasets. ... 1,100 images are chosen from existing saliency detection datasets including SALICON [Jiang et al., 2015], ImageNet [Russakovsky et al., 2015], iSUN [Xu et al., 2015], OSIE [Xu et al., 2014], PASCAL-S [Li et al., 2014]"
Dataset Splits | No | "In our experiments, we randomly select 600 images as training data, and use the rest 1,000 images for testing." The paper does not explicitly mention a separate validation split or its size.
Hardware Specification | No | The paper states "We implement our solution on the CAFFE framework [Jia et al., 2014]" but gives no specific details about the hardware (e.g., GPU/CPU models, memory) used for the experiments.
Software Dependencies | No | The paper mentions implementing the solution on "the CAFFE framework [Jia et al., 2014]" but does not specify version numbers for CAFFE or any other software dependencies.
Experiment Setup | Yes | "We train our network with the following hyper-parameters setting: mini-batch size (40), learning rate (0.0003), momentum (0.9), weight decay (0.0005), and number of iterations (40,000)."
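The reported hyper-parameters map onto a Caffe solver configuration roughly as follows. This is a sketch, not the authors' file: the net path and snapshot settings are placeholders, the learning-rate policy is assumed fixed since the paper does not state one, and the mini-batch size of 40 would be set in the data layer of the net definition rather than in the solver.

```protobuf
# Hypothetical solver.prototxt consistent with the reported settings.
net: "models/psm_multitask_train.prototxt"  # placeholder path
base_lr: 0.0003       # learning rate (0.0003)
lr_policy: "fixed"    # assumption; the paper does not report a schedule
momentum: 0.9         # momentum (0.9)
weight_decay: 0.0005  # weight decay (0.0005)
max_iter: 40000       # number of iterations (40,000)
```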
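For concreteness, the reported 600/1,000 train/test split could be reproduced roughly as below. This is a sketch under stated assumptions: the paper gives only the two split sizes (implying 1,600 images in total), and the image identifiers and random seed here are placeholders, not the authors' actual split.

```python
import random

# Hypothetical image identifiers; the paper's dataset contains the images
# themselves, and the 1,600 total is implied by the 600 + 1,000 split.
image_ids = [f"img_{i:04d}" for i in range(1600)]

random.seed(0)  # seed is an assumption; the paper does not report one
random.shuffle(image_ids)

train_ids = image_ids[:600]   # "randomly select 600 images as training data"
test_ids = image_ids[600:]    # "use the rest 1,000 images for testing"
```

No validation split is carved out, matching the report above that the paper mentions none.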