Variational Laws of Visual Attention for Dynamic Scenes

Authors: Dario Zanca, Marco Gori

NeurIPS 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we report experimental results to validate the model in tasks of saliency detection." and "To quantitative evaluate how well our model predicts human fixations, we defined an experimental setup for salient detection both in images and in video."
Researcher Affiliation | Academia | Dario Zanca (DINFO, University of Florence; DIISM, University of Siena), dario.zanca@unifi.it; Marco Gori (DIISM, University of Siena), marco@diism.unisi.it
Pseudocode | Yes | Algorithm 1; "In the pseudo-code, P() is the acceptance probability and score() is computed as the average of NSS scores on the sample batch of 100 images." (see the first sketch after the table)
Open Source Code | No | The paper does not provide any explicit statements about open-source code release or links to a code repository.
Open Datasets | Yes | "We used images from MIT1003 [1], MIT300 [12] and CAT2000 [11], and video from SFU [27] eye-tracking database."
Dataset Splits | Yes | "Only a batch of a 100 images from CAT2000-TRAIN is used to perform the SA algorithm" and "A grid search over blur radius and center parameter σ have been used, in order to maximize AUC-Judd and NSS score on the training data of CAT2000 in the case of images, and on SFU in case of videos." (see the grid-search sketch after the table)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running experiments.
Software Dependencies | No | The paper states "we used standard functions of the python scientific library SciPy [21]" but does not provide a specific version number for SciPy or any other software dependencies.
Experiment Setup | Yes | "For images, we collected data by running the model 199 times, each run was randomly initialized almost at the center of the image and with a small random velocity, and integrated for a running time corresponding to 1 second of visual exploration." and "A grid search over blur radius and center parameter σ have been used, in order to maximize AUC-Judd and NSS score on the training data of CAT2000 in the case of images, and on SFU in case of videos." (see the run-loop sketch after the table)
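
The pseudocode row only names P() and score(). Below is a minimal Python sketch of what those two pieces could look like, assuming a standard NSS definition and a Metropolis-style acceptance rule; the temperature parameter and the exact acceptance form are assumptions, not details taken from the paper.

```python
import numpy as np

def nss(saliency_map, fixation_map):
    # Normalized Scanpath Saliency: mean of the standardized saliency
    # values taken at the human fixation locations.
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
    return s[fixation_map.astype(bool)].mean()

def score(saliency_maps, fixation_maps):
    # Average NSS over the sample batch (100 images in the paper).
    return np.mean([nss(s, f) for s, f in zip(saliency_maps, fixation_maps)])

def P(new_score, old_score, temperature):
    # Metropolis-style acceptance probability (assumption): always accept
    # an improvement, otherwise accept with probability exp(delta / T).
    if new_score >= old_score:
        return 1.0
    return float(np.exp((new_score - old_score) / temperature))
```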
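The dataset-splits and experiment-setup rows describe a grid search over a blur radius and a center parameter σ, tuned to maximize AUC-Judd and NSS on training data. The sketch below shows one plausible form of that search, assuming the raw maps are smoothed with a Gaussian blur (SciPy, which the paper cites) and weighted by a Gaussian center prior; center_prior, the parameter grids, and score_fn are illustrative assumptions.

```python
import itertools
import numpy as np
from scipy.ndimage import gaussian_filter

def center_prior(shape, sigma):
    # Isotropic Gaussian centered on the frame, used as a center-bias map
    # (an assumption about how the center parameter sigma enters).
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h / 2) ** 2 + (xs - w / 2) ** 2) / (2 * sigma ** 2))

def grid_search(raw_maps, fixation_maps, blur_radii, sigmas, score_fn):
    # Try every (blur radius, sigma) pair and keep the one that
    # maximizes the mean score on the training data.
    best_params, best_score = None, -np.inf
    for blur, sigma in itertools.product(blur_radii, sigmas):
        scores = []
        for raw, fix in zip(raw_maps, fixation_maps):
            sal = gaussian_filter(raw, sigma=blur) * center_prior(raw.shape, sigma)
            scores.append(score_fn(sal, fix))
        mean_score = float(np.mean(scores))
        if mean_score > best_score:
            best_params, best_score = (blur, sigma), mean_score
    return best_params, best_score
```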
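The experiment-setup row quotes 199 runs, each initialized almost at the image center with a small random velocity and integrated for 1 second of exploration. A sketch of that collection loop follows; step_fn, the time step dt, and the initialization scales are hypothetical stand-ins for the paper's attention dynamics, while runs=199 and duration=1.0 come from the quoted setup.

```python
import numpy as np

def collect_fixation_counts(step_fn, shape, runs=199, duration=1.0, dt=0.004):
    # Accumulate visited gaze positions over many randomly initialized runs.
    # step_fn(x, v, dt) -> (x, v) is a hypothetical stand-in for one
    # integration step of the attention dynamics.
    h, w = shape
    counts = np.zeros(shape)
    rng = np.random.default_rng(0)
    for _ in range(runs):
        # Start almost at the center of the image with a small random velocity.
        x = np.array([h / 2.0, w / 2.0]) + rng.normal(scale=2.0, size=2)
        v = rng.normal(scale=1.0, size=2)
        for _ in range(int(duration / dt)):
            x, v = step_fn(x, v, dt)
            i, j = np.clip(x, [0, 0], [h - 1, w - 1]).astype(int)
            counts[i, j] += 1
    return counts
```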