Computational modeling of visual attention is a research field focused on emulating the behavior of biological visual systems in a given scenario, using mechanisms developed for fixation prediction or salient region detection. In the literature, different approaches have been presented to emulate the interactions that occur in the early visual system of biological structures. However, mathematical modeling of these systems using theories related to fractional operators could outperform the existing models. In this paper, we present a fractional bio-inspired filter for salient color detection in natural scenarios, based on the behavior and distribution of the cone photoreceptor cells in the retina. The filter was compared with two classic saliency algorithms over a natural color image dataset in terms of saliency prediction and processing time, using the Similarity (SIM) score and runtime performance, respectively. Our approach reaches the second-best result in terms of saliency prediction, with 48.9% SIM against ground-truth fixation maps, and the fastest time response, with an average time of 0.12 s when processing a high-resolution image, being 25% faster than the Itti et al. algorithm, one of the most widely applied in robotic vision tasks.
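For concreteness, the Similarity (SIM) score used above is commonly defined as the histogram intersection of the predicted saliency map and the ground-truth fixation map, each normalized to a probability distribution. The sketch below is a minimal, illustrative implementation; the function name and the random example maps are our own, not from the paper.

```python
import numpy as np

def sim_score(saliency_map, fixation_map):
    """Histogram intersection (SIM) between a predicted saliency map and a
    ground-truth fixation map, after normalizing each to sum to 1."""
    p = saliency_map.astype(np.float64)
    q = fixation_map.astype(np.float64)
    p /= p.sum()
    q /= q.sum()
    # SIM is the sum of the element-wise minimum of the two distributions:
    # 1.0 for identical maps, 0.0 for maps with disjoint support.
    return np.minimum(p, q).sum()

# Illustrative example on small random maps (not real saliency data)
rng = np.random.default_rng(0)
pred = rng.random((4, 4))
gt = rng.random((4, 4))
print(sim_score(pred, gt))    # some value in [0, 1]
print(sim_score(pred, pred))  # identical maps -> 1.0
```

A SIM of 48.9%, as reported above, thus means that roughly half of the normalized saliency mass overlaps with the ground-truth fixation distribution.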
Visual Attention Models are usually tested on collections of natural images that contain intentionally salient objects and obvious context information. On the other hand, few algorithms in the literature have considered datasets without context information for modeling attention. Moreover, Visual Attention Models have not been thoroughly evaluated in both contextless and context-aware environments. In this paper, we compare the performance of several well-known Bottom-Up visual attention models on contextless and context-aware datasets, using the Pearson Correlation Coefficient to assess the efficiency of each Visual Attention Model in terms of accuracy and eye-fixation prediction. The best algorithm outperforms the others, reaching 59.1% and 43.8% correlation with ground-truth information on the contextless and context-aware datasets, respectively.
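The Pearson Correlation Coefficient mentioned above is typically computed between the flattened predicted saliency map and a ground-truth fixation density map. A minimal sketch, with illustrative names and random example data not taken from the paper:

```python
import numpy as np

def pearson_cc(saliency_map, fixation_map):
    """Pearson correlation coefficient between a predicted saliency map
    and a ground-truth fixation density map, both flattened to vectors."""
    p = saliency_map.astype(np.float64).ravel()
    q = fixation_map.astype(np.float64).ravel()
    # np.corrcoef returns the 2x2 correlation matrix; the off-diagonal
    # entry is the coefficient between the two flattened maps.
    return np.corrcoef(p, q)[0, 1]

# Illustrative example (random data, not real fixation maps)
rng = np.random.default_rng(1)
pred = rng.random((8, 8))
print(pearson_cc(pred, pred))        # perfectly correlated -> 1.0
print(pearson_cc(pred, 1.0 - pred))  # anti-correlated -> -1.0
```

Under this metric, the reported 59.1% and 43.8% correspond to correlation coefficients of 0.591 and 0.438 against the ground-truth maps.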