Conference paper, 2016

How saccadic models help predict where we look during a visual task? Application to visual quality assessment

Abstract

In this paper, we present saccadic models, an alternative way to predict where observers look. Unlike saliency models, saccadic models generate plausible visual scanpaths, from which saliency maps can be computed. These models also have the advantage of being adaptable to different viewing conditions, viewing tasks, and types of visual scene. We demonstrate that saccadic models outperform existing saliency models at predicting where an observer looks in both the free-viewing condition and the quality-task condition (i.e., when observers have to score the quality of an image). To this end, the joint distributions of saccade amplitudes and orientations in both conditions were estimated from eye-tracking data. Thanks to saccadic models, we hope to improve the performance of saliency-based quality metrics and, more generally, our ability to predict where we look within visual scenes when performing visual tasks.
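To make the idea concrete, the sketch below is a minimal, illustrative Python implementation of the mechanism the abstract describes: the next fixation is sampled by weighting a bottom-up saliency map with a joint distribution of saccade amplitudes and orientations (one distribution per viewing condition), and a saliency map is then recovered from the generated scanpaths. This is not the authors' code; all function names and parameters are assumptions, and refinements such as memory and inhibition of return are omitted.

```python
# Illustrative sketch of a saccadic model, not the authors' implementation.
import numpy as np
from scipy.ndimage import gaussian_filter

def generate_scanpath(saliency, p_sacc, n_fixations=10, rng=None):
    """Sample one plausible scanpath over an image.

    saliency : 2-D array, bottom-up saliency per pixel.
    p_sacc   : 2-D histogram p(amplitude_bin, orientation_bin) estimated
               from eye-tracking data for a given viewing condition
               (free viewing or quality task).
    """
    rng = rng or np.random.default_rng()
    h, w = saliency.shape
    n_amp, n_ori = p_sacc.shape
    max_amp = np.hypot(h, w)           # amplitude bins span the image diagonal
    ys, xs = np.mgrid[0:h, 0:w]
    fx, fy = w / 2.0, h / 2.0          # start at the screen centre
    scanpath = [(fx, fy)]
    for _ in range(n_fixations - 1):
        # Amplitude and orientation of the saccade landing on each pixel.
        dx, dy = xs - fx, ys - fy
        amp = np.hypot(dx, dy)
        ori = np.arctan2(dy, dx)       # in [-pi, pi]
        a_bin = np.minimum((amp / max_amp * n_amp).astype(int), n_amp - 1)
        o_bin = np.minimum(((ori + np.pi) / (2 * np.pi) * n_ori).astype(int),
                           n_ori - 1)
        # Weight bottom-up saliency by the condition-specific oculomotor
        # prior, then sample the next fixation from the result.
        weights = (saliency * p_sacc[a_bin, o_bin]).ravel()
        idx = rng.choice(weights.size, p=weights / weights.sum())
        fy, fx = np.unravel_index(idx, (h, w))
        scanpath.append((float(fx), float(fy)))
    return scanpath

def scanpaths_to_saliency(scanpaths, shape, sigma=15.0):
    """Turn a set of generated scanpaths back into a saliency map."""
    counts = np.zeros(shape)
    for path in scanpaths:
        for fx, fy in path:
            counts[int(fy), int(fx)] += 1
    smoothed = gaussian_filter(counts, sigma)  # smoothed fixation density
    return smoothed / smoothed.max()
```

In this sketch, swapping in a different p_sacc (e.g., one estimated from quality-task data rather than free viewing) is what adapts the model to a new viewing condition; a map for a whole image would be built by averaging many sampled scanpaths, e.g. scanpaths_to_saliency([generate_scanpath(bu_sal, p_task) for _ in range(20)], bu_sal.shape).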
Main file: LeMeur_SPIE2016.pdf (1.87 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01391750, version 1 (03-11-2016)

Identifiers

  • HAL Id: hal-01391750, version 1

Cite

Olivier Le Meur, Antoine Coutrot. How saccadic models help predict where we look during a visual task? Application to visual quality assessment. SPIE Image Quality and System Performance, Feb 2016, San Francisco, United States. ⟨hal-01391750⟩
387 views
214 downloads
