dc.contributor |
Institut de Robòtica i Informàtica Industrial |
dc.contributor.author |
Orozco, Francisco Javier |
dc.contributor.author |
Roca, Francesc Xavier |
dc.contributor.author |
González, Jordi |
dc.date |
2009 |
dc.identifier.citation |
Orozco, F.; Roca, F.; González, J. Real-time gaze tracking with appearance-based models. "Machine vision and applications", 2009, vol. 20, núm. 6, p. 353-364. |
dc.identifier.citation |
0932-8092 |
dc.identifier.citation |
10.1007/s00138-008-0130-6 |
dc.identifier.uri |
http://hdl.handle.net/2117/6529 |
dc.language.iso |
eng |
dc.relation |
http://dx.doi.org/10.1007/s00138-008-0130-6 |
dc.rights |
info:eu-repo/semantics/openAccess |
dc.subject |
Àrees temàtiques de la UPC::Informàtica::Robòtica |
dc.subject |
Gaze |
dc.subject |
Visió artificial (Robòtica) |
dc.title |
Real-time gaze tracking with appearance-based models |
dc.type |
info:eu-repo/semantics/submittedVersion |
dc.type |
info:eu-repo/semantics/article |
dc.description.abstract |
Psychological evidence has emphasized the importance of eye-gaze analysis in human-computer interaction and emotion interpretation. To this end, current image-analysis algorithms detect eyelid and iris motion using colour information and edge detectors. However, eye movements are fast, which makes precise and robust tracking difficult to obtain. Instead, our method describes eyelid and iris movements as continuous variables using appearance-based tracking. This approach combines the strengths of adaptive appearance models, optimization methods and backtracking techniques: textures are learned on-line from near-frontal images, and illumination changes, occlusions and fast movements are handled. The method achieves real-time performance by coupling two appearance-based trackers with a backtracking algorithm, one for eyelid estimation and another for iris estimation. These contributions represent a significant advance towards a reliable description of gaze motion for HCI and expression analysis, in which the strengths of complementary methodologies are combined to avoid the need for high-quality images, colour information, texture training, camera settings and other time-consuming processes. |
dc.description.abstract |
Peer Reviewed |