dc.contributor
Universitat Autònoma de Barcelona
dc.contributor
University of Tartu
dc.contributor
Institute of Physiology and Pathology of Hearing
dc.contributor
Hasan Kalyoncu University
dc.contributor
Universitat Oberta de Catalunya (UOC)
dc.contributor.author
Kulkarni, Kaustubh
dc.contributor.author
Corneanu, Ciprian
dc.contributor.author
Ofodile, Ikechukwu
dc.contributor.author
Escalera Guerrero, Sergio
dc.contributor.author
Baró Solé, Xavier
dc.contributor.author
Hyniewska, Sylwia
dc.contributor.author
Allik, Jüri
dc.contributor.author
Anbarjafari, Gholamreza
dc.date
2019-04-15T11:37:16Z
dc.identifier.citation
Kulkarni, K., Corneanu, C., Ofodile, I., Escalera Guerrero, S., Baró Solé, X., Hyniewska, S., Allik, J. & Anbarjafari, G. (2018). Automatic recognition of facial displays of unfelt emotions. IEEE Transactions on Affective Computing. doi: 10.1109/TAFFC.2018.2874996
dc.identifier.issn
1949-3045
dc.identifier.issn
2371-9850
dc.identifier.doi
10.1109/TAFFC.2018.2874996
dc.identifier.uri
http://hdl.handle.net/10609/93201
dc.description.abstract
Humans modify their facial expressions to communicate their internal states and sometimes to mislead observers about their true emotional states. Evidence from experimental psychology shows that discriminative facial responses are short and subtle. This suggests that such behaviour would be easier to distinguish when captured in high resolution at an increased frame rate. We propose SASE-FE, the first dataset of facial expressions that are either congruent or incongruent with the underlying emotional state. We show that the problem of recognizing whether facial movements express authentic emotions or not can be successfully addressed by learning spatio-temporal representations of the data. For this purpose, we propose a method that aggregates features along fiducial trajectories in a deeply learnt space. The performance of the proposed model shows that, on average, genuine facial expressions of emotion are easier to distinguish than unfelt ones, and that certain emotion pairs, such as contempt and disgust, are more difficult to distinguish than the rest. Furthermore, the proposed methodology improves state-of-the-art results on the CK+ and Oulu-CASIA datasets for video emotion recognition, and achieves competitive results when classifying facial action units on the BP4D dataset.
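Note: the core technical idea in the abstract is aggregating per-frame deep features along fiducial (facial landmark) trajectories into a single spatio-temporal descriptor. The following is a minimal sketch of that idea, not the authors' implementation; the function name, feature shapes, and random stand-in inputs are all assumptions for illustration.

import numpy as np

def aggregate_along_trajectories(feature_maps, landmarks):
    """Pool deep features sampled at fiducial points across time.

    feature_maps: (T, H, W, C) per-frame CNN feature maps (assumed given).
    landmarks:    (T, K, 2) tracked fiducial points per frame, as
                  (row, col) coordinates in feature-map space.
    Returns a (K * C,) descriptor: the temporal mean of the features
    along each landmark trajectory, concatenated over landmarks.
    """
    T, H, W, C = feature_maps.shape
    K = landmarks.shape[1]
    traj = np.empty((T, K, C))
    for t in range(T):
        for k in range(K):
            r, c = landmarks[t, k]
            r = int(np.clip(r, 0, H - 1))  # keep indices inside the map
            c = int(np.clip(c, 0, W - 1))
            traj[t, k] = feature_maps[t, r, c]
    return traj.mean(axis=0).reshape(-1)  # mean-pool over time, flatten

# Toy usage with random stand-ins for CNN features and tracked landmarks.
rng = np.random.default_rng(0)
feats = rng.standard_normal((30, 14, 14, 64))   # 30 frames of 14x14x64 features
pts = rng.integers(0, 14, size=(30, 68, 2))     # 68 fiducial points per frame
descriptor = aggregate_along_trajectories(feats, pts)
print(descriptor.shape)                          # (4352,) = 68 landmarks * 64 channels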
dc.format
application/pdf
dc.publisher
IEEE
dc.relation
IEEE Transactions on Affective Computing, 2018
dc.relation
http://arxiv.org/pdf/1707.04061
dc.rights
(c) Author(s) & (c) Journal
dc.rights
info:eu-repo/semantics/openAccess
dc.subject
affective computing
dc.subject
facial expression recognition
dc.subject
unfelt facial expression of emotion
dc.subject
human behaviour analysis
dc.subject
computación afectiva
dc.subject
reconocimiento de la expresión facial
dc.subject
expresión facial sin emoción
dc.subject
análisis del comportamiento humano
dc.subject
computació afectiva
dc.subject
reconeixement d'expressió facial
dc.subject
expressió facial sense emoció
dc.subject
anàlisi del comportament humà
dc.subject
Human face recognition (Computer science)
dc.subject
Reconeixement facial (Informàtica)
dc.subject
Reconocimiento facial (Informática)
dc.title
Automatic recognition of facial displays of unfelt emotions
dc.type
info:eu-repo/semantics/article
dc.type
info:eu-repo/semantics/submittedVersion