Computing Publications


Detecting facial actions and their temporal segments in nearly frontal-view face image sequences

Maja Pantic, Ioannis Patras

Conference or Workshop Paper
IEEE Int'l Conf. on Systems, Man and Cybernetics 2005
October, 2005
pp. 3358–3363
IEEE
ISBN 0-7803-9298-X
Abstract

The recognition of facial expressions in image sequences is a difficult problem with many applications in human-machine interaction. Facial expression analyzers achieve good recognition rates, but virtually all of them deal only with prototypic facial expressions of emotion and cannot handle the temporal dynamics of facial displays. The method presented here attempts to handle a large range of human facial behavior by recognizing facial action units (AUs) and the temporal segments (i.e., onset, apex, offset) that produce expressions. We exploit particle filtering to track 20 facial points in an input face video, and we introduce AU-dynamics recognition using temporal rules. When tested on the Cohn-Kanade and MMI facial expression databases, the proposed method achieved a recognition rate of 90% when detecting 27 AUs occurring alone or in combination in an input face image sequence.
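The abstract describes two components: particle-filter tracking of 20 facial points and rule-based recognition of each AU's temporal segments (onset, apex, offset). The sketch below is not the paper's rule set; it only illustrates the general idea of temporal-rule segmentation applied to a single tracked-point displacement signal, with hypothetical thresholds and a synthetic trajectory.

```python
# Minimal sketch of rule-based temporal segmentation of a facial action.
# NOTE: these rules and thresholds are illustrative assumptions, not the
# rules used by Pantic & Patras.

def segment_au(displacement, rise_thresh=0.05, flat_thresh=0.02):
    """Label each frame as 'neutral', 'onset', 'apex', or 'offset'.

    displacement -- per-frame distance of a tracked facial point from its
                    position in the first (assumed neutral) frame.
    """
    labels = []
    for t in range(len(displacement)):
        prev = displacement[t - 1] if t > 0 else displacement[0]
        delta = displacement[t] - prev           # frame-to-frame change
        if displacement[t] < flat_thresh:
            labels.append("neutral")             # point near its rest position
        elif delta > rise_thresh:
            labels.append("onset")               # displacement still increasing
        elif delta < -rise_thresh:
            labels.append("offset")              # displacement decreasing
        else:
            labels.append("apex")                # high, roughly stable displacement
    return labels


if __name__ == "__main__":
    # Synthetic eyebrow-raise trajectory: rise, hold, fall back to neutral.
    signal = [0.0, 0.0, 0.2, 0.4, 0.6, 0.6, 0.6, 0.4, 0.2, 0.0]
    print(list(zip(signal, segment_au(signal))))
```

In the paper, the analogous rules operate on the trajectories of all 20 tracked points rather than a single scalar signal, but the same onset/apex/offset labeling per frame is the output.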

