The recognition of facial expressions in image sequences is a difficult problem with many applications in human-machine interaction. Existing facial expression analyzers achieve good recognition rates, but virtually all of them deal only with prototypic facial expressions of emotion and cannot handle the temporal dynamics of facial displays. The method presented here handles a wide range of human facial behavior by recognizing the facial action units (AUs), and their temporal segments (i.e., onset, apex, offset), that produce expressions. We exploit particle filtering to track 20 facial points in an input face video, and we introduce temporal-rule-based recognition of AU dynamics. When tested on the Cohn-Kanade and MMI facial expression databases, the proposed method achieved a recognition rate of 90% in detecting 27 AUs occurring alone or in combination in an input face image sequence.
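The point tracking mentioned above can be illustrated with a minimal bootstrap particle filter for a single 2-D facial point. This is a sketch under simplifying assumptions (Gaussian random-walk motion model, Gaussian observation likelihood, multinomial resampling); the actual method tracks 20 points with a richer observation model, and all parameter values here are illustrative.

```python
import numpy as np

def particle_filter_track(observations, n_particles=500, motion_std=2.0,
                          obs_std=3.0, seed=0):
    """Track one 2-D point through a sequence of noisy (x, y) observations.

    Bootstrap particle filter: predict with a Gaussian random walk, weight
    each particle by a Gaussian likelihood of the current observation, take
    the weighted mean as the estimate, then resample to avoid degeneracy.
    """
    rng = np.random.default_rng(seed)
    # Initialize the particle cloud around the first observation.
    particles = observations[0] + rng.normal(0, motion_std, size=(n_particles, 2))
    track = []
    for z in observations:
        # Predict: propagate particles with random-walk motion noise.
        particles = particles + rng.normal(0, motion_std, size=particles.shape)
        # Update: weight by Gaussian likelihood of the observed position.
        sq_dist = np.sum((particles - z) ** 2, axis=1)
        weights = np.exp(-sq_dist / (2 * obs_std ** 2))
        weights /= weights.sum()
        # Estimate: weighted mean of the particle cloud.
        track.append(weights @ particles)
        # Resample: draw particles in proportion to their weights.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
    return np.array(track)
```

Run on a point moving linearly with additive observation noise, the filter's estimates stay close to the true trajectory while smoothing the noise.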
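The temporal segments of an AU activation (onset, apex, offset) can also be sketched with simple rules over a per-frame intensity signal. The thresholds (`rise`, `apex_level`) and the intensity input are hypothetical; the paper's rules operate on tracked facial-point features, and this only illustrates what the segment labels mean.

```python
def segment_au(intensities, rise=0.1, apex_level=0.7):
    """Label each frame of an AU intensity signal in [0, 1] as one of
    'neutral', 'onset', 'apex', or 'offset' using simple temporal rules.

    onset  : intensity rising by at least `rise` per frame
    apex   : intensity high (>= apex_level) and roughly constant
    offset : intensity falling by at least `rise` per frame
    neutral: everything else (low, flat activity)
    """
    labels = []
    for i, v in enumerate(intensities):
        prev = intensities[i - 1] if i > 0 else intensities[0]
        delta = v - prev
        if v >= apex_level and abs(delta) < rise:
            labels.append('apex')      # high plateau
        elif delta >= rise:
            labels.append('onset')     # expression building up
        elif delta <= -rise:
            labels.append('offset')    # expression fading out
        else:
            labels.append('neutral')   # flat, low activity
    return labels
```

On a ramp-up / plateau / ramp-down signal, the rules recover the expected onset-apex-offset sequence.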