Enabling computer systems to recognize human facial expressions is a challenging research problem with many applications in behavioral science, medicine, security, and human-machine interaction. Rather than being yet another approach to the automatic detection of prototypic facial expressions of emotion, this work analyzes subtle changes in facial behavior by recognizing the facial action units (AUs, i.e., atomic facial signals) that produce expressions. This paper proposes AU recognition based on multilevel motion history images (MMHIs), which can be seen as an extension of the temporal templates introduced by Bobick and Davis. By recording motion history at multiple time intervals (i.e., multilevel MHIs) instead of recording it once for the entire image sequence, we overcome the problem of self-occlusion that is inherent in the original definition of temporal templates. For automatic classification of an input MMHI-represented face video into 21 AU classes, two approaches are compared: a Sparse Network of Winnows (SNoW) classifier and a standard k-Nearest Neighbour (kNN) classifier. The system was tested on two different databases: the MMI-Face-DB developed by the authors and the Cohn-Kanade face database.
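The multilevel idea can be sketched as follows. A classic Bobick–Davis motion history image (MHI) assigns each pixel a value that is high where motion occurred recently and decays for older motion, so late motion at a pixel overwrites earlier motion there (self-occlusion). Computing one MHI per time interval preserves the earlier motion in its own template. The sketch below is a minimal illustration under simple assumptions: frame differencing with a fixed threshold detects motion, and the sequence is split into equal-length intervals; the function names, the threshold, and the splitting scheme are illustrative, not the authors' exact formulation.

```python
import numpy as np

def motion_history_image(frames, tau, diff_thresh=15):
    """Classic Bobick-Davis temporal template: pixels set to tau where
    motion is detected, otherwise decayed by 1 per frame (floor at 0)."""
    mhi = np.zeros(frames[0].shape, dtype=np.float32)
    for prev, cur in zip(frames, frames[1:]):
        # Crude motion detector: absolute frame difference above a threshold.
        motion = np.abs(cur.astype(np.float32) - prev.astype(np.float32)) > diff_thresh
        mhi = np.where(motion, float(tau), np.maximum(mhi - 1.0, 0.0))
    return mhi

def multilevel_mhi(frames, levels):
    """Multilevel MHIs (sketch): one MHI per time interval instead of a
    single template for the whole sequence, so motion late in the video
    does not overwrite earlier motion at the same pixel."""
    chunks = np.array_split(np.arange(len(frames)), levels)
    return [motion_history_image([frames[i] for i in idx], tau=len(idx))
            for idx in chunks if len(idx) > 1]
```

With a single MHI, a facial region that moves twice (e.g., brow raise then brow lower) keeps only the later motion's history; the per-interval templates keep both.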