Computing Publications


Facial Action Unit Recognition using Temporal Templates

Michel Valstar, Ioannis Patras, Maja Pantic

Conference or Workshop Paper
IEEE Int'l Workshop on Human-Robot Interaction 2004
September, 2004
pp. 253–258
IEEE
Abstract

Automatic recognition of human facial expressions is a challenging problem with many applications in human-computer interaction. Most existing facial expression analyzers succeed only in recognizing a few emotional facial expressions, such as anger or happiness. Instead of being another approach to automatic detection of prototypic facial expressions of emotion, this work attempts to measure a large range of facial behavior by recognizing the facial action units (AUs, i.e., atomic facial signals) that produce expressions. The proposed system performs AU recognition using temporal templates as input data. Temporal templates are 2D images, constructed from image sequences, which show where and when motion in the image sequence has occurred. A two-stage learning machine, combining a k-Nearest-Neighbor (kNN) algorithm and a rule-based system, performs the recognition of 15 AUs occurring alone or in combination in an input face image sequence. Each rule utilized for recognition of a given AU (or a given AU combination) is based on the presence of a specific temporal template in a particular facial region, in which the presence of facial muscle activity characterizes the AU (or AU combination) in question. When trained and tested on the Cohn-Kanade face image database, the proposed method achieved an average recognition rate of 76.2%.
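To make the temporal-template idea concrete, the following is a minimal sketch of how such a template (a motion history image) can be built from a grayscale frame sequence. The thresholding and decay scheme shown here are illustrative assumptions, not the paper's exact implementation: pixels that recently moved are stamped bright, and earlier motion fades linearly, so a single 2D image encodes where and when motion occurred.

```python
import numpy as np

def motion_history_image(frames, threshold=30, duration=10):
    """Build a temporal template from a grayscale frame sequence.

    Recent motion appears brightest; older motion decays linearly
    toward zero, so the result encodes both where and when motion
    occurred. `threshold` and `duration` are illustrative values.
    """
    frames = np.asarray(frames, dtype=np.float32)
    mhi = np.zeros(frames.shape[1:], dtype=np.float32)
    for t in range(1, len(frames)):
        # Pixels whose intensity changed by more than `threshold`
        # between consecutive frames count as motion.
        moved = np.abs(frames[t] - frames[t - 1]) > threshold
        mhi[moved] = duration                          # stamp newest motion
        mhi[~moved] = np.maximum(mhi[~moved] - 1, 0)   # fade older motion
    return mhi / duration  # normalize to [0, 1]

# Toy example: a bright 2x2 block sweeping left to right.
seq = np.zeros((5, 8, 8))
for t in range(5):
    seq[t, 3:5, t:t + 2] = 255
mhi = motion_history_image(seq)
```

In the toy sequence, columns the block visited last end up brighter than columns it left earlier, which is exactly the where-and-when information the recognizer's region-based rules exploit.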

PDF of full publication (844 kilobytes)
BibTeX file for the publication
