In recent years, particle filtering has been the dominant paradigm for tracking facial and body features, recognizing temporal events, and reasoning under uncertainty. A major drawback is that its performance deteriorates drastically as the dimensionality of the state space grows. In this paper, we address this problem for the case where the state space can be partitioned into groups of random variables whose likelihoods can be evaluated independently. We introduce a novel proposal density that is the product of the marginal posteriors of the groups of random variables. The proposed method requires only that the interdependencies between the groups (i.e., the priors) can be evaluated, not that samples can be drawn from them. We adapt our scheme to multiple template-based tracking of facial features and propose a color-based observation model that is invariant to changes in illumination intensity. We show experimentally that our algorithm clearly outperforms both multiple independent template trackers and auxiliary particle filtering that utilizes priors.
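The core idea can be illustrated with a minimal sketch: each group runs an ordinary propagate/weight/resample step against its own observation to approximate its marginal posterior, and the joint proposal is formed by drawing independently from these marginals and importance-reweighting with the (evaluated, never sampled) coupling prior. All names, distributions, and the 1-D two-group setup below are illustrative assumptions, not the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def likelihood(group_state, obs):
    # Hypothetical per-group observation model: Gaussian noise around obs.
    return np.exp(-0.5 * (group_state - obs) ** 2)

def coupling_prior(x1, x2):
    # Interdependency between the two groups; the method only needs to
    # EVALUATE this density, never to sample from it.
    return np.exp(-0.5 * ((x1 - x2) / 2.0) ** 2)

def marginal_step(particles, obs):
    """One filtering step for a single group: propagate through toy
    random-walk dynamics, weight by the group likelihood, resample.
    The returned samples approximate the group's marginal posterior."""
    proposed = particles + rng.normal(0.0, 0.5, size=len(particles))
    w = likelihood(proposed, obs)
    w /= w.sum()
    return rng.choice(proposed, size=len(particles), p=w)

# Two groups of the state (e.g. two independently observed features).
p1 = rng.normal(0.0, 1.0, 200)
p2 = rng.normal(0.0, 1.0, 200)
obs1, obs2 = 1.0, 1.3  # toy observations for each group

m1 = marginal_step(p1, obs1)  # samples from group-1 marginal posterior
m2 = marginal_step(p2, obs2)  # samples from group-2 marginal posterior

# Joint proposal = product of marginals (independent pairing); importance
# weights correct for the coupling prior the proposal ignored.
w = coupling_prior(m1, m2)
w /= w.sum()
idx = rng.choice(len(m1), size=len(m1), p=w)
joint = np.stack([m1[idx], m2[idx]], axis=1)  # weighted joint particles
```

Because each group is filtered in its own low-dimensional space, the number of particles needed does not blow up with the joint dimension; the coupling enters only through cheap density evaluations in the reweighting step.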