Computing Publications


Active search for real-time vision

Andrew Davison

Conference or Workshop Paper
IEEE International Conference on Computer Vision
October, 2005
Volume 1
IEEE Computer Society
ISBN 0-7695-2334-X
ISSN 1550-5499
DOI 10.1109/ICCV.2005.29

In most cases when information is to be extracted from an image, there are priors available on the state of the world and therefore on the detailed measurements which are obtained. While such priors are commonly combined with the actual measurements via Bayes' rule to calculate posterior probability distributions on model parameters, their additional value in guiding efficient image processing has almost always been overlooked. Priors tell us where to look for information in an image, how much computational effort we can expect to expend to extract it, and how much utility to the task in hand it is likely to have. Such considerations are of importance in all practical real-time vision systems, where the processing resources available at each frame in a sequence are strictly limited, and it is exactly in high frame rate real-time systems such as trackers where strong priors are most likely to be available. In this paper, we use Shannon information theory to analyse the fundamental value of measurements using mutual information scores in absolute units of bits, specifically looking at the overwhelmingly common case where uncertainty can be characterised by Gaussian probability distributions. We then compare these measurement values with the computational cost of the image processing required to obtain them. This theory puts on a firm footing for the first time principles of 'active search' for efficient guided image processing, in which candidate features of possibly different types can be compared and selected automatically for measurement.
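The core quantity the abstract describes, a mutual information score in bits between a Gaussian state estimate and a candidate measurement, can be sketched as follows. This is an illustrative reconstruction under standard linear-Gaussian assumptions, not the paper's exact formulation: the function names, the direct-observation model `H`, and the simple bits-per-cost efficiency score are assumptions introduced here.

```python
import numpy as np

def mutual_information_bits(P, H, R):
    """Mutual information I(x; z) in bits for a linear-Gaussian model.

    P : prior state covariance
    H : linear measurement Jacobian
    R : measurement noise covariance

    Uses I(x; z) = 0.5 * log2(det(S) / det(R)),
    where S = H P H^T + R is the innovation covariance.
    """
    S = H @ P @ H.T + R
    _, logdet_S = np.linalg.slogdet(S)  # log-determinants for stability
    _, logdet_R = np.linalg.slogdet(R)
    return 0.5 * (logdet_S - logdet_R) / np.log(2.0)

def efficiency_score(P, H, R, cost):
    """Bits of information gained per unit of image-processing cost
    (a hypothetical ranking criterion for candidate features)."""
    return mutual_information_bits(P, H, R) / cost

# Example: a 2-D state observed directly, with moderate measurement noise.
P = np.diag([4.0, 1.0])   # prior covariance: more uncertainty in dim 0
R = 0.25 * np.eye(2)      # measurement noise covariance
H = np.eye(2)             # direct observation of the state

mi = mutual_information_bits(P, H, R)
```

Candidate features with different measurement models and processing costs could then be ranked by `efficiency_score`, with search effort directed at the highest-scoring candidate first.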
