Objective: The use of patient-specific models for surgical simulation requires photorealistic rendering of 3D structure and surface properties. For bronchoscope simulation, this requires augmenting virtual bronchoscope views generated from 3D tomographic data with patient-specific bronchoscope videos. To facilitate matching of video images to the geometry extracted from 3D tomographic data, this paper presents a new pq-space-based 2D/3D registration method for camera pose estimation in bronchoscope tracking.
Methods: The proposed technique involves the extraction of surface normals for each pixel of the video images using a linear local shape-from-shading algorithm derived from the unique camera/lighting constraints of the endoscopes. The resultant pq-vectors are then matched to those of the 3D model obtained by differentiation of the z-buffer. A similarity measure based on angular deviations of the pq-vectors is used to provide a robust 2D/3D registration framework. Tissue deformation is localized by assessing the temporal variation of the pq-vectors between subsequent frames.
Results: The accuracy of the proposed method was assessed by using an electromagnetic tracker and a specially constructed airway phantom. Preliminary in vivo validation of the proposed method was performed on a matched patient bronchoscope video sequence and 3D CT data. Comparison to existing intensity-based techniques was also made.
Conclusion: The proposed method does not involve explicit feature extraction and is relatively immune to illumination changes. The temporal variation of the pq distribution also permits the identification of localized deformation, which offers an effective way of excluding such areas from the registration process.