Motion Estimation from Image and Inertial Measurements
February 24, 2006
Robust motion estimation from monocular image measurements would be an enabling technology for Mars rover, micro air vehicle, and search and rescue robot navigation, and for modeling complex environments from video.
While algorithms exist for estimating six degree of freedom motion from monocular image measurements, such estimation suffers from inherent problems: sensitivity to incorrect or insufficient image feature tracking; sensitivity to camera modeling and calibration errors; and long-term drift in scenarios with missing observations, i.e., where image features enter and leave the field of view.
The integration of image and inertial measurements is an attractive solution to some of these problems. Among other advantages, adding inertial measurements to image-based motion estimation can reduce the sensitivity to incorrect image feature tracking and to camera modeling errors. In turn, image measurements can be exploited to reduce the drift that results from integrating noisy inertial measurements, and to allow the additional unknowns needed to interpret inertial measurements, such as the gravity direction and magnitude, to be estimated.
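To make the drift-reduction argument concrete, the following sketch (a hypothetical 1-D illustration, not the estimator developed in this work) dead-reckons position from a biased, noisy accelerometer, then repeats the run with sparse absolute position fixes standing in for image measurements, fused through a two-state Kalman filter. All noise levels, the bias, and the fix rate are assumed values chosen only for illustration.

```python
import random

def simulate(seed=0, use_image_fixes=True):
    """Dead-reckon a 1-D trajectory from a biased, noisy accelerometer;
    optionally correct it with sparse position fixes (a stand-in for
    image measurements) via a Kalman filter on x = [position, velocity].
    All parameters below are illustrative assumptions."""
    rng = random.Random(seed)
    dt, steps = 0.01, 2000            # 20 s at 100 Hz
    true_v = 1.0                      # ground truth: constant velocity
    accel_bias, accel_sigma = 0.05, 0.1
    fix_sigma, fix_every = 0.02, 100  # image-like position fix at 1 Hz

    x = [0.0, true_v]                 # state estimate [pos, vel]
    P = [[0.0, 0.0], [0.0, 0.0]]      # state covariance
    q = 0.01                          # process noise from accel errors

    for k in range(1, steps + 1):
        # True acceleration is zero; the sensor reports bias + noise.
        a_meas = accel_bias + rng.gauss(0.0, accel_sigma)
        # Propagate the state with the (erroneous) accelerometer reading.
        x = [x[0] + x[1] * dt + 0.5 * a_meas * dt * dt,
             x[1] + a_meas * dt]
        # Propagate covariance: P <- F P F^T + Q, F = [[1, dt], [0, 1]].
        p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1]
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        P = [[p00 + q * dt, p01], [p10, P[1][1] + q * dt]]

        if use_image_fixes and k % fix_every == 0:
            # Absolute position measurement z, H = [1, 0].
            z = true_v * k * dt + rng.gauss(0.0, fix_sigma)
            s = P[0][0] + fix_sigma ** 2
            k0, k1 = P[0][0] / s, P[1][0] / s      # Kalman gain
            r = z - x[0]                           # innovation
            x = [x[0] + k0 * r, x[1] + k1 * r]
            P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
                 [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]

    return abs(x[0] - true_v * steps * dt)  # final position error (m)

drift_only = simulate(use_image_fixes=False)
fused = simulate(use_image_fixes=True)
```

With these numbers the accelerometer bias alone produces meters of position error after twenty seconds of open-loop integration, while the occasional position fixes keep the fused error bounded; the same mechanism also makes the velocity (and, in the full 6-DOF problem, gravity and bias terms) observable.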