MS Defense: Inertially Constrained Ruled Surfaces for Egomotion Estimation
IRB-4105
In computer vision, camera egomotion is typically recovered with visual odometry techniques, which rely on extracting features from a sequence of images and computing optical flow. This usually requires point-to-point correspondences between consecutive frames, which are often costly to compute, and their varying accuracy strongly affects the quality of the estimated motion. Attempts have been made to bypass the difficulties of the correspondence problem by adopting line features and by fusing additional sensors (event cameras, IMUs), yet many of these approaches still rely heavily on feature detectors. If the camera observes a straight line as it moves, the line's image sweeps out a surface; this surface is a ruled surface, and analyzing its shape yields information about the egomotion. Inspired by event cameras' edge-detection capabilities, this research presents a novel algorithm that reconstructs 3D scenes as ruled surfaces while simultaneously computing the camera egomotion. By constraining the egomotion with inertial measurements from an onboard IMU, the dimensionality of the solution space is greatly reduced.
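The central geometric object of the abstract, a ruled surface swept by a moving line, can be illustrated with a minimal sketch. The standard parametrization is S(t, s) = c(t) + s·d(t), where c(t) is a directrix (here, a hypothetical camera trajectory) and d(t) a ruling direction; the names `c`, `d`, and `ruled_surface` below are illustrative only and are not taken from the thesis itself.

```python
import numpy as np

def ruled_surface(c, d, ts, ss):
    """Sample S(t, s) = c(t) + s * d(t) on a (t, s) grid."""
    pts = np.empty((len(ts), len(ss), 3))
    for i, t in enumerate(ts):
        for j, s in enumerate(ss):
            pts[i, j] = c(t) + s * d(t)
    return pts

# Illustrative example: a helical directrix (stand-in for a camera
# trajectory) with a rotating ruling direction.
c = lambda t: np.array([np.cos(t), np.sin(t), 0.2 * t])
d = lambda t: np.array([-np.sin(t), np.cos(t), 1.0])

ts = np.linspace(0.0, 2 * np.pi, 32)   # trajectory parameter
ss = np.linspace(-1.0, 1.0, 16)        # position along each ruling
S = ruled_surface(c, d, ts, ss)

# Every fixed-t slice of a ruled surface is a straight line (a ruling):
# points are affine in s, so second differences along s vanish.
print(np.allclose(np.diff(S, n=2, axis=1), 0.0))
```

In the setting the abstract describes, each ruling would correspond to the observed 3D line seen from one camera pose, so the shape of the swept surface encodes how the camera moved between observations.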