Abstract:
|
  Monocular simultaneous localization and mapping (SLAM) techniques implicitly estimate camera ego-motion while incrementally building a map of the environment. In monocular SLAM, as the number of features in the system state grows, maintaining real-time operation becomes increasingly difficult. Old features can easily be removed from the state to keep the computational cost per frame stable; however, once features are removed from the map, previously mapped areas can no longer be recognized to minimize the robot's drift. In the context of a real-time virtual sensor that emulates typical sensors, such as a laser for range measurements and encoders for dead reckoning, this limitation should not be a problem. In this paper, a novel framework is proposed for building a consistent map of the environment in real time from the virtual-sensor estimates. At the same time, the proposed approach minimizes the drift of the camera-robot position. Experiments with real data are presented to show the performance of this framework.