A tight coupling of vision-lidar measurements for effective odometry

Published in IV-19, 2019

We propose a new way of fusing visual and lidar measurements for effective odometry. When combining visual measurements with lidar measurements, it is typical either to assign depth information from the lidar measurements to seemingly corresponding visual features or to use the result of visual odometry as the initial motion guess for lidar odometry. Instead, our method 1) builds two maps in different modalities, a lidar voxel map and a visual map of map-points, and 2) uses them together when solving the odometry residuals. The results of the odometry residual optimization are then used to keep the lidar voxel map globally consistent with the operating environment. By using visual-lidar measurements in such a tightly coupled way, our method estimates the odometry more accurately, since it incorporates more geometric constraints into the odometry residuals, and it removes the chance of errors from assigning lidar depths to non-corresponding visual features. Experimental results show that our method outperforms visual odometry (i.e., the front-end of ORB-SLAM2), lidar odometry, and depth-enhanced visual odometry.
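To make the tight-coupling idea concrete, here is a minimal sketch (not the paper's actual implementation) of jointly optimizing a single 6-DoF pose against residuals from both a visual map (reprojection of map-points) and a lidar voxel map (point-to-plane distances to planes fitted in voxels). The camera intrinsics, residual weights, helper names, and synthetic data below are all illustrative assumptions.

```python
# Joint visual + lidar residual optimization for one pose (illustrative sketch).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

fx = fy = 500.0; cx = cy = 320.0                    # assumed pinhole intrinsics

def transform(pose, pts):
    """Apply pose = (rx, ry, rz, tx, ty, tz) to Nx3 points."""
    rot = R.from_rotvec(pose[:3]).as_matrix()
    return pts @ rot.T + pose[3:]

def visual_residuals(pose, map_points, pixels):
    """Reprojection error of visual map-points against tracked pixel locations."""
    p = transform(pose, map_points)
    u = fx * p[:, 0] / p[:, 2] + cx
    v = fy * p[:, 1] / p[:, 2] + cy
    return np.concatenate([u - pixels[:, 0], v - pixels[:, 1]])

def lidar_residuals(pose, scan_points, plane_normals, plane_ds):
    """Point-to-plane distance of lidar scan points to planes in the voxel map."""
    p = transform(pose, scan_points)
    return np.einsum('ij,ij->i', p, plane_normals) + plane_ds

def joint_residuals(pose, vis, lid, w_vis=1.0, w_lid=10.0):
    """Stack both modalities into one residual vector (the tight coupling)."""
    return np.concatenate([w_vis * visual_residuals(pose, *vis),
                           w_lid * lidar_residuals(pose, *lid)])

# --- synthetic example ------------------------------------------------------
rng = np.random.default_rng(0)
true_pose = np.array([0.01, -0.02, 0.005, 0.1, -0.05, 0.2])

# Visual map-points and their observed pixel projections under the true pose.
map_points = rng.uniform([-2, -2, 4], [2, 2, 8], (30, 3))
proj = transform(true_pose, map_points)
pixels = np.c_[fx * proj[:, 0] / proj[:, 2] + cx,
               fy * proj[:, 1] / proj[:, 2] + cy]

# Voxel-map planes and lidar points lying on them, expressed in the sensor frame.
normals = rng.normal(size=(50, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
ds = rng.uniform(-1, 1, 50)
on_plane = rng.normal(size=(50, 3))
on_plane -= (np.einsum('ij,ij->i', on_plane, normals) + ds)[:, None] * normals
rot_true = R.from_rotvec(true_pose[:3]).as_matrix()
scan_points = (on_plane - true_pose[3:]) @ rot_true

result = least_squares(joint_residuals, x0=np.zeros(6),
                       args=((map_points, pixels), (scan_points, normals, ds)))
print("estimated pose:", np.round(result.x, 4))
print("true pose:     ", true_pose)
```

In this toy setup the solver recovers the pose from both constraint types at once; in the paper's formulation the same principle is applied per frame, and the optimized poses are in turn used to keep the lidar voxel map globally consistent.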

Download paper here

Youngwoo Seo and Chih-Chung Chou, A tight-coupling of vision-lidar measurements for an effective odometry, In Proceedings of the 30th IEEE Intelligent Vehicles Symposium (IV-2019), pp. 990-995, 2019.