Enabling Mobile Robots to Know Where They Can and Cannot Drive


Robust Matching for More Accurate Feature Correspondences

For any indirect visual SLAM solution, estimating the relative camera motion between two consecutive images requires finding "correct" correspondences between the features extracted from those images. Given a set of feature correspondences, one can use an n-point algorithm with robust M-estimators to produce the best estimate of the relative camera pose. The accuracy of a motion estimate is heavily dependent on the accuracy of the feature correspondences. This dependency is even more significant when features are extracted from images of scenes with drastic changes in viewpoint and illumination, and in the presence of occlusions. To make feature matching robust to such challenging scenes, we propose a new feature matching method that incrementally chooses five pairs of matched features for a full DoF (degree of freedom) camera motion estimation. In particular, at the first stage, we use our 2-point algorithm to estimate a camera motion, and at the second stage, we use this estimated motion to choose three more matched features. In addition, for more accurate outlier rejection, we use a planar constraint instead of the epipolar constraint. With this set of five matched features, we estimate a full DoF camera motion with scale ambiguity. Through experiments with three real-world datasets, our method demonstrates its effectiveness and robustness by successfully matching features 1) from images of a night market with frequent occlusions and varying illumination, 2) from images of a night market taken by a handheld camera and by Google Street View, and 3) from images of the same location taken during daytime and nighttime. Read the following paper to learn more about this work:
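For context, here is a minimal sketch of the conventional five-point RANSAC pipeline that this work improves upon, written with OpenCV. It is not the paper's two-stage 2-point method with a planar constraint; it uses the standard epipolar-constraint outlier rejection, and the intrinsic matrix `K` is an assumed input.

```python
# A minimal sketch of the standard five-point RANSAC baseline (not the
# paper's two-stage method). Assumes OpenCV and NumPy; K is a
# hypothetical 3x3 camera intrinsic matrix supplied by the caller.
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    # Detect and match ORB features between the two frames.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Five-point algorithm inside RANSAC; here the epipolar constraint
    # rejects outliers (the paper replaces it with a planar constraint).
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)

    # Decompose E into rotation and unit-norm translation; as in the
    # paper, translation is recovered only up to scale.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t, inliers
```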


Tracking Traversable Region Boundary

This work presents a new method for detecting and tracking the boundaries of drivable regions on roads without road markings. As unmarked roads connect residential areas to public roads, the capability to drive autonomously on such roadways is important to truly realize self-driving cars in daily driving scenarios. To detect the left and right boundaries of drivable regions, our method samples the image region in front of the ego-vehicle and uses the appearance information of that region to identify the drivable region's boundaries in input images. Due to variations in image acquisition conditions, the image features necessary for boundary detection may not always be present. When this happens, a boundary detection algorithm working on a frame-by-frame basis would fail to detect the boundaries. To handle these cases effectively, our method tracks the detected boundaries over frames using a Bayes filter. Experiments using real-world videos show promising results. Read the following paper to learn more about this work:
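To illustrate the tracking idea, below is a minimal Kalman filter, one common instance of a Bayes filter, that keeps propagating a boundary's image position through frames where detection fails. The scalar state layout and the noise values are hypothetical, not taken from the paper.

```python
# A minimal sketch of tracking a boundary parameter across frames with
# a Kalman filter. The state is hypothetical: the boundary's lateral
# image position and its per-frame velocity; the detector is assumed to
# return a noisy position, or None when image features are missing.
import numpy as np

class BoundaryTracker:
    def __init__(self, x0):
        self.x = np.array([x0, 0.0])        # [position, velocity]
        self.P = np.eye(2) * 10.0           # state covariance
        self.F = np.array([[1.0, 1.0],      # constant-velocity model
                           [0.0, 1.0]])
        self.Q = np.diag([1.0, 0.5])        # process noise (assumed)
        self.H = np.array([[1.0, 0.0]])     # we observe position only
        self.R = np.array([[4.0]])          # measurement noise (assumed)

    def step(self, z):
        # Predict: carry the boundary forward even when detection fails.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        if z is not None:
            # Update: fuse the detected boundary position.
            y = np.array([z]) - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]
```

Passing `None` on frames where the detector finds no boundary lets the prediction step bridge the gap, which is precisely the failure mode the frame-by-frame approach cannot handle.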


Lane-Marking Detection

In advanced driver assistance systems and self-driving cars, many computer vision applications rely on knowing the location of the vanishing point on the horizon. The horizontal vanishing point's location provides important information about the driving environment, such as the instantaneous driving direction of the roadway, the image regions to sample for drivable-region features, and the search direction of moving objects. To detect the vanishing point, many existing methods work frame-by-frame. Their outputs may look optimal for each individual frame; over a series of frames, however, the detected locations are inconsistent, yielding unreliable information about the roadway structure. This work presents a novel algorithm that uses lines to detect vanishing points in urban scenes and an Extended Kalman Filter (EKF) to track them over frames, smoothing out the trajectory of the horizontal vanishing point. The study demonstrates both the practicality of the detection method and the effectiveness of the tracking method through experiments carried out on thousands of urban scene images. Read the following papers to learn more about this work:
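A minimal sketch of the per-frame detection step is given below: detected line segments vote for a vanishing point via pairwise intersections, and the median intersection serves as that frame's measurement for a tracking filter. All thresholds are hypothetical, and this is only an illustration of the detect-then-track idea, not the paper's detector or its EKF.

```python
# A minimal sketch of per-frame vanishing point detection from lines.
# The per-frame estimate returned here would feed the update step of a
# tracking filter, analogous to the boundary tracker sketched above.
import cv2
import numpy as np
from itertools import combinations

def detect_vp(gray):
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=5)
    if lines is None:
        return None
    pts = []
    # Intersect pairs of supporting lines in homogeneous coordinates
    # (capped at 50 segments to keep the pairwise loop cheap).
    for l1, l2 in combinations(lines[:50, 0], 2):
        h1 = np.cross([l1[0], l1[1], 1.0], [l1[2], l1[3], 1.0])
        h2 = np.cross([l2[0], l2[1], 1.0], [l2[2], l2[3], 1.0])
        p = np.cross(h1, h2)
        if abs(p[2]) > 1e-6:                 # skip near-parallel pairs
            pts.append(p[:2] / p[2])
    if not pts:
        return None
    return np.median(np.array(pts), axis=0)  # robust per-frame estimate
```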


Analyzing Ortho-Images to Generate Lane-Level Maps

Maps are important for both human and robot navigation. Given a route, driving assistance systems consult maps to guide human drivers to their destinations. Similarly, topological maps of a road network provide a robotic vehicle with information about where it can drive and what driving behaviors it should use. By providing the necessary information about the driving environment, maps simplify both manual and autonomous driving. The majority of existing cartographic databases are built using manual surveys and operator interactions, primarily to assist human navigation. Hence, the resolution of existing maps is insufficient for robotics applications, and their coverage fails to extend to places where robotics applications require detailed geometric information. To augment the resolution and coverage of existing maps, this work investigates computer vision algorithms that automatically build lane-level detailed maps of highways and parking lots by analyzing publicly available cartographic resources such as orthoimagery. Read the following papers to learn more about this work:
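As a rough illustration of one low-level step such a pipeline needs, the sketch below extracts bright lane-marking pixels from an ortho-image and maps them to metric coordinates. The ground resolution and origin are hypothetical placeholders, and the actual work involves much more, such as grouping markings and estimating lane topology.

```python
# A minimal sketch of extracting lane-marking pixels from an
# ortho-image and converting them to world coordinates. The
# geotransform parameters (origin_xy, m_per_px) are hypothetical.
import cv2
import numpy as np

def marking_pixels_to_world(ortho_bgr, origin_xy=(0.0, 0.0), m_per_px=0.15):
    gray = cv2.cvtColor(ortho_bgr, cv2.COLOR_BGR2GRAY)
    # Lane paint is brighter than asphalt; top-hat filtering keeps thin
    # bright structures and suppresses large bright regions (roofs, cars).
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)
    _, mask = cv2.threshold(tophat, 40, 255, cv2.THRESH_BINARY)
    ys, xs = np.nonzero(mask)
    # Pixel -> metric coordinates using the image's ground resolution.
    east = origin_xy[0] + xs * m_per_px
    north = origin_xy[1] - ys * m_per_px   # image rows grow downward
    return np.stack([east, north], axis=1)
```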