Use of a Monocular Camera to Analyze Lateral Motions of
a Self-Driving Vehicle
October 23, 2012 2:00 PM
A self-driving car, to be deployed in urban areas, should be capable of keeping itself within road-lane boundaries. This prerequisite of reliable autonomous driving requires that the vehicle be able to detect road-lane boundaries and understand the geometry of the road-lane it happens to be driving on. To detect the boundaries of a road-lane, we can install a rig of LIDARs, or a 3D LIDAR, in a down-looking fashion to scan lane-markings on the road surface. It is impractical, however, to use those sensors solely for detecting lane-markings. Alternatively, we can install a vision sensor for lane-marking detection, as it is cheaper and easier to install. To the best of our knowledge, most vision research on this topic concerns merely detecting lane-markings under various illumination conditions. These systems fall short of yielding the information necessary to make lane-marking detection results useful to other sub-systems. For example, to be useful, lane-marking detection results should produce information about the distance between a road-lane boundary and the body of the car. In this talk, I will present a vision algorithm that analyzes perspective images from a monocular camera to produce information about lateral movements, such as metric offset measurements from road-lane boundaries and detection of lane-crossing maneuvers.
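One common way to recover metric offsets of this kind from a monocular camera is to back-project detected lane-marking pixels onto a flat ground plane using the camera's intrinsics, mounting height, and pitch. The sketch below is illustrative only; the function name, parameter choices, and the flat-road pinhole-camera assumption are ours, not necessarily those of the algorithm presented in the talk.

```python
import math

def ground_point(u, v, fx, fy, cx, cy, cam_height, pitch):
    """Back-project pixel (u, v) onto a flat ground plane.

    Assumes a pinhole camera with focal lengths (fx, fy) and principal
    point (cx, cy), mounted cam_height meters above the road and pitched
    down by `pitch` radians. Returns (lateral_offset, forward_distance)
    in meters, or None if the pixel lies at or above the horizon.
    """
    xc = (u - cx) / fx  # camera-frame ray component, x pointing right
    yc = (v - cy) / fy  # camera-frame ray component, y pointing down
    # Denominator of the ray/ground intersection; <= 0 means the ray
    # never reaches the ground plane ahead of the vehicle.
    denom = yc * math.cos(pitch) + math.sin(pitch)
    if denom <= 0:
        return None
    t = cam_height / denom  # ray length to the ground-plane hit point
    lateral = t * xc
    forward = t * (math.cos(pitch) - yc * math.sin(pitch))
    return lateral, forward
```

Applied to the image column where a lane boundary crosses a given pixel row, the lateral coordinate gives a metric offset between the boundary and the camera (and hence, after a fixed extrinsic correction, the car body); tracking that offset over time is one way to flag a lane-crossing maneuver.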
This page is maintained by Yumi Yi (firstname.lastname@example.org).