- Open Topics
The members of the CV-Lab (Computer Vision) develop algorithms that allow our cars to interpret their environment using video cameras.
Our autonomous cars will perform visual perception and cognition tasks, such as recognizing pedestrians, estimating their own velocity and position, or measuring their relative distance to static and dynamic objects.
Our interest in developing computer vision algorithms is twofold: we want to incorporate human-like perception into our driving models and to substitute cameras for expensive sensors, such as laser scanners and radars.
Camera calibration algorithms estimate the camera parameters (focal length, principal point, and relative position) that the camera uses to project world objects into images. We developed an online method that computes the camera position relative to the ground from the optical flow between consecutive video frames. We assume that the camera is mounted on the car, which moves forward and sideways on a flat area to acquire the images used for calibration.
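The projection these parameters describe can be sketched with the standard pinhole model; the focal length and principal point below are illustrative example values, not our calibrated ones:

```python
def project(point_3d, f, cx, cy):
    """Project a 3-D point in camera coordinates (Z > 0, metres)
    to pixel coordinates using the pinhole model:
    u = f * X / Z + cx, v = f * Y / Z + cy."""
    X, Y, Z = point_3d
    u = f * X / Z + cx  # horizontal pixel coordinate
    v = f * Y / Z + cy  # vertical pixel coordinate
    return u, v

# A point 10 m ahead and 2 m to the right of the camera, with a
# hypothetical focal length of 800 px and the principal point at
# the centre of a 1280x720 image:
u, v = project((2.0, 0.0, 10.0), f=800.0, cx=640.0, cy=360.0)
# u = 800 * 2 / 10 + 640 = 800.0, v = 360.0
```

Calibration inverts this relation: given pixel observations of known world structure (here, the flat ground tracked through optical flow), it recovers f, cx, cy, and the camera pose.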
Smart cameras run optimized computer vision algorithms on programmable devices before the images are delivered to our main computer. They are inspired by the human visual system, in which basic image processing is already performed in the retina. Our architecture is based on FPGA implementations to avoid bottlenecks in computationally intensive low- and high-level methods, such as image scaling or depth and optical flow estimation.
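Image scaling, the simplest of these pre-processing steps, can be sketched in software as nearest-neighbour subsampling; on the FPGA the same data path is realized in hardware, one pixel per clock:

```python
def downscale_nearest(image, factor):
    """Nearest-neighbour downscaling of a greyscale image,
    represented as a list of rows of pixel values: keep every
    `factor`-th row and every `factor`-th column."""
    return [row[::factor] for row in image[::factor]]

# A 4x4 test image whose pixel value encodes its (row, col) position:
img = [[r * 10 + c for c in range(4)] for r in range(4)]
small = downscale_nearest(img, 2)
# small == [[0, 2], [20, 22]]
```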
Our stereo camera system estimates three-dimensional information from two or more views of the environment. The system resembles biological stereopsis, which produces the sensation of depth from the projection of the world onto two retinas. We use the stereo system to estimate the location of objects and their distances in the environment. These measurements can be applied to the analysis of traffic situations or to three-dimensional mapping.
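For a rectified stereo pair, depth follows from triangulation: a point with horizontal disparity d between the two images lies at Z = f * b / d, where f is the focal length in pixels and b the baseline between the cameras. A minimal sketch, with illustrative parameter values:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Triangulate the depth (metres) of a matched point from its
    horizontal disparity between a rectified stereo pair."""
    if disparity_px <= 0:
        # zero disparity means the point is at infinity (or the
        # correspondence is wrong); depth is undefined
        raise ValueError("non-positive disparity")
    return f_px * baseline_m / disparity_px

# Hypothetical rig: 800 px focal length, 0.5 m baseline; a point
# matched with 20 px disparity lies 800 * 0.5 / 20 = 20 m away.
z = depth_from_disparity(f_px=800.0, baseline_m=0.5, disparity_px=20.0)
```

Note the inverse relation: depth resolution degrades quadratically with distance, which is why stereo is most reliable at close to medium range.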
We are developing and applying pattern recognition methods for the detection of static and dynamic objects in the environment. These recognition systems are a necessary component of driver assistance systems and autonomous vehicles.
Lane detection is perhaps the best-known recognition method for static objects in the automotive industry. We extended lane detection systems with statistical models that we use to improve our cars’ GPS self-localization estimates.
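At its core, a lane detector fits a geometric model to the lane-marking pixels it finds; the statistical treatment then weighs how well the fit explains the observations. A minimal sketch of the fitting step, here an ordinary least-squares line (real lane models are typically curves):

```python
def fit_lane_line(points):
    """Least-squares fit of a line y = a*x + b to detected
    lane-marking pixel positions (x, y)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Three collinear marking pixels on the exact line y = 2x + 1:
a, b = fit_lane_line([(0, 1.0), (1, 3.0), (2, 5.0)])
# a = 2.0, b = 1.0
```

The residuals of such a fit give the measurement uncertainty that a localization filter can combine with the GPS estimate.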
We are developing several modules that recognize more complex dynamic objects in image sequences in real time. We trained a car classifier that uses the camera parameters to reliably compute the relative distance of our autonomous vehicle to other cars. We are extending these classification models to detect and track persons and to interpret their intentions in order to avoid harmful situations.
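The way camera parameters turn a detection into a distance can be sketched with similar triangles: a car of known real-world height that spans h pixels in the image lies at roughly f * H / h. The values below (focal length, assumed car height) are illustrative assumptions:

```python
def distance_from_bbox(f_px, real_height_m, bbox_height_px):
    """Estimate the range (metres) to a detected car from the
    pixel height of its bounding box, assuming a known real
    height and a calibrated focal length in pixels."""
    return f_px * real_height_m / bbox_height_px

# Hypothetical values: 800 px focal length, 1.5 m car height,
# 60 px bounding box -> 800 * 1.5 / 60 = 20 m.
d = distance_from_bbox(f_px=800.0, real_height_m=1.5, bbox_height_px=60.0)
```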
Our cars can detect and interpret traffic lights. We are currently extending these classifiers to detect other traffic signs and moving objects. Thus, our cars will be able to interpret complex traffic situations and plan optimal self-driving strategies.
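Once a traffic light has been localized in the image, reading its state reduces to deciding which lamp is lit. A deliberately simplified sketch (not our classifier) that labels a cropped vertical light by the brightest of its three sections:

```python
def classify_traffic_light(column):
    """Classify a vertical traffic light from a 1-D brightness
    profile of its crop: the brightest third (top/middle/bottom)
    indicates red, yellow, or green."""
    third = len(column) // 3
    sums = [sum(column[i * third:(i + 1) * third]) for i in range(3)]
    return ("red", "yellow", "green")[sums.index(max(sums))]

# Middle third is bright -> the yellow lamp is lit:
state = classify_traffic_light([5, 5, 200, 210, 5, 5])
# state == "yellow"
```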