UDACITY SDCE Nanodegree: Term 1 - Project 4: Advanced Lane Finding!

Through this project, an algorithmic pipeline was developed that is capable of tracking the road lane lines and localizing the position of the vehicle with respect to them.

The goals / steps of this project were the following:

  • Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
  • Apply a distortion correction to raw images.
  • Use color transforms, gradients, etc., to create a thresholded binary image (see the thresholding sketch after this list).
  • Apply a perspective transform to rectify binary image (“birds-eye view”).
  • Detect lane pixels and fit to find the lane boundary.
  • Determine the curvature of the lane and vehicle position with respect to center.
  • Warp the detected lane boundaries back onto the original image.
  • Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.
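
As a reference for the thresholding step, below is a minimal sketch that combines an HLS S-channel color threshold with a Sobel-x gradient threshold. The threshold ranges and the choice of channels are illustrative assumptions, not necessarily the exact values used in this project.

    import cv2
    import numpy as np

    def threshold_binary(img, s_thresh=(170, 255), sx_thresh=(20, 100)):
        """Combine an HLS S-channel threshold with a Sobel-x gradient threshold."""
        # The S channel responds well to strongly colored lane paint.
        hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
        s_channel = hls[:, :, 2]

        # A gradient in the x direction picks up near-vertical edges such as lane lines.
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        abs_sobelx = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 1, 0))
        scaled = np.uint8(255 * abs_sobelx / np.max(abs_sobelx))

        # A pixel is kept if either the gradient or the color threshold fires.
        binary = np.zeros_like(s_channel)
        binary[((scaled >= sx_thresh[0]) & (scaled <= sx_thresh[1])) |
               ((s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1]))] = 1
        return binary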

Camera Calibration

Calibration Result on Chessboard Images
Calibration Result on Road Images

The objpoints and imgpoints arrays are used in the cv2.calibrateCamera function to extract the camera matrix (mtx) and distortion coefficients (dist). The calibration parameters are saved in a pickle file and are loaded when the lane-tracking pipeline Python file is executed. Applying distortion correction to the test images using the cv2.undistort() function produces the results shown above.
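
For reference, a minimal sketch of this calibration flow is shown below. The chessboard size (9x6 inner corners), the glob pattern for the calibration images, and the pickle file name are assumptions for illustration.

    import glob
    import pickle
    import cv2
    import numpy as np

    # One set of 3D object points per chessboard image (9x6 inner corners assumed).
    objp = np.zeros((9 * 6, 3), np.float32)
    objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

    objpoints, imgpoints = [], []
    for fname in glob.glob('camera_cal/calibration*.jpg'):
        img = cv2.imread(fname)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, (9, 6), None)
        if found:
            objpoints.append(objp)
            imgpoints.append(corners)

    # Extract the camera matrix (mtx) and distortion coefficients (dist).
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
        objpoints, imgpoints, gray.shape[::-1], None, None)

    # Save the calibration so the lane-tracking pipeline can load it later.
    with open('calibration.p', 'wb') as f:
        pickle.dump({'mtx': mtx, 'dist': dist}, f)

    undistorted = cv2.undistort(img, mtx, dist, None, mtx)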

Detection Pipeline

Algorithm for lane histogram detection

The process of identifying the lane-line pixels is implemented through two functions; the first, lane_histogram_detection(), employs the sliding-window approach that exhaustively searches for line pixels, while the second uses a faster approach that assumes the lines are located in a position similar to the previous frame and skips the sliding windows. The sliding-window process runs every 30 frames, or whenever one of the two lines is not detected in the current frame by the faster approach. After the line pixels are detected, polynomials are fitted to the pixels corresponding to the left and right lines. Overall, the pipeline stages are visualized below.

Visualization of the pipeline stages
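
A condensed sketch of the sliding-window search described above follows. The window count, search margin, and minimum-pixel threshold are illustrative defaults, not necessarily the values used in the project.

    import numpy as np

    def sliding_window_fit(binary_warped, nwindows=9, margin=100, minpix=50):
        """Histogram-based sliding-window search; returns left/right polynomial fits."""
        # Base positions: histogram peaks of the bottom half of the warped image.
        histogram = np.sum(binary_warped[binary_warped.shape[0] // 2:, :], axis=0)
        midpoint = histogram.shape[0] // 2
        leftx_current = np.argmax(histogram[:midpoint])
        rightx_current = np.argmax(histogram[midpoint:]) + midpoint

        nonzeroy, nonzerox = binary_warped.nonzero()
        window_height = binary_warped.shape[0] // nwindows
        left_inds, right_inds = [], []

        for w in range(nwindows):
            y_low = binary_warped.shape[0] - (w + 1) * window_height
            y_high = binary_warped.shape[0] - w * window_height
            good_left = ((nonzeroy >= y_low) & (nonzeroy < y_high) &
                         (nonzerox >= leftx_current - margin) &
                         (nonzerox < leftx_current + margin)).nonzero()[0]
            good_right = ((nonzeroy >= y_low) & (nonzeroy < y_high) &
                          (nonzerox >= rightx_current - margin) &
                          (nonzerox < rightx_current + margin)).nonzero()[0]
            left_inds.append(good_left)
            right_inds.append(good_right)
            # Recenter the next window on the mean x of the pixels just found.
            if len(good_left) > minpix:
                leftx_current = int(np.mean(nonzerox[good_left]))
            if len(good_right) > minpix:
                rightx_current = int(np.mean(nonzerox[good_right]))

        left_inds = np.concatenate(left_inds)
        right_inds = np.concatenate(right_inds)
        # Fit x as a second-order polynomial in y for each line.
        left_fit = np.polyfit(nonzeroy[left_inds], nonzerox[left_inds], 2)
        right_fit = np.polyfit(nonzeroy[right_inds], nonzerox[right_inds], 2)
        return left_fit, right_fit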

Radius of Curvature and Vehicle Position

The radius of curvature is computed for both lines and the average curvature is reported. The radius values are converted to real-world units by rescaling the polynomial coefficients with the term ym_per_pix, which was chosen to be 30/image_height (corresponding to 30 meters per 720 pixels).
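
A minimal sketch of this computation is shown below, assuming the pixel-space fit x = A*y^2 + B*y + C and the standard curvature formula; the lateral scale xm_per_pix reuses the 3.7/560 value described in the next paragraph.

    import numpy as np

    ym_per_pix = 30 / 720    # meters per pixel in the y direction
    xm_per_pix = 3.7 / 560   # meters per pixel in the x direction

    def radius_of_curvature(fit_px, y_eval_px):
        """Curvature in meters of x = A*y^2 + B*y + C (pixel-space fit) at y_eval_px."""
        # Rescale the pixel-space coefficients into meter space.
        A = fit_px[0] * xm_per_pix / (ym_per_pix ** 2)
        B = fit_px[1] * xm_per_pix / ym_per_pix
        y = y_eval_px * ym_per_pix
        return (1 + (2 * A * y + B) ** 2) ** 1.5 / abs(2 * A)

With a 720-pixel-high image, the reported value would be the average of radius_of_curvature(left_fit, 719) and radius_of_curvature(right_fit, 719), evaluated at the bottom of the image where the vehicle is located.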

The position of the vehicle with respect to the lane center is computed in the function find_distance_from_center(), which receives as input the two line objects and the image dimensions. First, the midpoint between the two lines is found as the average position of the bottom pixels of each line, and the image midpoint is subtracted from it. The difference is multiplied by the term xm_per_pix, which was chosen to be 3.7/px_distance_between_lines (corresponding to 3.7 meters per 560 pixels). If the resulting value is negative, the vehicle is located to the right of the lane center; if it is positive, the vehicle is located to the left.
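
A sketch of this offset computation follows; the function signature below is a simplified stand-in for the project's find_distance_from_center(), which operates on line objects rather than raw fits.

    import numpy as np

    def distance_from_center(left_fit, right_fit, image_width, image_height):
        """Signed offset (m) from the lane center; negative means right of center."""
        y = image_height - 1  # evaluate each fit at the bottom of the image
        left_x = np.polyval(left_fit, y)
        right_x = np.polyval(right_fit, y)
        lane_mid = (left_x + right_x) / 2.0
        xm_per_pix = 3.7 / 560  # 3.7 m lane width over the pixel distance between lines
        return (lane_mid - image_width / 2.0) * xm_per_pix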

Video demonstrating the proposed processing pipeline

Discussion

Overall, the pipeline seems to work well on the project video and test images. However, the edge detection process used for the lane-line detection can be easily confused by differences in the color of the road and markings (as in the challenge video). This is something that requires more consideration. Perhaps using brightness normalization techniques and a mask that also discards the middle part of the road can help in this direction.
