Counting from the Air: Vehicle Counting per Lane from an On-Board UAV for Real-Time Traffic Monitoring

Christos Kyrkou
Dec 2, 2019

“The flexibility and cost efficiency of traffic monitoring using Unmanned Aerial Vehicles (UAVs) has made such a proposition an attractive topic of research.”

Introduction

Unmanned Aerial Vehicles (drones) are emerging as a promising technology for both environmental and infrastructure monitoring, with broad use in a plethora of applications. Many such applications require the use of computer vision algorithms in order to analyse the information captured from an on-board camera. In particular, Road Traffic Monitoring (RTM) systems constitute a domain where the use of UAVs is receiving significant interest.

In traffic monitoring applications, UAVs can perform vehicle monitoring without the need for sensors embedded in cars, and can be deployed in an area of interest at no additional infrastructure cost. In this post, motivated by the work in [1], we are interested in counting the vehicles that pass through each lane. We are going to use computer vision for this purpose, since UAVs usually come equipped with cameras 📷, which makes them an affordable and attractive sensing option for our application.

We are going to make a series of assumptions to simplify the problem, as it can be rather complicated to develop a one-size-fits-all solution. In particular, we assume that the camera looks straight down at the road and that the UAV is positioned above the road segment we wish to monitor. Our solution will be based on more traditional computer vision techniques. Sorry, no deep learning yet 😔, although it can easily be incorporated into the pipeline 😅. Below are the main steps of the algorithm.

The Algorithm in a nutshell

The algorithm has four main components: background modeling, road segment extraction, moving vehicle detection, and counting. It is implemented using OpenCV. Each part is further elaborated below:

— A running average of the incoming frames is used to construct a background image, in order to separate foreground objects from the background. We assume that the majority of foreground objects will be vehicles, since we are monitoring a road segment.

(Left) Current Frame (Right) Running Average Background
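The background model can be maintained with OpenCV's `accumulateWeighted`. Below is a minimal sketch of this step; the input filename and the learning rate are illustrative assumptions, not values from the original project.

```python
import cv2
import numpy as np

# Minimal sketch of the running-average background model.
# "uav_traffic.mp4" and the 0.01 learning rate are illustrative assumptions.
cap = cv2.VideoCapture("uav_traffic.mp4")
ok, frame = cap.read()
background = np.float32(frame)  # the accumulator must be floating point

while ok:
    # A small learning rate lets slow illumination changes be absorbed
    # into the background, while fast-moving vehicles are not.
    cv2.accumulateWeighted(frame, background, 0.01)
    bg_image = cv2.convertScaleAbs(background)  # 8-bit background for this frame
    ok, frame = cap.read()

cap.release()
```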

— The road segment is extracted in the HSV colorspace. In particular, each input image is thresholded with particular ranges of Hue, Saturation, and Value to retain only the road pixels; anything outside the road segment, as well as regions of small size, is discarded. The Hough line transform is then used to find the upper and lower boundaries of the road segment, and anything outside these bounds is discarded.

Extracted Road Segment
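A rough sketch of this step is shown below. The HSV bounds and the area threshold are assumptions that depend on the asphalt color and lighting of the actual footage, and the Hough parameters would likewise need tuning.

```python
import cv2
import numpy as np

def extract_road(frame):
    # Threshold in HSV: gray asphalt has low saturation and mid-range value.
    # These bounds are illustrative assumptions.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 0, 60), (180, 60, 200))

    # Discard small connected components that are not part of the road.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    road = np.zeros_like(mask)
    for i in range(1, num):
        if stats[i, cv2.CC_STAT_AREA] > 5000:  # area cutoff is an assumption
            road[labels == i] = 255

    # Hough line transform to locate the upper and lower road boundaries.
    edges = cv2.Canny(road, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                            minLineLength=200, maxLineGap=20)
    return road, lines
```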

— By subtracting the background from the current image, the moving blobs are extracted. Morphological operations are then applied to reduce noise and keep only the larger blobs that correspond to cars. Afterwards, a bounding box is generated that encloses each blob and facilitates counting and direction estimation.

A vehicle is characterized by a leading and a trailing edge. Both need to cross the center line for the vehicle to be counted.
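The sketch below illustrates the detection step together with the counting rule from the figure above, assuming a horizontal virtual center line; the difference threshold, kernel size, and blob-area cutoff are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_vehicles(frame, bg_image):
    # Foreground = |current frame - background|, thresholded to a binary mask.
    diff = cv2.absdiff(frame, bg_image)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, fg = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)

    # Opening removes speckle noise; closing fills holes inside the blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)
    fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)

    # Keep only car-sized blobs and enclose each one in a bounding box.
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 1500]

def fully_crossed(box, line_y, moving_down):
    # Count a vehicle only once both its leading and trailing edges have
    # passed the virtual center line at y = line_y.
    x, y, w, h = box
    return y > line_y if moving_down else (y + h) < line_y
```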

— To determine the lane to which a vehicle belongs, we use the average direction of the optical flow vectors within each extracted blob's bounding box.

The average direction of optical flow vectors is used to determine the direction of vehicles.
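One way to realize this step is with dense Farneback optical flow, averaged inside each blob's bounding box. The sketch below assumes a top-down view where opposite lanes produce roughly opposite flow directions; the Farneback parameters are standard illustrative values.

```python
import cv2
import numpy as np

def blob_direction(prev_gray, gray, box):
    # Dense optical flow between consecutive grayscale frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    # Average the flow vectors inside the blob's bounding box.
    x, y, w, h = box
    fx = np.mean(flow[y:y + h, x:x + w, 0])
    fy = np.mean(flow[y:y + h, x:x + w, 1])
    # The sign of the mean flow assigns the blob to a travel direction,
    # and hence to a lane, on a top-down road view.
    return np.degrees(np.arctan2(fy, fx))
```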

Concluding Remarks

Monitoring the state of road networks is fundamental for efficient traffic management. A simple yet effective algorithm for vehicle counting per lane in UAV imagery has been outlined, extending the applicability of UAVs to near real-time applications such as traffic monitoring. Computer vision has a key role to play in the future of intelligent transportation systems and traffic monitoring, so exploring even better solutions is of critical importance. In particular, deep learning techniques can be used to enhance various aspects of the pipeline [2,3]. In addition, there are many other problems to consider, such as the deployment of such aerial sensors [1].

A video demonstrating the whole solution

Partial Code 💻 used in this project can be found at:

https://github.com/ckyrkou/CNN_Car_Detector

Partially based on:

[1] Christos Kyrkou, Stelios Timotheou, Panayiotis Kolios, Theocharis Theocharides, Christos Panayiotou, “Optimized vision-directed deployment of UAVs for rapid traffic monitoring”, 2018 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, pp. 1–6, January 2018.

Some more advanced material covering the visual detection process using deep learning can be found in related works:

[2] Christos Kyrkou, George Plastiras, Stylianos Venieris, Theocharis Theocharides, Christos-Savvas Bouganis, “DroNet: Efficient convolutional neural network detector for real-time UAV applications,” 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE), Dresden, Germany, pp. 967–972, March 2018.

[3] Alexandros Kouris, Christos Kyrkou, Christos-Savvas Bouganis, “Informed Region Selection for Efficient UAV-based Object Detectors: Altitude-aware Vehicle Detection with CyCAR Dataset”, 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, November 2019.


Christos Kyrkou

Research Lecturer at the KIOS Research & Innovation Center of Excellence at the University of Cyprus. Research in the areas of Computer Vision and Deep Learning.