

In this article, you’ll learn how to build vehicle detection with OpenCV and Python. The article will guide you through using the YOLOv3 model in conjunction with opencv-python. OpenCV is a real-time computer vision library for Python.
Let’s start with the basics.
Let’s have a look at intriguing object detection use cases in real-world applications.
Nowadays, video object detection is being used in a variety of sectors. Surveillance, sports broadcasting, and robot navigation are among the applications.
The good news is that the options are limitless regarding future use cases for video object detection and tracking. Here are some of the most fascinating applications you can count on:
Counting the crowd
Detection and recognition of vehicle license plates (as discussed in this article)
Sports ball tracking
Robotics
Traffic management

YOLO is an acronym that stands for You Only Look Once. It is an object recognition algorithm that operates in real time. It is capable of classifying and localizing several objects in a single frame.
Due to its smaller network topology, YOLO is an extremely quick and accurate algorithm. It mostly employs the following strategies:
It essentially splits an image into an NxN grid.
Each grid cell predicts bounding boxes along with the probability that the cell contains each class, and the class with the highest probability is chosen.
IoU (Intersection over Union) is a metric that measures the overlap between the predicted and actual bounding boxes. A non-max suppression (NMS) step eliminates nearby, overlapping bounding boxes by computing the IoU of each box against the box with the highest class probability.
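To make the IoU idea concrete, here is a minimal sketch of the computation for two boxes in the (x, y, width, height) format used later in this project (the function name and box format here are illustrative, not taken from the project code):

```python
def iou(box_a, box_b):
    """Intersection over Union for two boxes given as (x, y, w, h)."""
    # Convert (x, y, w, h) to corner coordinates.
    ax1, ay1, ax2, ay2 = box_a[0], box_a[1], box_a[0] + box_a[2], box_a[1] + box_a[3]
    bx1, by1, bx2, by2 = box_b[0], box_b[1], box_b[0] + box_b[2], box_b[1] + box_b[3]
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # Union = sum of areas minus the intersection.
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union else 0.0
```

NMS then discards any box whose IoU with a higher-confidence box exceeds the suppression threshold.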
The YOLO network comprises 24 convolutional layers followed by two fully connected layers. The convolutional layers are pretrained on the ImageNet classification task at half the resolution (224 x 224 input image) before being trained for detection.
The layers reduce the feature set from previous layers by alternating 1x1 reduction layers and 3x3 convolutional layers. The final four layers are added to train the network for object detection.
The last layer predicts the object class probabilities and bounding box coordinates. To interact with YOLO directly, we’ll use OpenCV’s DNN module. DNN is an abbreviation for Deep Neural Network, and OpenCV includes functions for running DNN models.
In this project, we will detect and classify vehicles on the road as HMV (Heavy Motor Vehicle) or LMV (Light Motor Vehicle), and count the number of vehicles on the road. The data will be saved so the different vehicles on the road can be analyzed.
To complete this project, we will develop two programs. The first is a tracker that uses OpenCV to keep track of every identified vehicle on the road, and the second is the main vehicle detection program.
Prerequisites for the OpenCV vehicle detection and classification project:
Python, version 3.x (we used Python 3.8.8 in this project)
OpenCV, version 4.4.0
DNN models should be run on a GPU whenever possible.
You should download the OpenCV Vehicle detection and classification source code if you haven’t already. Now, let’s begin!
The tracker uses the Euclidean distance to keep track of an object. It computes the distance between an object’s center point in the current frame and in the previous frame, and if the distance is smaller than the threshold distance, it concludes that the object in the previous frame is the same as the one in the current frame.
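The Euclidean-distance tracker described above might be sketched like this. It is a minimal version under the stated assumptions (the class and method names mirror the `EuclideanDistTracker` object used later; the distance threshold value is illustrative, and a production tracker would also drop IDs that disappear):

```python
import math

class EuclideanDistTracker:
    """Minimal centroid tracker: reuse an object's ID when its new
    center lies within dist_threshold of a previously seen center."""

    def __init__(self, dist_threshold=25):
        self.center_points = {}   # object id -> (cx, cy)
        self.id_count = 0
        self.dist_threshold = dist_threshold

    def update(self, rects):
        """rects: list of (x, y, w, h). Returns [x, y, w, h, id] per box."""
        objects_bbs_ids = []
        for x, y, w, h in rects:
            cx, cy = x + w // 2, y + h // 2
            # Try to match this detection to an already-tracked object.
            matched_id = None
            for obj_id, (px, py) in self.center_points.items():
                if math.hypot(cx - px, cy - py) < self.dist_threshold:
                    matched_id = obj_id
                    break
            if matched_id is None:
                # No nearby previous center: this is a new object.
                matched_id = self.id_count
                self.id_count += 1
            self.center_points[matched_id] = (cx, cy)
            objects_bbs_ids.append([x, y, w, h, matched_id])
        return objects_bbs_ids
```

A box that moves only a few pixels between frames keeps its ID, while a detection far from every known center is assigned a fresh one.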
Import the relevant packages and start the network.
Retrieve frames from a video file.
Run the detection after pre-processing the frame.
Perform post-processing on the output data.
Count and track all vehicles on the road.
Save the completed data as a CSV file.

First, you import all of the project’s required packages.
confThreshold = 0.1
nmsThreshold = 0.2
Then, from the tracker program, you initialize the EuclideanDistTracker() object and set the object to “tracker.”
confThreshold and nmsThreshold are the detection and suppression minimal confidence score thresholds, respectively.
# Middle cross line position
middle_line_position = 225
up_line_position = middle_line_position - 15
down_line_position = middle_line_position + 15
You need to modify the middle_line_position according to your need.
After the video file is opened with a video capture object, cap.read() reads each frame from the capture object.
You halve the frame size using cv2.resize().
The crossing lines are drawn in the frame using the cv2.line() function.
Finally, you display the generated image using the cv2.imshow() function.
This YOLO version accepts 320x320 images as input. The network’s input is a blob object. The function dnn.blobFromImage() accepts an image as input and returns a blob object that has been resized and normalized.
The blob is fed into the network with net.setInput(), and net.forward() runs inference and produces the result.
Finally, you invoke the custom postProcess() function to post-process the output.

The forward pass produces three output layers. Each detection in an output is an 85-length vector:
4 bounding box values (centerX, centerY, width, height)
1 box confidence (objectness) score
80 class confidence scores
Let’s start by defining the post-processing function.
After receiving all detections, you use the tracker object to keep track of those objects. The tracker.update() function keeps track of all identified objects and updates their positions.
The custom function count_vehicle counts the vehicles that pass along the road.
Create two temporary lists to store the IDs of vehicles entering the crossing lines.
Two lists store the vehicle count information: up_list and down_list hold the counts of the four vehicle classes moving along the up and down routes.
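The counting logic might be sketched as below. A vehicle is registered in a temporary list while it is inside the band between the middle line and an outer line, and counted once it crosses the far line; this is a minimal sketch under stated assumptions (the class order in the count lists and the exact band logic are illustrative):

```python
# Per-class counts for the four detected classes
# (assumed order: car, motorbike, bus, truck).
up_list = [0, 0, 0, 0]
down_list = [0, 0, 0, 0]
temp_up_list = []    # IDs currently between the up line and the middle line
temp_down_list = []  # IDs currently between the middle line and the down line

middle_line_position = 225
up_line_position = middle_line_position - 15
down_line_position = middle_line_position + 15

def find_center(x, y, w, h):
    """Return the center point of a bounding box."""
    return x + w // 2, y + h // 2

def count_vehicle(box_id, class_index):
    """Count a vehicle once its center crosses out of the middle band."""
    x, y, w, h, obj_id = box_id
    _, cy = find_center(x, y, w, h)
    if up_line_position < cy < middle_line_position:
        # Entered the upper band: remember the ID (candidate for "down").
        if obj_id not in temp_up_list:
            temp_up_list.append(obj_id)
    elif middle_line_position < cy < down_line_position:
        # Entered the lower band: remember the ID (candidate for "up").
        if obj_id not in temp_down_list:
            temp_down_list.append(obj_id)
    elif cy < up_line_position:
        # Crossed above the up line after coming from below: count as "up".
        if obj_id in temp_down_list:
            temp_down_list.remove(obj_id)
            up_list[class_index] += 1
    elif cy > down_line_position:
        # Crossed below the down line after coming from above: count as "down".
        if obj_id in temp_up_list:
            temp_up_list.remove(obj_id)
            down_list[class_index] += 1
```

Removing the ID from the temporary list at the moment of counting ensures each vehicle is counted only once per crossing.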
The center point of a rectangle box is returned through the find_center function.
You open a new file, data.csv, in write mode using the open() function.
Then, you write three rows: the first with the class names and directions, the second with the up-route counts, and the third with the down-route counts.
The writerow() function saves a row of data to a file.
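The CSV step might be sketched as follows with Python’s standard csv module. The class labels and count values here are placeholder assumptions; in the project they come from up_list and down_list after the video has been processed:

```python
import csv

# Assumed class labels and example counts for illustration.
class_names = ['car', 'motorbike', 'bus', 'truck']
up_list = [4, 1, 2, 3]
down_list = [5, 0, 1, 2]

with open('data.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    # Header row: a direction column plus one column per class.
    writer.writerow(['Direction'] + class_names)
    writer.writerow(['Up'] + up_list)
    writer.writerow(['Down'] + down_list)
```

Passing newline='' to open() is the documented way to avoid blank lines in CSV files on Windows.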
Congratulations! Using OpenCV, you have created a vehicle detection and classification system in this project.
You have used the YOLOv3 algorithm with OpenCV to recognize and classify objects. In addition, you learned about deep neural networks, file handling, and computer vision algorithms.

https://youtu.be/DXZ8qW0-AHg


