Toward End-to-End Car License Plate Detection and Recognition With Deep Neural Networks

ABSTRACT:

In this paper, we tackle the problem of car license plate detection and recognition in natural scene images. We propose a unified deep neural network that can localize license plates and recognize their letters simultaneously in a single forward pass, and the whole network can be trained end-to-end. In contrast to existing approaches, which treat license plate detection and recognition as two separate tasks and solve them sequentially, our method jointly solves both tasks with a single network. This not only avoids the accumulation of intermediate errors but also accelerates processing. For performance evaluation, four data sets containing images captured from various scenes under different conditions are tested. Extensive experiments show the effectiveness and efficiency of the proposed approach.
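The joint training idea above can be sketched as a single objective that sums a box-regression term and a character-recognition term, so one backward pass updates both heads of the shared network. This is a minimal illustrative sketch, not the paper's actual loss; the smooth-L1/cross-entropy combination and the weight `w` are assumptions.

```python
import math

def smooth_l1(pred, target):
    """Smooth L1 loss, commonly used for bounding-box regression."""
    d = abs(pred - target)
    return 0.5 * d * d if d < 1.0 else d - 0.5

def cross_entropy(probs, label):
    """Negative log-likelihood of the correct character class."""
    return -math.log(probs[label])

def joint_loss(box_pred, box_gt, char_probs, char_labels, w=1.0):
    """Single objective for the shared network: detection + recognition.
    Minimizing this trains both tasks end-to-end at once."""
    det = sum(smooth_l1(p, t) for p, t in zip(box_pred, box_gt))
    rec = sum(cross_entropy(p, l) for p, l in zip(char_probs, char_labels))
    return det + w * rec
```

Because both terms share the backbone features, no intermediate cropped-plate result is ever committed to, which is how the error accumulation of a two-stage pipeline is avoided.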

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core
  • Hard Disk : 120 GB
  • Monitor : 15" LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating System : Windows 7
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a / 2018

REFERENCE:

Hui Li, Peng Wang, and Chunhua Shen, "Toward End-to-End Car License Plate Detection and Recognition With Deep Neural Networks", IEEE Transactions on Intelligent Transportation Systems, 2019.

Hybrid Cascade Structure for License Plate Detection in Large Visual Surveillance Scenes

ABSTRACT:

Though license plate detection has been successfully applied in some commercial products, the detection of small and vague license plates in real applications is still an open problem. In this paper, we propose a novel hybrid cascade structure for quickly detecting small and vague license plates in large and complex visual surveillance scenes. For rapid license plate candidate extraction, we propose two cascade detectors, the Cascaded Color Space Transformation of Pixel detector and the Cascaded Contrast-Color Haar-like detector, which perform coarse-to-fine detection at the front and in the middle of the hybrid cascade. At the end of the hybrid cascade, we propose a cascaded convolutional network structure (Cascaded ConvNet), comprising two detection-ConvNets and a calibration-ConvNet, which is designed for fine detection. Through experiments on different evaluation data sets with many small and vague plates, we show that the proposed framework can rapidly detect license plates of different resolutions and sizes in large and complex visual surveillance scenes.
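The coarse-to-fine flow described above can be sketched as an ordered list of stage predicates: cheap stages at the front reject most candidate windows so that the expensive ConvNet stage at the end only sees a few. The stage functions and thresholds below are illustrative stand-ins, not the paper's actual detectors.

```python
def run_cascade(candidates, stages):
    """Pass candidates through ordered stages; each stage cheaply
    rejects non-plates so later, costlier stages see fewer windows."""
    for stage in stages:
        candidates = [c for c in candidates if stage(c)]
        if not candidates:
            break  # nothing survived; stop early
    return candidates

# Illustrative stand-ins for the three stages described above.
color_stage   = lambda c: c["has_plate_color"]   # pixel color-space transform
haar_stage    = lambda c: c["contrast"] > 0.3    # contrast-color Haar-like
convnet_stage = lambda c: c["cnn_score"] > 0.5   # cascaded ConvNet (fine)
```

The key property of any cascade is that overall cost is dominated by the first stage, since each later stage runs on an ever-smaller survivor set.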

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core
  • Hard Disk : 120 GB
  • Monitor : 15" LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating System : Windows 7
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a / 2018

REFERENCE:

Chunsheng Liu and Faliang Chang, "Hybrid Cascade Structure for License Plate Detection in Large Visual Surveillance Scenes", IEEE Transactions on Intelligent Transportation Systems, 2019.

Efficient Scale-Adaptive License Plate Detection System

ABSTRACT:

License plate detection is a common problem in traffic surveillance applications. Although some solutions have been proposed in the literature, their success is usually restricted to very specific scenarios, and their performance drops in more demanding conditions. One of the main challenges for this kind of system is the varying scale of license plates, which depends on the distance between the vehicles and the camera. Traditionally, systems have handled this issue by sequentially running single-scale detectors over a pyramid of images. This approach, although it simplifies the training process, requires as many evaluations as considered scales, leading to running times that grow linearly with the number of scales. In this paper, we propose a scale-adaptive deformable part-based model which, based on a well-known boosting algorithm, automatically models scale during the training phase by selecting the most prominent features at each scale, and notably reduces the test detection time by avoiding evaluation at different scales. In addition, our method incorporates an empirically constrained deformation model that adapts to the different levels of deformation shown by distinct local features within license plates. As shown in the experimental section, the proposed detector is robust, scale- and perspective-independent, and can work in quite diverse scenarios. Experiments on two datasets show that the proposed method achieves significantly better performance than other state-of-the-art methods.
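The scale-selection mechanism above rests on a standard boosting step: at each round, the weak learner (a feature tagged with the scale it responds to) with the lowest weighted error is chosen, so scale-specific features are picked up automatically during training. The sketch below shows that selection step only; it is a generic AdaBoost-style round, not the paper's exact boosting variant.

```python
def select_feature(features, weights, labels):
    """One boosting round: pick the (feature, scale) stump with the
    lowest weighted classification error on the training set.
    Each feature is a tuple (name, scale, per-sample predictions)."""
    best, best_err = None, float("inf")
    for feat in features:
        # Weighted error: sum the weights of misclassified samples.
        err = sum(w for w, p, y in zip(weights, feat[2], labels) if p != y)
        if err < best_err:
            best, best_err = feat, err
    return best[0], best[1], best_err
```

Because the chosen stumps carry their scale with them, test-time detection needs a single pass instead of one pass per pyramid level, which is where the linear-in-scales cost disappears.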

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core
  • Hard Disk : 120 GB
  • Monitor : 15" LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating System : Windows 7
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a / 2018

REFERENCE:

Miguel Molina-Moreno, Iván González-Díaz, and Fernando Díaz-de-María, "Efficient Scale-Adaptive License Plate Detection System", IEEE Transactions on Intelligent Transportation Systems, 2019.

Traffic Light Recognition With High Dynamic Range Imaging and Deep Learning

ABSTRACT:

Traffic light recognition (TLR) detects the traffic light in an image and then estimates the state of the light signal. TLR is important for autonomous vehicles because running a red light could cause a deadly car accident. For a practical TLR system, computation time, varying illumination conditions, and false positives are three key challenges. In this paper, a novel real-time method is proposed to recognize traffic lights with high dynamic range imaging and deep learning. In our approach, traffic light candidates are robustly detected in low-exposure (dark) frames and accurately classified by a deep neural network in consecutive high-exposure (bright) frames. This dual-channel mechanism makes full use of the undistorted color and shape information in dark frames as well as the rich context in bright frames. In the dark channel, a non-parametric multicolor saliency model is proposed to simultaneously extract lights of different colors. A multiclass convolutional neural network (CNN) classifier is then adopted to reduce the number of false positives in the bright channel. The performance is further boosted by incorporating temporal trajectory tracking. To speed up the algorithm, a prior detection mask is generated to limit the potential search regions. Intensive experiments on a large dual-channel dataset show that the proposed approach outperforms a state-of-the-art real-time deep learning object detector, which produces more false positives because it uses bright images only. The algorithm has been integrated into our autonomous vehicle and works robustly on real roads.
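The dual-channel pipeline above can be sketched as: propose candidate boxes from the dark frame with a saliency model, then classify each box's region in the paired bright frame and discard background detections. `saliency` and `classify` below are illustrative stand-ins for the paper's saliency model and CNN, not real implementations.

```python
def recognize_lights(dark_frame, bright_frame, saliency, classify):
    """Dual-channel sketch: detect color-saliency candidates in the
    low-exposure (dark) frame, then classify each candidate region in
    the high-exposure (bright) frame to reject false positives."""
    results = []
    for box in saliency(dark_frame):          # candidates from dark channel
        label = classify(bright_frame, box)   # context from bright channel
        if label != "background":             # CNN filters false positives
            results.append((box, label))
    return results
```

The division of labor mirrors the abstract: the dark frame gives unsaturated color for proposal, the bright frame gives the context the classifier needs.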

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core
  • Hard Disk : 120 GB
  • Monitor : 15" LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating System : Windows 7
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a / 2018

REFERENCE:

Jian-Gang Wang and Lu-Bing Zhou, "Traffic Light Recognition With High Dynamic Range Imaging and Deep Learning", IEEE Transactions on Intelligent Transportation Systems, 2019.

Recognizing Distractions for Assistive Driving by Tracking Body Parts

ABSTRACT:

Busy lifestyles and the prevalence of infotainment increasingly occupy people even during tasks that require serious attention. One such task is driving: engaging in other activities at the same time may cognitively distract drivers from watching the road and cause fatal accidents. This paper presents a method capable of monitoring different types of distraction, such as talking or texting on a cell phone, casual eating, and operating cabin equipment while driving, so that a driver can be assisted to remain cautious on the road. The proposed method automatically detects and tracks fiducial body parts of a driver in video captured by a camera mounted on the front windshield inside a vehicle. Relative distances between the tracking trajectories are used as features that represent the driver's actions. The well-known kernel support vector machine is then applied to recognize a particular distraction from the features extracted from body parts. The proposed feature is also compared with features previously employed in tracking-based human action recognition schemes, and achieves better mean accuracy and robustness for distraction recognition. The effectiveness of the proposed distraction recognition method is also analyzed with respect to tracking errors.
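The relative-distance feature described above can be sketched directly: for each frame, compute the pairwise Euclidean distances between the tracked body-part positions, yielding one feature vector per frame that a kernel SVM could then classify. The input layout (a dict of part name to trajectory) is an assumption for illustration.

```python
import math

def distance_features(parts):
    """Per-frame pairwise distances between tracked body-part points.
    `parts` maps part name -> list of (x, y) positions over time.
    Returns one feature vector (all pairwise distances) per frame."""
    names = sorted(parts)  # fixed ordering so features are comparable
    feats = []
    for t in range(len(parts[names[0]])):
        frame = []
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                (x1, y1) = parts[names[i]][t]
                (x2, y2) = parts[names[j]][t]
                frame.append(math.hypot(x1 - x2, y1 - y2))
        feats.append(frame)
    return feats
```

Relative distances are translation-invariant by construction, which is one reason they tolerate camera placement differences better than raw coordinates.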

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core
  • Hard Disk : 120 GB
  • Monitor : 15" LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating System : Windows 7
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a / 2018

REFERENCE:

Tashrif Billah, S. M. Mahbubur Rahman, M. Omair Ahmad, and M. N. S. Swamy, "Recognizing Distractions for Assistive Driving by Tracking Body Parts", IEEE Transactions on Circuits and Systems for Video Technology, 2019.

Real-Time Traffic Sign Recognition Based on Efficient CNNs in the Wild

ABSTRACT:

Both unmanned vehicles and driver assistance systems require solving the problem of traffic sign recognition. Much work has been done in this area, but until now no approach has performed the task with both high accuracy and high speed under various conditions. In this paper, we design and implement a detector by adopting the Faster R-CNN framework and the MobileNet structure. Color and shape information are used to refine the localization of small traffic signs, which are not easy to regress precisely. Finally, an efficient CNN with asymmetric kernels is used as the classifier of traffic signs. Both the detector and the classifier have been trained on challenging public benchmarks. The results show that the proposed detector can detect all categories of traffic signs, and both the detector and the classifier are shown to be superior to state-of-the-art methods. Our code and results are available online.
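One standard motivation for asymmetric kernels is parameter efficiency: a k×k convolution can be approximated by a 1×k convolution followed by a k×1 convolution, cutting the weight count. The counting sketch below illustrates that trade-off; keeping the channel count constant through the pair is an assumption, not necessarily the paper's design.

```python
def conv_params(k_h, k_w, c_in, c_out):
    """Weight count of a convolution with a k_h x k_w kernel (no bias)."""
    return k_h * k_w * c_in * c_out

def asymmetric_params(k, c_in, c_out):
    """Replace one k x k convolution with a 1 x k then k x 1 pair,
    assuming the intermediate channel count equals c_out."""
    return conv_params(1, k, c_in, c_out) + conv_params(k, 1, c_out, c_out)
```

For a 3×3 layer with 64 input and output channels, the symmetric version needs 36,864 weights while the asymmetric pair needs 24,576, a one-third reduction that directly helps the real-time goal stated above.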

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core
  • Hard Disk : 120 GB
  • Monitor : 15" LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating System : Windows 7
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a / 2018

REFERENCE:

Jia Li and Zengfu Wang, "Real-Time Traffic Sign Recognition Based on Efficient CNNs in the Wild", IEEE Transactions on Intelligent Transportation Systems, 2019.

On-line vehicle detection at nighttime-based tail-light pairing with saliency detection in the multi-lane intersection

ABSTRACT:

This study proposes a nighttime vehicle detection method for multi-lane intersections based on saliency detection for traffic surveillance systems. First, a frame-difference method is applied to detect moving objects, and all rear lights of vehicles are extracted based on a saliency map and colour information. Second, vehicles are detected by pairing off the lamps, through steps such as rechecking tail-lamp pairs using prior knowledge, eliminating re-paired tail-lamps on the same vehicle, and removing paired lamps that span two lanes. Furthermore, to detect vehicles that have only a single valid tail-lamp, an approach for validating a virtual tail-lamp is investigated. Finally, by comparison with other detection methods, the proposed method is verified to be more reliable and faster for nighttime vehicle detection, satisfying the real-time requirements of a vehicle detection system with good performance.
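The pairing step above typically relies on geometric priors: two lamps on the same vehicle are roughly at the same height and separated by a plausible horizontal gap. The greedy sketch below illustrates that idea; the thresholds are made-up values, not the paper's calibrated ones.

```python
def pair_lamps(lamps, max_dy=5, min_dx=20, max_dx=200):
    """Greedily pair detected rear-lamp centers (cx, cy) by vertical
    alignment and a plausible horizontal gap. Returns index pairs;
    each lamp is used at most once."""
    pairs, used = [], set()
    for i, (x1, y1) in enumerate(lamps):
        for j in range(i + 1, len(lamps)):
            if i in used or j in used:
                continue
            x2, y2 = lamps[j]
            if abs(y1 - y2) <= max_dy and min_dx <= abs(x1 - x2) <= max_dx:
                pairs.append((i, j))
                used.update((i, j))
    return pairs
```

Lamps left unpaired after this step are the single-tail-lamp cases for which the abstract's virtual tail-lamp validation would be invoked.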

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core
  • Hard Disk : 120 GB
  • Monitor : 15" LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating System : Windows 7
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a / 2018

REFERENCE:

Fei Gao, Yisu Ge, Shufang Lu, and Yuanming Zhang, "On-line vehicle detection at nighttime-based tail-light pairing with saliency detection in the multi-lane intersection", IET Intelligent Transport Systems, 2019.

Fast approach for efficient vehicle counting

ABSTRACT:

Systems for counting vehicles should be fast enough for real-time deployment. Most related work uses two stages for vehicle counting, vehicle detection and tracking, which increases the computational complexity. In this Letter, a fast and efficient approach for vehicle counting is proposed that needs no vehicle tracking step. A background model is created only for a narrow region, a line, in the video frames. Moving vehicles are detected as foreground objects while passing this narrow region. Morphological operations are applied to the extracted objects to enhance them and reduce the effects of vehicle occlusions. Finally, an efficient vehicle counting method is introduced that employs only the extracted detection information. Experimental results on diverse videos show that the proposed method is fast and accurate, with an average execution time of 7.78 ms per frame.
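The tracking-free counting idea reduces to edge detection on a one-bit signal: the detection line is either covered by foreground or not in each frame, and every transition from uncovered to covered is one vehicle. The sketch below shows that core logic for a single lane; per-lane segmentation of the line is left out for brevity.

```python
def count_vehicles(line_states):
    """Count vehicles from the per-frame foreground state of the
    detection line (True while a vehicle covers it). Each False->True
    transition is one vehicle, so no tracking step is needed."""
    count, prev = 0, False
    for state in line_states:
        if state and not prev:
            count += 1
        prev = state
    return count
```

Because only the line region is background-modeled and only transitions are counted, the per-frame work is constant, which is consistent with the millisecond-level execution time reported above.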

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core
  • Hard Disk : 120 GB
  • Monitor : 15" LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating System : Windows 7
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a / 2018

REFERENCE:

M. A. Abdelwahab, "Fast approach for efficient vehicle counting", IEEE, 2019.

Driver Activity Recognition for Intelligent Vehicles: A Deep Learning Approach

ABSTRACT:

Driver decisions and behaviors are essential factors that can affect driving safety. To understand driver behaviors, a driver activity recognition system is designed based on deep convolutional neural networks (CNN) in this study. Specifically, seven common driving activities are identified: normal driving, right mirror checking, rear mirror checking, left mirror checking, using an in-vehicle radio device, texting, and answering the mobile phone. The first four are regarded as normal driving tasks, while the remaining three are classified into the distraction group. The experimental images are collected using a low-cost camera, and ten drivers are involved in the naturalistic data collection. The raw images are segmented using a Gaussian mixture model (GMM) to extract the driver's body from the background before training the behavior recognition CNN model. To reduce the training cost, a transfer learning method is applied to fine-tune the pre-trained CNN models. Three different pre-trained CNN models, namely AlexNet, GoogLeNet, and ResNet50, are adopted and evaluated. The detection results for the seven tasks achieve an average accuracy of 81.6% with AlexNet, and 78.6% and 74.9% with GoogLeNet and ResNet50, respectively. The CNN models are then trained for the binary classification task of identifying whether the driver is distracted or not. The binary detection rate achieves 91.4% accuracy, which shows the advantage of the proposed deep learning approach. Finally, real-world applications are analysed and discussed.
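The grouping of the seven activities into the binary distracted/normal task described above can be written down directly; the label strings below follow the abstract's wording.

```python
# The four activities the abstract treats as normal driving tasks.
NORMAL = {"normal driving", "right mirror checking",
          "rear mirror checking", "left mirror checking"}

def to_binary(activity):
    """Map one of the seven recognized activities to the binary
    normal / distracted label used for the second classifier."""
    return "normal" if activity in NORMAL else "distracted"
```

Collapsing seven classes into two in this way is also why the binary detection rate (91.4%) exceeds the seven-class accuracies: confusions inside each group no longer count as errors.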

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core
  • Hard Disk : 120 GB
  • Monitor : 15" LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating System : Windows 7
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a / 2018

REFERENCE:

Yang Xing, Chen Lv, Huaji Wang, Dongpu Cao, Efstathios Velenis, and Fei-Yue Wang, "Driver Activity Recognition for Intelligent Vehicles: A Deep Learning Approach", IEEE, 2019.

Compressed-domain Highway Vehicle Counting By Spatial and Temporal Regression

ABSTRACT:

Counting on-road vehicles on the highway is fundamental for intelligent transportation management. This paper presents the first highway vehicle counting method in the compressed domain, aiming to achieve estimation performance comparable with pixel-domain methods. Counting in the compressed domain is rather challenging due to the limited information about vehicles and the large variance in vehicle numbers. To address this problem, we develop new low-level features to mitigate the challenge of insufficient information in compressed videos. The proposed features can be easily extracted from the coding-related metadata. We then propose a Hierarchical Classification based Regression (HCR) model to estimate the number of vehicles in each frame from the compressed-domain low-level features. HCR hierarchically divides the traffic scenes into different cases according to vehicle density, so that the large variance of traffic scenes can be effectively captured. Besides the spatial regression in each frame, we propose a locally temporal regression model that exploits the continuous variation of traffic flow to further refine the counting results. We extensively evaluate the proposed method on real highway surveillance videos. The experimental results consistently show that the proposed method is very competitive with pixel-domain methods, reaching similar performance at much lower computational cost.
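The classify-then-regress structure of HCR can be sketched as: first pick the density case for a frame, then apply that case's regressor to the compressed-domain features. The linear regressors and the density classifier below are illustrative stand-ins, not the paper's trained models or features.

```python
def hcr_estimate(features, classify_density, regressors):
    """Hierarchical Classification based Regression sketch: select the
    density case for the frame, then apply that case's regression model
    (here a simple linear model (weights, bias)) to the features."""
    case = classify_density(features)   # e.g. "sparse" vs. "dense"
    w, b = regressors[case]
    return sum(wi * fi for wi, fi in zip(w, features)) + b
```

Splitting by density case first means each regressor only has to fit a narrow regime, which is how the model copes with the large variance in vehicle numbers noted above; the temporal regression would then smooth these per-frame estimates along the sequence.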

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core
  • Hard Disk : 120 GB
  • Monitor : 15" LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating System : Windows 7
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a / 2018

REFERENCE:

Zilei Wang, Xu Liu, Jiashi Feng, Jian Yang, and Hongsheng Xi, "Compressed-domain Highway Vehicle Counting By Spatial and Temporal Regression", IEEE Transactions on Circuits and Systems for Video Technology, 2019.