Automatic Detection of Vehicular Traffic Elements based on Deep Learning for Advanced Driving Assistance Systems

Laura Cleofas-Sánchez, Juan Pablo Francisco Posadas-Durán, Pedro Martínez-Ortiz, Gilberto Loyo-Desiderio, Eduardo Alberto Ruvalcaba-Hernández, Omar González Brito

Abstract


This paper presents a prototype of an automobile driver assistance system based on YOLOv3. The system detects vehicle types, traffic signs, and traffic lights in real time and warns the driver accordingly. In the learning phase, the YOLO network is initialized with its standard pretrained weights, and transfer learning is then applied to the objects of interest. The retraining phase uses 2,800 real-world road images from three countries obtained from the Internet, and the testing phase uses real-time videos of Mexico City roads. In the validation phase, the proposed system achieves detection performance of 95%, 37%, and 40% on the compiled dataset for the three classes of road elements. The results obtained are comparable to, and in some cases better than, those reported in previous works. Using a Raspberry Pi 4, the prototype was tested under real driving conditions, generating visual and audible warnings for the driver at an object recognition rate of 0.4 fps. The proposed system reached a mean average precision (mAP) of 53%. The experiments showed that the prototype achieved a low recognition rate and required considerable computational processing for object recognition; nevertheless, YOLO remains a model that can achieve good performance on low-resource hardware.
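
To illustrate the kind of pipeline the abstract describes, the following is a minimal sketch (not the authors' code) of running a retrained YOLOv3 model on video frames with OpenCV's DNN module, as it might be deployed on a Raspberry Pi 4. The configuration and weight file names, the class list, and the thresholds are assumptions, and the printed warning stands in for the prototype's visual and audible alerts.

# Minimal sketch, assuming a YOLOv3 model retrained by transfer learning on
# the road-element classes. File names, classes, and thresholds are illustrative.
import cv2
import numpy as np

CFG = "yolov3-custom.cfg"          # hypothetical config for the retrained network
WEIGHTS = "yolov3-custom.weights"  # hypothetical weights from transfer learning
CLASSES = ["car", "truck", "traffic_sign", "traffic_light"]  # assumed class list
CONF_THRESH, NMS_THRESH = 0.5, 0.4

net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
out_names = net.getUnconnectedOutLayersNames()

cap = cv2.VideoCapture(0)  # camera facing the road
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]

    # YOLOv3 expects a normalized square blob (416x416 is the standard input size).
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(out_names)

    boxes, confidences, class_ids = [], [], []
    for output in outputs:
        for det in output:
            scores = det[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence > CONF_THRESH:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                confidences.append(confidence)
                class_ids.append(class_id)

    # Non-maximum suppression removes overlapping duplicate detections.
    keep = cv2.dnn.NMSBoxes(boxes, confidences, CONF_THRESH, NMS_THRESH)
    for i in np.array(keep).flatten():
        x, y, bw, bh = boxes[i]
        label = CLASSES[class_ids[i]]
        cv2.rectangle(frame, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
        print(f"WARNING: {label} detected")  # placeholder for the visual/audible alert

    cv2.imshow("ADAS prototype", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()

On constrained hardware such as the Raspberry Pi 4, this CPU-bound loop would run at well under real-time frame rates, which is consistent with the 0.4 fps reported for the prototype.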

Keywords


YOLOv3, automobile detection assistance, object recognition, deep learning
