Low-Cost Deep Learning-Based Prototype for Automatic Identification of Traffic Signs in Vehicles

  1. González, Enol García 1
  2. Villar, José R. 1
  3. de la Cal, Enrique 1

  1. Universidad de Oviedo, Oviedo, Spain (ROR: https://ror.org/006gksa02)

Book:
16th International Conference on Soft Computing Models in Industrial and Environmental Applications (SOCO 2021)

ISSN: 2194-5357, 2194-5365

ISBN: 9783030878689, 9783030878696

Year of publication: 2021

Pages: 91-100

Type: Book chapter

DOI: 10.1007/978-3-030-87869-6_9

Abstract

In recent years the automotive industry has evolved through new driving assistance systems that move it towards autonomous driving, in which the role of the driver becomes progressively less relevant until a person at the wheel is no longer necessary. Continuing the development of autonomous cars requires information about the vehicle's surroundings. As a contribution to this evolution, a low-cost prototype is presented that can be installed in any type of car, captures driving images with a camera, and extracts information from the traffic signs on the road. This information can serve as input to another system which, using the already processed sign information, can make further driving decisions. The prototype has been built and implemented using a set of five Deep Learning classifiers. As a result, a prototype capable of detecting and classifying traffic signs with an accuracy above 90% has been obtained. However, the prototype is not yet usable in practice because it is not adapted to all environmental conditions, for which future studies are proposed.
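The chapter does not reproduce its source code, so the following is only a minimal Python sketch of how a camera-fed ensemble of five classifiers like the one summarised above could be wired together. The stub classifiers, the majority-vote combination rule, the 64×64 input size, the class count and the camera index are all illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch only: stub classifiers, majority vote, 64x64 input,
# class count and camera index are assumptions, not details from the paper.
import cv2
import numpy as np


def make_stub_classifier(seed: int):
    """Hypothetical stand-in for one of the five trained deep learning classifiers."""
    rng = np.random.default_rng(seed)

    def classify(frame: np.ndarray) -> tuple[int, float]:
        # A real model would map the frame to a (sign_class, confidence) pair.
        return int(rng.integers(0, 43)), float(rng.random())  # class count is an assumption

    return classify


# Ensemble of five classifiers, as in the prototype described in the abstract.
classifiers = [make_stub_classifier(i) for i in range(5)]


def ensemble_predict(frame: np.ndarray) -> tuple[int, float]:
    """Combine the five predictions by majority vote (assumed combination rule)."""
    votes = [clf(frame) for clf in classifiers]
    labels = [label for label, _ in votes]
    winner = max(set(labels), key=labels.count)
    confidence = float(np.mean([c for label, c in votes if label == winner]))
    return winner, confidence


def main(max_frames: int = 100) -> None:
    cap = cv2.VideoCapture(0)  # in-vehicle camera, assumed to be device 0
    frames = 0
    while cap.isOpened() and frames < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, (64, 64))  # resize to the classifiers' assumed input size
        sign_class, confidence = ensemble_predict(frame)
        # In the prototype this output would be handed to the downstream
        # decision-making system rather than printed.
        print(f"sign class {sign_class} (confidence {confidence:.2f})")
        frames += 1
    cap.release()


if __name__ == "__main__":
    main()
```

In the actual prototype each stub would be replaced by a trained deep network, and the printed output would instead feed the downstream driving-decision system mentioned in the abstract.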

Funding information

This research has been funded by the Spanish Ministry of Science and Innovation under projects MINECO-TIN2017-84804-R and PID2020-112726RB-I00.
