Image Classification Methods Applied in Immersive Environments for Fine Motor Skills Training in Early Education

  1. Paulo Alonso Gaona-García
  2. Carlos Enrique Montenegro-Marin
  3. Íñigo Sarría Martínez de Mendivil
  4. Andrés Ovidio Restrepo Rodríguez
  5. Maddyzeth Ariza Riaño
Journal:
IJIMAI

ISSN: 1989-1660

Year of publication: 2019

Volume: 5

Issue: 7

Pages: 151-158

Type: Article

DOI: 10.9781/IJIMAI.2019.10.004 (open access)

Abstract

Fine motor skills allow people to carry out crucial tasks in daily life, increasing their independence and self-esteem. Among the alternatives for training these skills are immersive environments, which provide a set of elements arranged to deliver a haptic experience through gesture-control devices. However, these environments generally lack a mechanism for evaluating the exercise performed and providing feedback, which makes it difficult to determine whether the objective has been fulfilled. For this reason, this study compares image recognition methods, namely Convolutional Neural Network (CNN), K-Nearest Neighbor (K-NN), Support Vector Machine (SVM) and Decision Tree (DT), for the purpose of evaluating exercises and providing feedback. The techniques are assessed using images captured from an immersive environment, computing metrics such as the confusion matrix, cross-validation scores and the classification report. As a result of this process, the CNN model showed the best performance, with 82.5% accuracy, an increase of 23.5% over SVM, 30% over K-NN and 25% over DT. Finally, it is concluded that, in order to implement an evaluation and feedback method in an immersive environment for academic training in the first school years, the image recognition technique must keep a low margin of error in its success rate, to ensure the proper development of these skills given their great importance in childhood.
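
The evaluation pipeline summarized in the abstract (several classifiers assessed with cross-validation, a confusion matrix and a classification report over images captured from the immersive environment) can be illustrated with a short sketch. The snippet below is a minimal illustration of that kind of comparison using scikit-learn and Keras, not the authors' code: `load_environment_images` is a hypothetical placeholder for the captured image data, and the CNN architecture and all hyperparameters are illustrative assumptions rather than the values used in the paper.

```python
# Minimal sketch of a CNN / SVM / K-NN / DT comparison over image data,
# evaluated with cross-validation, a confusion matrix and a classification
# report. Data loading is a placeholder; real images from the immersive
# environment would replace the random arrays below.
import numpy as np
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report, confusion_matrix
from tensorflow import keras

def load_environment_images():
    # Hypothetical stand-in for the captured dataset: 200 grayscale
    # 32x32 images and 4 exercise classes.
    rng = np.random.default_rng(0)
    X = rng.random((200, 32, 32))
    y = rng.integers(0, 4, size=200)
    return X, y

X, y = load_environment_images()
X_flat = X.reshape(len(X), -1)  # classical models take flattened vectors
X_train, X_test, y_train, y_test = train_test_split(
    X_flat, y, test_size=0.2, random_state=42, stratify=y)

# Classical classifiers: 5-fold cross-validation plus held-out test metrics.
models = {
    "SVM": SVC(kernel="rbf"),
    "K-NN": KNeighborsClassifier(n_neighbors=5),
    "DT": DecisionTreeClassifier(max_depth=10),
}
for name, model in models.items():
    scores = cross_val_score(model, X_train, y_train, cv=5)
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    print(f"{name}: CV accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
    print(confusion_matrix(y_test, y_pred))
    print(classification_report(y_test, y_pred, zero_division=0))

# Small CNN (illustrative architecture, trained on the 2D image tensors).
cnn = keras.Sequential([
    keras.layers.Input((32, 32, 1)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(4, activation="softmax"),
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
cnn.fit(X_train.reshape(-1, 32, 32, 1), y_train, epochs=10, verbose=0)
loss, acc = cnn.evaluate(X_test.reshape(-1, 32, 32, 1), y_test, verbose=0)
print(f"CNN: test accuracy = {acc:.3f}")
```

On real data, the accuracies printed side by side would support the kind of comparison reported in the abstract; with the random placeholder arrays above, all scores sit near chance, which is expected.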
