A First Prototype of an Emotional Smart Speaker

  1. de la Cal, Enrique (1)
  2. Gallucci, Alberto
  3. Villar, Jose Ramón (1)
  4. Yoshida, Kaori
  5. Koeppen, Mario

  (1) Universidad de Oviedo, Oviedo, Spain. ROR: https://ror.org/006gksa02

Proceedings:
16th International Conference on Soft Computing Models in Industrial and Environmental Applications (SOCO 2021)

ISSN: 2194-5357, 2194-5365

ISBN: 978-3-030-87868-9, 978-3-030-87869-6

Year of publication: 2021

Pages: 304-313

Type: Conference contribution

DOI: 10.1007/978-3-030-87869-6_29 (open access, publisher version)

Abstract

Affective computing comprises the techniques devoted to identifying and understanding human emotions. The field spans many subtopics, among which Speech Emotion Recognition (SER) stands out. In the last two decades we have witnessed the birth and expansion of marketed products such as smart voice assistants and their associated autonomous smart speakers from Amazon, Google, and Apple. This work presents the design and implementation of a new Emotional Smart Speaker prototype based on the hybridisation of an Amazon Echo Dot device and a Raspberry Pi with a built-in low-power SER algorithm. The proposed SER algorithm is based on a Bag of Models method with two base models, an ExtraTrees algorithm and a pre-trained ResNet18 neural network. The proposal has been validated on four well-known SER datasets (EmoDB, TESS, SAVEE and RAVDESS), and the obtained model outperforms eleven well-known ML methods available in the literature for the studied public datasets.
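
To make the Bag of Models idea concrete, the sketch below shows one way such a two-model ensemble could be wired together in Python. It is an illustrative sketch, not the paper's exact pipeline: the hand-crafted features (MFCC statistics), the 224×224 log-mel spectrogram input, the seven-emotion label set, and the equal-weight soft voting are all assumptions, and both base models (an ExtraTrees classifier and an ImageNet-pretrained ResNet18 with a replaced head) are assumed to have been trained or fine-tuned beforehand.

```python
# Illustrative sketch of a two-model "Bag of Models" SER predictor:
# an ExtraTrees classifier over hand-crafted MFCC statistics, fused by
# soft voting with a pretrained ResNet18 applied to log-mel spectrograms.
import numpy as np
import librosa
import torch
import torch.nn.functional as F
from torchvision import models
from sklearn.ensemble import ExtraTreesClassifier

# Assumed label set; the tree model is expected to be fitted on integer
# labels 0..6 in this same order so the two probability vectors align.
EMOTIONS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness", "surprise"]

def tabular_features(path, sr=16000, n_mfcc=40):
    """Hand-crafted descriptor for the tree model: mean and std of each MFCC band."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def spectrogram_image(path, sr=16000, n_mels=128, size=224):
    """Log-mel spectrogram resized to a normalised 3-channel 224x224 tensor for ResNet18."""
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    logmel = librosa.power_to_db(mel, ref=np.max)
    img = torch.tensor(logmel, dtype=torch.float32)[None, None]   # (1, 1, n_mels, frames)
    img = F.interpolate(img, size=(size, size), mode="bilinear", align_corners=False)
    img = (img - img.mean()) / (img.std() + 1e-6)                  # per-clip normalisation
    return img.repeat(1, 3, 1, 1)                                  # (1, 3, 224, 224)

def build_resnet18(n_classes):
    """ImageNet-pretrained ResNet18 with its classification head replaced."""
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    net.fc = torch.nn.Linear(net.fc.in_features, n_classes)
    return net

def bag_of_models_predict(path, trees, resnet, w=0.5):
    """Soft-voting fusion: weighted average of the two class-probability vectors."""
    p_trees = trees.predict_proba([tabular_features(path)])[0]
    resnet.eval()
    with torch.no_grad():
        p_net = torch.softmax(resnet(spectrogram_image(path)), dim=1)[0].numpy()
    p = w * p_trees + (1 - w) * p_net
    return EMOTIONS[int(np.argmax(p))], p

# Usage (both base models trained elsewhere on the target dataset):
#   trees = ExtraTreesClassifier(n_estimators=300).fit(X_train, y_train)
#   resnet = build_resnet18(len(EMOTIONS))   # then fine-tune on spectrogram images
#   label, probs = bag_of_models_predict("sample.wav", trees, resnet)
```

Equal fusion weights are used here only for simplicity; in practice the weight would be tuned on validation data.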

References

  • Ahsan, M., Kumari, M.: Physical features based speech emotion recognition using predictive classification. Int. J. Comput. Sci. Inf. Technol. 8(2), 63–74 (2016)
  • Akçay, M.B., Oğuz, K.: Speech emotion recognition: emotional models, databases, features, preprocessing methods, supporting modalities, and classifiers. Speech Commun. 116, 56–76 (2020)
  • AlexaPi: Alexa for Raspberry Pi - Python API (2017). https://github.com/alexa-pi/AlexaPi/wiki/Audio-setup-&-debugging
  • Amazon Alexa: Official C++ Distribution of Alexa for Raspberry Pi. https://developer.amazon.com/en-US/docs/alexa/avs-device-sdk/raspberry-pi-script.html
  • Breiman, L.: Random forests. Mach. Learn. 45(1), 5–32 (2001)
  • Burkhardt, F., Paeschke, A., Rolfes, M., Sendlmeier, W., Weiss, B.: A database of German emotional speech. In: 9th European Conference on Speech Communication and Technology, pp. 1517–1520 (2005)
  • Chang, C.C., Lin, C.J.: LIBSVM: a Library for support vector machines. ACM Trans. Intell. Syst. Technol. 2(3), 1–27 (2011)
  • Fiberlogy: 3D Recycled material - Fiberlogy (2021). https://fiberlogy.com/en/fiberlogy-filaments/r-pla/
  • Friedman, J.H.: Greedy function approximation: a gradient boosting machine. Ann. Statist. 29(5), 1189–1232 (2001)
  • Geurts, P., Ernst, D., Wehenkel, L.: Extremely randomized trees. Mach. Learn. 63(1), 3–42 (2006)
  • Haq, S., Jackson, P.J.B.: Speaker-dependent audio-visual emotion recognition. In: Proceedings of the International Conference on Auditory-Visual Speech Processing (AVSP 2009), Norwich, UK (2009)
  • Haq, S., Jackson, P.J.B.: Multimodal emotion recognition. In: Wang, W. (ed.) Machine Audition: Principles, Algorithms and Systems, pp. 398–423. IGI Global, Hershey, PA (2010)
  • Haq, S., Jackson, P., Edge, J.: Audio-visual feature selection and reduction for emotion classification. Expert Syst. Appl. 39, 7420–7431 (2008)
  • Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Series in Statistics. Springer, New York
  • He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. CoRR abs/1512.03385 (2015). arXiv:1512.03385
  • Kingma, D.P., Ba, J.L.: Adam: a method for stochastic optimization. In: 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings. International Conference on Learning Representations, ICLR (2015)
  • Livingstone, S.R., Russo, F.A.: The Ryerson audio-visual database of emotional speech and song (RAVDESS): a dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE 13(5), e0196391 (2018)
  • Manning, C.D., Raghavan, P., Schuetze, H.: The Bernoulli model. In: Introduction to Information Retrieval, pp. 234–265 (2009)
  • Pichora-Fuller, M.K., Dupuis, K.: Toronto emotional speech set (TESS) (2020)
  • SpeechRecognition: Python Speech Recognition library, release 3.8.1 (2021). https://pypi.org/project/SpeechRecognition/
  • Sudharsan, B., Corcoran, P., Ali, M.I.: Smart speaker design and implementation with biometric authentication and advanced voice interaction capability. In: CEUR Workshop Proceedings, vol. 2563, pp. 305–316 (2019)
  • Sudharsan, B., Kumar, S.P., Dhakshinamurthy, R.: AI vision: smart speaker design and implementation with object detection custom skill and advanced voice interaction capability. In: Proceedings of the 11th International Conference on Advanced Computing, ICoAC 2019, pp. 97–102 (2019)
  • Van Erp, M., Vuurpijl, L., Schomaker, L.: An overview and comparison of voting methods for pattern recognition. In: Proceedings of the International Workshop on Frontiers in Handwriting Recognition (IWFHR), pp. 195–200 (2002)
  • Zhu, J., Zou, H., Rosset, S., Hastie, T.: Multi-class AdaBoost. Tech. rep. (2009)