Sentiment Analysis and Classification of Hotel Opinions in Twitter With the Transformer Architecture

  1. Sergio Arroni 1
  2. Yeray Galán 1
  3. Xiomarah Guzmán-Guzmán 1
  4. Edward Rolando Núñez-Valdez 1
  5. Alberto Gómez 1
  1 Universidad de Oviedo, Oviedo, Spain. ROR: https://ror.org/006gksa02

Journal: IJIMAI

ISSN: 1989-1660

Year of publication: 2023

Issue title: Special Issue on AI-driven Algorithms and Applications in the Dynamic and Evolving Environments

Volume: 8

Number: 1

Pages: 53-63

Type: Article

DOI: 10.9781/IJIMAI.2023.02.005

Abstract

Sentiment analysis is of great importance to parties interested in analyzing public opinion in social networks. In recent years, deep learning, and particularly attention-based architectures, have taken over the field, to the point where most research in Natural Language Processing (NLP) has shifted towards the development of ever larger attention-based transformer models. However, those models are designed to be all-purpose NLP models, so for a concrete, smaller problem, a reduced model studied specifically for that task can perform better. We propose a simpler attention-based model that uses the transformer architecture to predict the sentiment expressed in tweets about hotels in Las Vegas. From the predicted sentiment, we rank the hotels and compare the similarity of our ranking to the actual TripAdvisor ranking against the rankings obtained by more rudimentary sentiment analysis approaches, outperforming them with a Spearman correlation coefficient of 0.64121. We also compare our performance to DistilBERT, obtaining faster and more accurate results and showing that a model designed for a particular problem can perform better than models with several million trainable parameters.
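To make the approach concrete, the sketch below shows a small transformer-encoder sentiment classifier in Keras, the framework cited in the references. It is a minimal illustration of the kind of compact, task-specific model the abstract argues for, not the authors' exact architecture: every hyperparameter (vocabulary size, sequence length, embedding dimension, number of heads) and the three-class output scheme are assumptions.

```python
# Minimal sketch (not the paper's exact model) of a compact
# transformer-encoder classifier for tweet sentiment.
# All hyperparameters below are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers


class TokenAndPositionEmbedding(layers.Layer):
    """Sum learned token embeddings with learned position embeddings."""

    def __init__(self, maxlen, vocab_size, embed_dim):
        super().__init__()
        self.token_emb = layers.Embedding(input_dim=vocab_size, output_dim=embed_dim)
        self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=embed_dim)

    def call(self, x):
        positions = tf.range(start=0, limit=tf.shape(x)[-1], delta=1)
        return self.token_emb(x) + self.pos_emb(positions)


MAX_LEN = 64         # assumed maximum tweet length in tokens
VOCAB_SIZE = 20_000  # assumed tokenizer vocabulary size
EMBED_DIM = 64
NUM_HEADS = 4
FF_DIM = 128

inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
x = TokenAndPositionEmbedding(MAX_LEN, VOCAB_SIZE, EMBED_DIM)(inputs)

# One encoder block: multi-head self-attention plus a position-wise
# feed-forward network, each with a residual connection followed by
# layer normalization, as in the original transformer encoder.
attn = layers.MultiHeadAttention(num_heads=NUM_HEADS, key_dim=EMBED_DIM)(x, x)
x = layers.LayerNormalization(epsilon=1e-6)(x + attn)
ffn = layers.Dense(FF_DIM, activation="relu")(x)
ffn = layers.Dense(EMBED_DIM)(ffn)
x = layers.LayerNormalization(epsilon=1e-6)(x + ffn)

# Pool the token representations and classify into an assumed
# negative / neutral / positive scheme.
x = layers.GlobalAveragePooling1D()(x)
x = layers.Dropout(0.1)(x)
outputs = layers.Dense(3, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

The ranking comparison reported in the abstract boils down to a Spearman correlation between two rankings of the same hotels; one plausible way to compute it is with scipy.stats.spearmanr (the rank values here are made up for illustration).

```python
from scipy.stats import spearmanr

# Hypothetical positions of the same hotels in the model's ranking
# and in the TripAdvisor ranking (values are made up).
predicted_rank = [1, 2, 3, 4, 5, 6]
tripadvisor_rank = [2, 1, 3, 5, 4, 6]

rho, _ = spearmanr(predicted_rank, tripadvisor_rank)
print(f"Spearman correlation: {rho:.5f}")
```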

References

  • K. Fukushima, “Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position,” Biological Cybernetics, vol. 36, no. 4, pp. 193–202, Apr. 1980, doi: 10.1007/BF00344251.
  • D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature, vol. 323, no. 6088, pp. 533–536, 1986, doi: 10.1038/323533a0.
  • D. W. Otter, J. R. Medina, and J. K. Kalita, “A Survey of the Usages of Deep Learning for Natural Language Processing,” IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 2, pp. 604–624, Feb. 2021, doi: 10.1109/TNNLS.2020.2979670.
  • A. Vaswani et al., “Attention is All you Need,” Advances in Neural Information Processing Systems, vol. 30, 2017.
  • J. Devlin, M. W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, vol. 1, pp. 4171–4186, 2019.
  • V. Sanh, L. Debut, J. Chaumond, and T. Wolf, “DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter,” arXiv preprint arXiv:1910.01108, Oct. 2019.
  • S. Smith et al., “Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model,” arXiv preprint arXiv:2201.11990, 2022.
  • K. Philander and Y. Y. Zhong, “Twitter sentiment analysis: Capturing sentiment from integrated resort tweets,” International Journal of Hospitality Management, vol. 55, pp. 16–24, May 2016, doi: 10.1016/J.IJHM.2016.02.001.
  • S. Barke, R. Kunkel, N. Polikarpova, E. Meinhardt, E. Baković, and L. Bergen, “Constraint-based Learning of Phonological Processes,” 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Proceedings of the Conference, pp. 6176–6186, 2019, doi: 10.18653/V1/D19-1639.
  • O. Güngör, T. Güngör, and S. Uskudarli, “EXSEQREG: Explaining sequence-based NLP tasks with regions with a case study using morphological features for named entity recognition,” PLoS One, vol. 15, no. 12, Dec. 2020, doi: 10.1371/journal.pone.0244179.
  • E. M. Ponti, A. Korhonen, R. Reichart, and I. Vulić, “Isomorphic transfer of syntactic structures in cross-lingual NLP,” 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers), vol. 1, pp. 1531–1542, 2018, doi: 10.18653/V1/P18-1142.
  • C. Hutto and E. Gilbert, “VADER: A Parsimonious Rule-Based Model for Sentiment Analysis of Social Media Text,” Proceedings of the International AAAI Conference on Web and Social Media, vol. 8, no. 1, pp. 216–225, 2014.
  • P. Chikersal, S. Poria and E. Cambria, “SeNTU: Sentiment Analysis of Tweets by Combining a Rule-based Classifier with Supervised Learning,” Proceedings of the 9th international workshop on semantic evaluation, pp. 647–651, 2015.
  • F. Wunderlich and D. Memmert, “Innovative Approaches in Sports Science—Lexicon-Based Sentiment Analysis as a Tool to Analyze Sports Related Twitter Communication,” Applied Sciences, vol. 10, no. 2, p. 431, Jan. 2020, doi: 10.3390/APP10020431.
  • S. Lai, L. Xu, K. Liu and J. Zhao, “Recurrent convolutional neural networks for text classification,” Proceedings of the AAAI conference on artificial intelligence, vol. 29, no. 1, 2015.
  • H. Kim and Y. S. Jeong, “Sentiment Classification Using Convolutional Neural Networks,” Applied Sciences, vol. 9, no. 11, p. 2347, Jun. 2019, doi: 10.3390/APP9112347.
  • X. Fang and J. Zhan, “Sentiment analysis using product review data,” Journal of Big Data, vol. 2, no. 1, pp. 1–14, Dec. 2015, doi: 10.1186/S40537-015-0015-2.
  • M. Imran, P. Mitra, and C. Castillo, “Twitter as a Lifeline: Human-annotated Twitter Corpora for NLP of Crisis-related Messages,” Proceedings of the 10th International Conference on Language Resources and Evaluation, pp. 1638–1643, May 2016, doi: 10.48550/arxiv.1605.05894.
  • X. Liu, H. Shin, and A. C. Burns, “Examining the impact of luxury brand’s social media marketing on customer engagement: Using big data analytics and natural language processing,” Journal of Business Research, vol. 125, pp. 815–826, Mar. 2021, doi: 10.1016/J.JBUSRES.2019.04.042.
  • F. Z. Xing, E. Cambria, and R. E. Welsch, “Natural language based financial forecasting: a survey,” Artificial Intelligence Review, vol. 50, no. 1, pp. 49–73, Oct. 2017, doi: 10.1007/S10462-017-9588-9.
  • M. G. Huddar, S. S. Sannakki, and V. S. Rajpurohit, “Attention-based multi-modal sentiment analysis and emotion detection in conversation using RNN,” International Journal of Interactive Multimedia and Artificial Intelligence, vol. 6, no. 6, pp. 112–121, 2021, doi: 10.9781/ijimai.2020.07.004.
  • P. Dcunha, “Aspect Based Sentiment Analysis and Feedback Ratings using Natural Language Processing on European Hotels,” Doctoral thesis, National College of Ireland, Dublin, 2019.
  • T. Ghorpade and L. Ragha, “Featured based sentiment classification for hotel reviews using NLP and Bayesian classification,” Proceedings - 2012 International Conference on Communication, Information and Computing Technology, 2012, doi: 10.1109/ICCICT.2012.6398136.
  • B.-Ş. Posedaru, T.-M. Georgescu, and F.-V. Pantelimon, “Natural Learning Processing based on Machine Learning Model for automatic analysis of Online Reviews related to Hotels and Resorts,” Database Systems Journal, vol. 11, no. 1, pp. 86–105, 2020.
  • W. Medhat, A. Hassan, and H. Korashy, “Sentiment analysis algorithms and applications: A survey,” Ain Shams Engineering Journal, vol. 5, no. 4, pp. 1093–1113, Dec. 2014, doi: 10.1016/J.ASEJ.2014.04.011.
  • L. C. Yu, J. L. Wu, P. C. Chang, and H. S. Chu, “Using a contextual entropy model to expand emotion words and their intensity for the sentiment classification of stock market news,” Knowledge-Based Systems, vol. 41, pp. 89–97, Mar. 2013, doi: 10.1016/J.KNOSYS.2013.01.001.
  • M. Hagenau, M. Liebmann, and D. Neumann, “Automated news reading: Stock price prediction based on financial news using context-capturing features,” Decision Support Systems, vol. 55, no. 3, pp. 685–697, Jun. 2013, doi: 10.1016/J.DSS.2013.02.006.
  • I. Maks and P. Vossen, “A lexicon model for deep sentiment analysis and opinion mining applications,” Decision Support Systems, vol. 53, no. 4, pp. 680–688, Nov. 2012, doi: 10.1016/J.DSS.2012.05.025.
  • J. Wang et al., “Systematic Evaluation of Research Progress on Natural Language Processing in Medicine Over the Past 20 Years: Bibliometric Study on PubMed,” Journal of Medical Internet Research, vol. 22, no. 1, Jan. 2020, doi: 10.2196/16816.
  • A. Alsudais, G. Leroy, and A. Corso, “We know where you are tweeting from: Assigning a type of place to tweets using natural language processing and random forests,” Proceedings - 2014 IEEE International Congress on Big Data, pp. 594–600, Sep. 2014, doi: 10.1109/BIGDATA.CONGRESS.2014.91.
  • Y. Goldberg and M. Elhadad, “splitSVM: Fast, Space-Efficient, non-Heuristic, Polynomial Kernel Computation for NLP Applications,” Proceedings of ACL-08: HLT, Short Papers, pp. 237–240, Jun. 2008.
  • C. Bartz, T. Herold, H. Yang, and C. Meinel, “Language Identification Using Deep Convolutional Recurrent Neural Networks,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 10639, pp. 880–889, 2017, doi: 10.1007/978-3-319-70136-3_93.
  • Y. LeCun, K. Kavukcuoglu, and C. Farabet, “Convolutional networks and applications in vision,” 2010 IEEE International Symposium on Circuits and Systems: Nano-Bio Circuit Fabrics and Systems, pp. 253–256, 2010, doi: 10.1109/ISCAS.2010.5537907.
  • A. Conneau, H. Schwenk, Y. LeCun, and L. Barrault, “Very Deep Convolutional Networks for Text Classification,” arXiv preprint arXiv:1606.01781, Jun. 2016, doi: 10.48550/arxiv.1606.01781.
  • S. Hochreiter and J. Schmidhuber, “Long Short-Term Memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, Nov. 1997, doi: 10.1162/NECO.1997.9.8.1735.
  • T. Wang, P. Chen, K. Amaral, and J. Qiang, “An Experimental Study of LSTM Encoder-Decoder Model for Text Simplification,” arXiv preprint arXiv:1609.03663, Sep. 2016.
  • K. Cho, B. van Merriënboer, D. Bahdanau, and Y. Bengio, “On the Properties of Neural Machine Translation: Encoder-Decoder Approaches,” 8th Workshop on Syntax, Semantics and Structure in Statistical Translation, pp. 103–111, Sep. 2014, doi: 10.3115/v1/w14-4012.
  • Z. Shaheen, G. Wohlgenannt, and E. Filtz, “Large Scale Legal Text Classification Using Transformer Models,” arXiv preprint arXiv:2010.12871, 2020.
  • D. Bahdanau, K. H. Cho, and Y. Bengio, “Neural Machine Translation by Jointly Learning to Align and Translate,” 3rd International Conference on Learning Representations, Sep. 2015, doi: 10.48550/arxiv.1409.0473.
  • T. Shao, Y. Guo, H. Chen, and Z. Hao, “Transformer-Based Neural Network for Answer Selection in Question Answering,” IEEE Access, vol. 7, pp. 26146–26156, 2019, doi: 10.1109/ACCESS.2019.2900753.
  • U. Khandelwal, K. Clark, D. Jurafsky, and Ł. Kaiser, “Sample Efficient Text Summarization Using a Single Pre-Trained Transformer,” arXiv preprint arXiv:1905.08836, 2019.
  • T. Wang, X. Wan, and H. Jin, “AMR-to-text generation with graph transformer,” Transactions of the Association for Computational Linguistics, vol. 8, pp. 19–33, Jan. 2020, doi: 10.1162/TACL_A_00297.
  • S. Hochreiter, “The vanishing gradient problem during learning recurrent neural nets and problem solutions,” International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 6, no. 2, pp. 107–116, Apr. 1998, doi: 10.1142/S0218488598000094.
  • A. Gulati et al., “Conformer: Convolution-augmented Transformer for Speech Recognition,” Proceedings of the Annual Conference of the International Speech Communication Association, vol. 2020-October, pp. 5036–5040, 2020, doi: 10.21437/Interspeech.2020-3015.
  • A. Rives et al., “Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences,” Proceedings of the National Academy of Sciences of the United States of America, vol. 118, no. 15, Apr. 2021, doi: 10.1073/PNAS.2016239118.
  • T. Lin, Y. Wang, X. Liu, and X. Qiu, “A Survey of Transformers,” arXiv preprint arXiv:2106.04554, Jun. 2021.
  • T. Brown et al., “Language Models are Few-Shot Learners,” in Advances in Neural Information Processing Systems, 2020, vol. 33, pp. 1877–1901.
  • “Hotel Reviews - dataset by datafiniti | data.world.” https://data.world/datafiniti/hotel-reviews (accessed Feb. 11, 2022).
  • “amazon_reviews_multi · Datasets at Hugging Face.” https://huggingface.co/datasets/amazon_reviews_multi (accessed Feb. 11, 2022).
  • “Hotel Reviews - dataset by datafiniti.” https://data.world/datafiniti/hotel-reviews (accessed Mar. 23, 2022).
  • A. Joshi, S. Kale, S. Chandel, and D. K. Pal, “Likert Scale: Explored and Explained,” British Journal of Applied Science & Technology, vol. 7, no. 4, p. 396, 2015, doi: 10.9734/BJAST/2015/14975.
  • F. Chollet, “Keras: the Python deep learning API,” Astrophysics Source Code Library, 2018.
  • A. Belhadi, Y. Djenouri, J. C. W. Lin, and A. Cano, “A data-driven approach for twitter hashtag recommendation,” IEEE Access, vol. 8, pp. 79182–79191, 2020, doi: 10.1109/ACCESS.2020.2990799.
  • “Natural Language Processing with Transformers.” https://www.oreilly.com/library/view/natural-language-processing/9781098103231/ (accessed Feb. 11, 2022).
  • A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, “Improving Language Understanding by Generative Pre-Training,” OpenAI, 2018.
  • A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, “Language Models are Unsupervised Multitask Learners,” OpenAI, vol. 1, no. 8, p. 9, 2019.