Author: Ali Boluki, Javad Pourmostafa Roshan Sharami, Dimitar Shterionov

Publisher: Springer Nature Switzerland