1. Al-Rakhami, M., Alamri, A.: Lies kill, facts save: detecting COVID-19 misinformation in Twitter. IEEE Access 8, 155961–155970 (2020). https://doi.org/10.1109/ACCESS.2020.3019600
2. Banda, J.M., et al.: A large-scale COVID-19 Twitter chatter dataset for open scientific research - an international collaboration. CoRR abs/2004.03688 (2020). https://arxiv.org/abs/2004.03688
3. Blei, D.M., Ng, A.Y., Jordan, M.I.: Latent Dirichlet allocation. J. Mach. Learn. Res. 3, 993–1022 (2003)
4. Cui, L., Lee, D.: CoAID: COVID-19 healthcare misinformation dataset. CoRR abs/2006.00885 (2020). https://arxiv.org/abs/2006.00885
5. Devlin, J., Chang, M., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Burstein, J., Doran, C., Solorio, T. (eds.) Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2–7, 2019, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics (2019). https://doi.org/10.18653/v1/n19-1423