Authors:
Wang Kai, Huang Jiahui, Liu Yuqi, Cao Bin, Fan Jing
Publisher:
Springer International Publishing
References: 31 articles.
1. Ahmed, H., Traore, I., Saad, S.: Detecting opinion spams and fake news using text classification. Secur. Priv. 1(1), e9 (2018)
2. Beltagy, I., Peters, M.E., Cohan, A.: Longformer: the long-document transformer. arXiv preprint arXiv:2004.05150 (2020)
3. Breiman, L.: Random forests. Mach. Learn. 45(1), 5–32 (2001)
4. Child, R., Gray, S., Radford, A., Sutskever, I.: Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509 (2019)
5. Dai, Z., Yang, Z., Yang, Y., Carbonell, J., Le, Q.V., Salakhutdinov, R.: Transformer-XL: attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860 (2019)
Cited by:
2 articles.
1. Effectiveness of Feature Selection in Text Summarization. 2023 Eleventh International Conference on Intelligent Computing and Information Systems (ICICIS), 2023-11-21
2. Simple Hack for Transformers Against Heavy Long-Text Classification on a Time- and Memory-Limited GPU Service. 2023 10th International Conference on Advanced Informatics: Concept, Theory and Application (ICAICTA), 2023-10-07