Affiliation:
1. Johns Hopkins University, USA
2. Department of Defense, USA
3. Lawrence Livermore National Laboratory, USA
Abstract
Automatically disentangling an author’s style from the content of their writing is a longstanding and possibly insurmountable problem in computational linguistics. At the same time, the availability of large text corpora furnished with author labels has recently enabled learning authorship representations in a purely data-driven manner for authorship attribution, a task that ostensibly depends to a greater extent on encoding writing style than encoding content. However, success on this surrogate task does not ensure that such representations capture writing style since authorship could also be correlated with other latent variables, such as topic. In an effort to better understand the nature of the information these representations convey, and specifically to validate the hypothesis that they chiefly encode writing style, we systematically probe these representations through a series of targeted experiments. The results of these experiments suggest that representations learned for the surrogate authorship prediction task are indeed sensitive to writing style. As a consequence, authorship representations may be expected to be robust to certain kinds of data shift, such as topic drift over time. Additionally, our findings may open the door to downstream applications that require stylistic representations, such as style transfer.
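The abstract describes probing learned authorship representations to test whether they encode style rather than content. The paper's actual probing protocol is not given here; as a hedged, minimal illustration of the general idea, the sketch below (with hypothetical helper names `cosine` and `style_probe`, and toy vectors standing in for embeddings from any authorship encoder) checks whether same-author document pairs score higher on cosine similarity than cross-author pairs.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(y * y for y in v))
    return dot / (norm_u * norm_v)

def style_probe(same_author_pairs, cross_author_pairs):
    # Returns True if same-author pairs are, on average, more similar
    # than cross-author pairs -- one crude signal that the embedding
    # space groups documents by author rather than arbitrarily.
    same = sum(cosine(u, v) for u, v in same_author_pairs) / len(same_author_pairs)
    cross = sum(cosine(u, v) for u, v in cross_author_pairs) / len(cross_author_pairs)
    return same > cross

# Toy 2-d "embeddings": two documents by author A, one by author B.
a1, a2, b1 = [1.0, 0.1], [1.0, 0.2], [0.0, 1.0]
print(style_probe([(a1, a2)], [(a1, b1)]))
```

A real probe along the lines the abstract suggests would further control for topic (e.g. same-author pairs drawn from different topics) to separate stylistic signal from content overlap.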
Subject
Artificial Intelligence, Computer Science Applications, Linguistics and Language, Human-Computer Interaction, Communication
Cited by 2 articles.
1. Contrastive Disentanglement for Authorship Attribution;Companion Proceedings of the ACM Web Conference 2024;2024-05-13
2. Can Authorship Attribution Models Distinguish Speakers in Speech Transcripts?;Transactions of the Association for Computational Linguistics;2024