Abstract
The spread of misinformation and fake news raises important problems for our society and our democracy. From the January 6 attack on the U.S. Capitol to vaccine hesitancy, from suppressing voter turnout to peddling conspiracy theories, we know that these problems are real and need to be taken seriously. While misinformation is not a new problem for democracy, it can spread more quickly and easily because of new media’s design and popularity. Given these problems, it is encouraging that some technology companies are taking steps to reduce the spread of misinformation and fake news on the platforms they manage. Despite this seemingly positive development, some scholars have criticized certain interventions designed to combat the spread of misinformation and fake news as paternalistic. For example, a 2019 Facebook intervention called Click-Gap aimed to reduce the amount of low-quality content (including fake news and misinformation) that users see in their News Feed. Click-Gap has been criticized as an instance of epistemic paternalism because it was adopted (1) with the goal of improving the epistemic status of its users and (2) irrespective of what the company believed the wishes of its users to be. If interventions like Click-Gap are problematic because they are paternalistic, those of us interested in the ethics of technology face a dilemma: either endorse technology companies treating their users paternalistically, or endorse their failing to act against the spread of misinformation and fake news on their platforms. Both options seem to me problematic. While paternalism may sometimes be permissible, I think we should be very hesitant to endorse a paternalistic relationship between technology companies and their users; the relationship does not seem to have the structure of one in which paternalism might be appropriate, if it ever is. The second option seems, if anything, worse: surely technology companies should not stand by and change nothing about their platforms while misinformation and fake news spread in those spaces. In this paper, I argue that Click-Gap and interventions like it are not paternalistic, contrary to the conclusions of other scholars. Further, I argue that the focus on paternalism is itself a red herring here. While not just any intervention or strategy that purports to reduce fake news and misinformation is permissible, we should want technology companies to take user well-being seriously and to be able to take that well-being as a direct reason for action. Their doing so is neither paternalistic nor even morally problematic, and it should not be criticized as such.
Publisher
Springer Science and Business Media LLC