Abstract
I aim to illustrate how the recommender systems of digital platforms create a particularly problematic kind of vulnerability in their users. Specifically, drawing on theories of scaffolded cognition and scaffolded affectivity, I argue that a digital platform’s recommender system is a cognitive and affective artifact that fulfills different functions for the platform’s users and its designers. While it acts as a content provider and a facilitator of cognitive, affective, and decision-making processes for users, it also supplies platform designers with a continuous, detailed stream of information about users’ cognitive and affective processes. This dynamic, I argue, engenders a kind of vulnerability in platform users, structuring a power imbalance between designers and users: the recommender system not only gathers data on users’ cognitive and affective processes, but also affects them in an unprecedentedly economic and capillary manner. Examining one instance of ethically problematic practice at Facebook, I argue that digital platforms, especially through their underlying recommender systems, are not merely tools for manipulating or exploiting people; rather, by singling out and tampering with specific cognitive and affective processes, they can function as tools specifically designed for mind invasion. I conclude by reflecting on how understanding such AI systems as tools for mind invasion highlights some merits and shortcomings of the AI Act with regard to the protection of vulnerable people.
Publisher
Springer Science and Business Media LLC
Cited by
2 articles.