Affiliation:
1. Stanford University, Stanford, CA, USA
2. Georgia Institute of Technology, Atlanta, GA, USA
3. University of Pennsylvania, Philadelphia, PA, USA
Abstract
Algorithm audits are powerful tools for studying black-box systems without direct knowledge of their inner workings. While very effective in examining technical components, the method stops short of a sociotechnical frame, which would also consider users themselves as an integral and dynamic part of the system. Addressing this limitation, we propose the concept of sociotechnical auditing: auditing methods that evaluate algorithmic systems at the sociotechnical level, focusing on the interplay between algorithms and users as each impacts the other. Just as algorithm audits probe an algorithm with varied inputs and observe outputs, a sociotechnical audit (STA) additionally probes users, exposing them to different algorithmic behavior and measuring their resulting attitudes and behaviors. As an example of this method, we develop Intervenr, a platform for conducting browser-based, longitudinal sociotechnical audits with consenting, compensated participants. Intervenr investigates the algorithmic content users encounter online, and also coordinates systematic client-side interventions to understand how users change in response. As a case study, we deploy Intervenr in a two-week sociotechnical audit of online advertising (N = 244) to investigate the central premise that personalized ad targeting is more effective on users. In the first week, we observe and collect all browser ads delivered to users, and in the second, we deploy an ablation-style intervention that disrupts normal targeting by randomly pairing participants and swapping all their ads. We collect user-oriented metrics (self-reported ad interest and feeling of representation) and advertiser-oriented metrics (ad views, clicks, and recognition) throughout, along with a total of over 500,000 ads. Our STA finds that targeted ads indeed perform better with users, but also that users begin to acclimate to different ads in only a week, casting doubt on the primacy of personalized ad targeting given the impact of repeated exposure. In comparison with other evaluation methods that only study technical components, or only experiment on users, sociotechnical audits evaluate sociotechnical systems through the interplay of their technical and human components.
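To make the ablation-style intervention concrete, here is a minimal Python sketch of the random pairing and ad swapping the abstract describes. It is an illustration under stated assumptions, not Intervenr's published implementation: the function name pair_and_swap, the participant-to-ads mapping, and the week_one example data are all hypothetical.

```python
import random

def pair_and_swap(participants):
    """Randomly pair participant IDs and swap their ad streams.

    `participants` maps a participant ID to the list of ads collected
    for that participant during the observation week. All names here
    are hypothetical; the paper does not publish Intervenr's internals.
    """
    ids = list(participants)
    random.shuffle(ids)
    if len(ids) % 2 == 1:
        ids.pop()  # an odd participant out is left unswapped
    swapped = {}
    for a, b in zip(ids[::2], ids[1::2]):
        # Each member of a pair is served the other's ads, disrupting
        # whatever personalized targeting produced the originals.
        swapped[a] = participants[b]
        swapped[b] = participants[a]
    return swapped

# Usage: ads observed in week one feed the week-two intervention.
week_one = {"p1": ["ad_a", "ad_b"], "p2": ["ad_c"], "p3": ["ad_d"]}
print(pair_and_swap(week_one))
```

Swapping within random pairs (rather than serving random ads) keeps the ad stream realistic for each participant while severing its link to their own profile, which is what lets the study compare targeted against untargeted exposure.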
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Networks and Communications, Human-Computer Interaction, Social Sciences (miscellaneous)
Cited by
6 articles.
1. Computational strategic communication in a data-driven world;Public Relations Review;2024-08
2. Improving Group Fairness Assessments with Proxies;ACM Journal on Responsible Computing;2024-07-24
3. Youth as Peer Auditors: Engaging Teenagers with Algorithm Auditing of Machine Learning Applications;Proceedings of the 23rd Annual ACM Interaction Design and Children Conference;2024-06-17
4. Auditing for Racial Discrimination in the Delivery of Education Ads;The 2024 ACM Conference on Fairness, Accountability, and Transparency;2024-06-03
5. Human-Centered Evaluation and Auditing of Language Models;Extended Abstracts of the CHI Conference on Human Factors in Computing Systems;2024-05-11