Transparency-Check: An Instrument for the Study and Design of Transparency in AI-based Personalization Systems

Authors:

Laura Schelenz (1), Avi Segal (2), Oduma Adelio (3), Kobi Gal (4)

Affiliation:

1. International Center for Ethics in the Sciences and Humanities, University of Tuebingen, Germany

2. Ben-Gurion University of the Negev, Be’er Sheva, Israel

3. University of Oxford, UK

4. Ben-Gurion University of the Negev, Be’er Sheva, Israel, and The University of Edinburgh, UK

Abstract

As AI-based systems become commonplace in our daily lives, they need to provide understandable information to their users about how they collect, process, and output information that concerns them. Such transparency practices have gained importance due to recent ethical guidelines and regulation, as well as research suggesting a positive relationship between the transparency of AI-based systems and users’ satisfaction. This paper provides a new tool for the design and study of transparency in AI-based systems that use personalization. The tool, called Transparency-Check, is based on a checklist of questions about transparency in four areas of a system: input (data collection), processing (algorithmic models), output (personalized recommendations), and user control (user feedback mechanisms to adjust elements of the system). Transparency-Check can be used by researchers, designers, and end users of computer systems. To demonstrate the usefulness of Transparency-Check from a researcher perspective, we collected the responses of 108 student participants who used the transparency checklist to rate five popular real-world systems (Amazon, Facebook, Netflix, Spotify, and YouTube). Based on users’ subjective evaluations, the systems showed low compliance with transparency standards, with some variation across individual categories (specifically data collection, processing, and user control). We use these results to compile design recommendations for improving transparency in AI-based systems, such as integrating information about the system’s behavior during the user’s interactions with it.
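
To make the instrument's structure concrete, the sketch below shows one plausible way to represent the four checklist categories named in the abstract and to aggregate a participant's yes/no ratings into per-category compliance scores. This is a minimal illustration under stated assumptions, not the authors' actual instrument: only the category names come from the abstract, while the example questions, the binary scoring, and the `category_scores` helper are hypothetical.

```python
from statistics import mean

# The four Transparency-Check categories from the abstract. The question
# texts below are illustrative placeholders, not the paper's checklist items.
CHECKLIST = {
    "input (data collection)": [
        "Does the system disclose which personal data it collects?",
    ],
    "processing (algorithmic models)": [
        "Does the system explain how collected data is used in its models?",
    ],
    "output (personalized recommendations)": [
        "Does the system indicate why a given item was recommended?",
    ],
    "user control (feedback mechanisms)": [
        "Can the user adjust or reset the personalization?",
    ],
}

def category_scores(responses):
    """Aggregate per-category compliance as the share of 'yes' answers.

    `responses` maps each question to True (transparent) or False.
    Assumed binary scoring; the paper may use a different scheme.
    """
    scores = {}
    for category, questions in CHECKLIST.items():
        answers = [responses[q] for q in questions if q in responses]
        scores[category] = mean(answers) if answers else None
    return scores

# Example: one participant's hypothetical ratings of a system.
ratings = {q: False for qs in CHECKLIST.values() for q in qs}
ratings["Can the user adjust or reset the personalization?"] = True
print(category_scores(ratings))
```

Averaging such per-category scores across participants would yield the kind of per-system, per-category compliance comparison the abstract reports for the five evaluated platforms.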

Publisher

Association for Computing Machinery (ACM)

