Abstract
With the increasing adoption of artificial intelligence (AI) technologies in the news industry, media organizations have begun publishing guidelines that aim to promote the responsible, ethical, and unbiased implementation of AI-based technologies. These guidelines are expected to serve journalists and media workers by establishing best practices and a framework that helps them navigate ever-evolving AI tools. Drawing on institutional theory and digital inequality concepts, this study analyzes 37 AI guidelines for media purposes across 17 countries. Our analysis reveals key thematic areas, such as transparency, accountability, fairness, privacy, and the preservation of journalistic values. Results highlight shared principles and best practices that emerge from these guidelines, including the importance of human oversight, explainability of AI systems, disclosure of automated content, and protection of user data. However, the geographical distribution of these guidelines, dominated by Western nations, particularly in North America and Europe, may deepen ongoing concerns about power asymmetries in AI adoption and, consequently, isomorphism outside these regions. Our results may serve as a resource for news organizations, policymakers, and stakeholders seeking to navigate the complexities of AI development toward a more inclusive and equitable digital future for the media industry worldwide.
Funder
HORIZON EUROPE Framework Programme
Universiteit van Amsterdam
Macquarie University
Publisher
Springer Science and Business Media LLC