Abstract
The debate on the ethical challenges of artificial intelligence (AI) is nothing new. Researchers and commentators have highlighted the deficiencies of AI technology regarding visible minorities, women, youth, seniors and Indigenous people. Several ethical guidelines and recommendations for AI currently exist, offering ethical principles and human-centred values to guide the creation of responsible AI. Because these guidelines are non-binding, however, they have had little significant effect. It is time to harness initiatives to regulate AI globally and to incorporate human rights and ethical standards into AI creation. Governments need to intervene, and discriminated-against groups should lend their voice to shaping AI regulation to suit their circumstances. This study highlights the discriminatory and technological risks suffered by minority and marginalised groups owing to AI’s ethical dilemmas. As a result, it recommends the guarded deployment of AI vigilantism to regulate the use of AI technologies and prevent harm arising from the operation of AI systems. The appointed AI vigilantes will comprise mainly persons or groups whose rights are at increased risk of being disproportionately impacted by AI. It is a well-intentioned group that will work with the government to avoid abuse of powers.
Publisher
Oxford University Press (OUP)
Subject
Law, Library and Information Sciences
Cited by
5 articles.