Authors:
Merugu Bhuvana Naga Priya, Godavarthi Srujana, Angara Navya Sri Alekhya, Yamuna Mundru, Manas Kumar Yogi
Abstract
The importance of accounting for fairness in the design and engineering of Artificial Intelligence (AI) systems has increased significantly because of the rapid rise and widespread use of such systems and their applications in our daily lives. Because these systems may be employed in a variety of sensitive contexts to make significant and life-changing decisions, it is crucial to guarantee that those decisions do not discriminate against particular groups or populations. Recent advances in traditional machine learning and deep learning have addressed these issues in a variety of subfields. With the industrialization of these systems, researchers have become more aware of the biases they may embody and are striving to overcome them. This study examines several practical systems that have exhibited bias in a wide variety of ways and compiles a list of possible sources of those biases. A hierarchy of fairness characteristics is then constructed to help eliminate the bias already present in AI technologies. Additionally, numerous AI fields and subdomains are studied to highlight what researchers have observed regarding unfair outcomes in state-of-the-art techniques and the ways they have attempted to remedy them. Multiple potential future directions and outcomes for lessening the issue of bias in AI systems are also presented. By examining the current research in their respective domains, it is hoped that this survey will inspire scholars to address these problems promptly.
Publisher
Inventive Research Organization
Subject
General Earth and Planetary Sciences, General Environmental Science