Affiliation:
1. Adamas University, India
Abstract
This chapter addresses a fundamental question: should we create artificial intelligence (AI) that deserves moral consideration, as humans do? To answer it, the authors examine differing conceptions of what AI is and what it ought to be, and draw on two ethical frameworks to assess whether AI should be granted moral status. The first argument holds that if AI satisfies the definition of intelligence, it warrants moral consideration regardless of which ethical theory one adopts. The second applies capability theory, in combination with the definition of AI, and concludes that AI should not be developed further if we hold that it deserves moral consideration. The chapter thus examines whether AI ought to be treated morally and suggests that, if it should, we may need to stop developing it.