The paper traces the development of the discussion of ethical issues in artificial intelligence and considers the ways in which humans have affected the knowledge bases used in machine learning. The phenomenon of bias or discrimination in machine ethics is seen as inherited from humans, either through the use of biased data or through the semantics inherent in intellectually built tools used as sources by intelligent agents. The kinds of bias observed in AI are compared with those identified in the field of knowledge organization, using religious adherents as an example of a community potentially marginalized by bias. A practical demonstration is given of apparent religious prejudice inherited from source material in a large database widely deployed in computational linguistics and automatic indexing. Methods of addressing the problem of bias are discussed, including modelling the moral process on the neuroscientific understanding of brain function. The question is posed whether religious belief could be modelled in a similar way, so that the robots of the future might possess both an ethical and a religious sense and themselves address the problem of prejudice.