Abstract
Simion and Kelp offer a prima facie very promising account of trustworthy AI. One benefit of the account is that it elegantly explains trustworthiness in the case of cancer diagnostic AIs, which involve the acquisition by the AI of a representational etiological function. In this brief note, I offer some reasons to think that their account cannot be extended — at least not straightforwardly — beyond such cases (i.e., to cases of AIs with non-representational etiological functions) without incurring the unwanted cost of overpredicting untrustworthiness.
Funder
Arts and Humanities Research Council
Publisher
Springer Science and Business Media LLC
Cited by
1 article.