The rapid competitive economy of machine learning development: a discussion on the social risks and benefits
Published: 2023-03-27
ISSN: 2730-5953
Container-title: AI and Ethics
Short-container-title: AI Ethics
Language: en
Abstract
Research in artificial intelligence (AI) began in the twentieth century, but it was not until 2012 that modern artificial neural network models advanced machine learning substantially; over the past ten years, both computer vision and natural language processing have improved markedly as a result. AI development has accelerated rapidly, leaving open questions about the potential benefits and risks of these dynamics and how the risks might be managed. This paper discusses three major risks, all lying in the domain of AI safety engineering: the problem of AI alignment, the problem of AI abuse, and the problem of information control. The discussion reviews a short history of AI development, briefly touches on the benefits and risks, and ultimately makes the case that the risks might be mitigated through strong collaboration and awareness concerning trustworthy AI. Implications for the (digital) humanities are discussed.
Funder
Kalaidos University of Applied Sciences
Publisher
Springer Science and Business Media LLC
Subject
General Earth and Planetary Sciences
Cited by: 3 articles.