Safeguarding human values: rethinking US law for generative AI’s societal impacts
Published: 2024-05-07
ISSN: 2730-5953
Container-title: AI and Ethics
Language: en
Short-container-title: AI Ethics
Authors: Inyoung Cheong, Aylin Caliskan, Tadayoshi Kohno
Abstract
Our interdisciplinary study examines the effectiveness of US law in addressing the complex challenges posed by generative AI systems to fundamental human values, including physical and mental well-being, privacy, autonomy, diversity, and equity. Through the analysis of diverse hypothetical scenarios developed in collaboration with experts, we identified significant shortcomings and ambiguities within the existing legal protections. Constitutional and civil rights law currently struggles to hold AI companies responsible for AI-assisted discriminatory outputs. Moreover, even without considering the liability shield provided by Section 230, existing liability laws may not effectively remedy unintentional and intangible harms caused by AI systems. Demonstrating causal links for liability claims such as defamation or product liability proves exceptionally difficult due to the intricate and opaque nature of these systems. To effectively address these unique and evolving risks posed by generative AI, we propose a “Responsible AI Legal Framework” that adapts to recognize new threats and utilizes a multi-pronged approach. This framework would enshrine fundamental values in legal frameworks, establish comprehensive safety guidelines, and implement liability models tailored to the complexities of human-AI interactions. By proactively mitigating unforeseen harms like mental health impacts and privacy breaches, this framework aims to create a legal landscape capable of navigating the exciting yet precarious future brought forth by generative AI technologies.
Funders: National Institute of Standards and Technology; William and Flora Hewlett Foundation; Silicon Valley Community Foundation
Publisher: Springer Science and Business Media LLC
References: 214 articles.
Cited by: 2 articles.