Publisher: Springer Nature Switzerland