Publisher
Springer Science and Business Media LLC
Reference61 articles.
1. Agarwal, C., Tanneru, S. H., & Lakkaraju, H. (2024). Faithfulness vs. Plausibility: On the (Un) reliability of explanations from large language models. arXiv Preprint arXiv, 2402, 04614.
2. Andreas, J. (2022). Language models as agent models. arXiv preprint:arXiv, 2212, 01681.
3. Bai, Y., Kadavath, S., Kundu, S., Askell, A., Kernion, J., Jones, A., et al. (2022). Constitutional AI: Harmlessness from AI feedback. Arxiv. https://doi.org/10.48550/arxiv.2212.08073.
4. Belanger, A. (7/7/2023). ChatGPT usage drop for the first time as users turn to uncensored chatbots. Ars Technica.
5. Bender, E. M., Gebru, T., Mcmillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, https://doi.org/10.1145/3442188.3445922.