1. Zhao, W.X., et al.: A survey of large language models. arXiv:2303.18223 (2023)
2. Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805 (2019)
3. Radford, A., Narasimhan, K., Salimans, T., Sutskever, I.: Improving language understanding by generative pre-training. OpenAI Technical Report (2018)
4. Xie, T., et al.: Darwin series: domain-specific large language models for natural science. arXiv:2308.13565 (2023)
5. Wu, C., et al.: PMC-LLaMA: towards building open-source language models for medicine. arXiv:2304.14454 (2023)