1. Abbeel, P., & Ng, A. Y. (2004). Apprenticeship learning via inverse reinforcement learning. In C. E. Brodley (Ed.), Machine learning, Proceedings of the twenty-first international conference (ICML 2004), ACM International Conference Proceeding Series, vol 69. ACM. https://doi.org/10.1145/1015330.1015430
2. Acharya, A., Russell, R. L., & Ahmed, N. R. (2020). Explaining conditions for reinforcement learning behaviors from real and imagined data. NeurIPS Workshop on Challenges of Real-World RL. https://doi.org/10.48550/ARXIV.2011.09004
3. Achiam, J. (2018). Spinning up in deep reinforcement learning. https://spinningup.openai.com/en/latest/index.html
4. Adebayo, J., Gilmer, J., Muelly, M., et al. (2018). Sanity checks for saliency maps. In S. Bengio, H. M. Wallach, H. Larochelle, et al. (Eds.), Advances in neural information processing systems 31: Annual conference on neural information processing systems (NeurIPS 2018), Montréal, pp. 9525–9536. https://proceedings.neurips.cc/paper/2018/hash/294a8ed24b1ad22ec2e7efea049b8737-Abstract.html
5. Adebayo, J., Muelly, M., Abelson, H., et al. (2022). Post hoc explanations may be ineffective for detecting unknown spurious correlation. In The tenth international conference on learning representations, ICLR 2022, Virtual Event. OpenReview.net. https://openreview.net/forum?id=xNOVfCCvDpM