Authors: Uttrani Shashank, Rao Akash K., Kanekar Bhavik, Vohra Ishita, Dutt Varun
Publisher: Springer Nature Singapore
References (25 articles):
1. Bellemare, M. G., Naddaf, Y., Veness, J., & Bowling, M. (2013). The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47. https://doi.org/10.1613/jair.3912
2. Dwibedi, D., & Vemula, A. (2020). Playing games with deep reinforcement learning.
3. Firestone, C. (2020). Performance vs. competence in human–machine comparisons. Proceedings of the National Academy of Sciences, 117(43), 26562. https://doi.org/10.1073/pnas.1905334117.
4. Gao, Y., Tebbe, J., & Zell, A. (2021). Optimal stroke learning with policy gradient approach for robotic table tennis. arXiv:2109.03100.
5. Haarnoja, T., Zhou, A., Abbeel, P., & Levine, S. (2018a). Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In Proceedings of the 35th International Conference on Machine Learning, Proceedings of Machine Learning Research. https://proceedings.mlr.press/v80/haarnoja18b.html