Publisher: Springer Nature Switzerland

References (9 articles)
1. Harb, J., Riedmann, S., Wegenkittl, S.: Strategies for developing a supervisory controller with deep reinforcement learning in a production context. In: 2022 IEEE Conference on Control Technology and Applications (CCTA), pp. 869–874 (2022). https://doi.org/10.1109/CCTA49430.2022.9966086
2. Hermann, M., Pentek, T., Otto, B.: Design principles for Industrie 4.0 scenarios. In: 2016 49th Hawaii International Conference on System Sciences (HICSS), pp. 3928–3937 (2016). https://doi.org/10.1109/HICSS.2016.488
3. Kober, J., Bagnell, J.A., Peters, J.: Reinforcement learning in robotics: A survey. Int. J. Robot. Res. 32(11), 1238–1274 (2013)
4. Kozlica, R., Wegenkittl, S., Hirländer, S.: Deep Q-learning versus proximal policy optimization: performance comparison in a material sorting task. Submitted to the 32nd International Symposium on Industrial Electronics (ISIE)
5. Mahnke, W., Leitner, S.H., Damm, M.: OPC Unified Architecture. Springer Science & Business Media (2009)