1. Agrawal, R., Realff, M. J., & Lee, J. H. (2013). MILP based value backups in partially observed Markov decision processes (POMDPs) with very large or continuous action and observation spaces. Computers & Chemical Engineering, 56, 101–113.
2. Amato, C., Konidaris, G., Cruz, G., Maynor, C. A., How, J. P., & Kaelbling, L. P. (2014). Planning for decentralized control of multiple robots under uncertainty. In ICAPS-14 Workshop on Planning and Robotics.
3. Araya-López, M. (2013). Des algorithmes presque optimaux pour les problèmes de décision séquentielle à des fins de collecte d'information [Near-optimal algorithms for sequential decision-making problems for information gathering]. PhD thesis, University of Lorraine.
4. Araya-López, M., Buffet, O., Thomas, V., & Charpillet, F. (2010). A POMDP extension with belief-dependent rewards. In Advances in Neural Information Processing Systems, Vol. 23.
5. Barbosa, M., Bernardino, A., Figueira, D., Gaspar, J., Gonçalves, N., Lima, P. U., Moreno, P., Pahliani, A., Santos-Victor, J., Spaan, M. T. J., & Sequeira, J. (2009). ISRobotNet: A testbed for sensor and robot network systems. In Proceedings of International Conference on Intelligent Robots and Systems.