[1] Adamkiewicz, M.: Q-learning for 2048: Exploring combinations of reinforcement learning and game tree search, available from <http://web.stanford.edu/class/archive/cs/cs221/cs221.1192/2018/restricted/posters/mikadam/poster.pdf> (2018).
[2] Allik, K., Rebane, R.-M., Sepp, R. and Valgma, L.: 2048 Report, available from <https://neuro.cs.ut.ee/wp-content/uploads/2018/02/alphago.pdf> (2018).
[3] Ballard, B.W.: The *-minimax search procedure for trees containing chance nodes, Artificial Intelligence, Vol.21, No.3, pp.327-350 (1983).
[4] Cirulli, G.: 2048, available from <http://gabrielecirulli.github.io/2048/> (2014).
[5] David, O.E., Netanyahu, N.S. and Wolf, L.: DeepChess: End-to-End Deep Neural Network for Automatic Learning in Chess, Proc. International Conference on Artificial Neural Networks (ICANN 2016), pp.88-96 (2016).