Author:
Roshan M. Mahindra, Rakesh S., Gnana Guru T. Sri, Rohith B., Hemalatha J.
Abstract
The Rubik's cube is a prototypical combinatorial puzzle with a large state space and a single goal state. The goal state is unlikely to be reached by a sequence of randomly generated moves, which poses unique challenges for machine learning. The proposed work aims to solve the Rubik's cube with recursion and with DeepCubeA, a deep reinforcement learning approach that learns to solve increasingly difficult states in reverse from the goal state without any domain-specific knowledge. DeepCubeA solves 100% of all test configurations, finding a shortest path to the goal state 60.3% of the time. DeepCubeA generalizes to other combinatorial puzzles and is able to solve the 15 puzzle, 24 puzzle, 35 puzzle, 48 puzzle, Lights Out, and Sokoban, finding a shortest path in the majority of verifiable cases. These models were trained with 1-4 GPUs and 20-30 CPUs; this varies throughout training, as training is often stopped and restarted to make room for other processes. Furthermore, our experiments compare the Rubik's cube solving results of recursion and DeepCubeA against each other and against state-of-the-art models. Later, we intend to develop a new deep learning model with an application.
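The core idea the abstract describes, learning from states generated in reverse from the goal, can be illustrated with a minimal sketch. The following is a hypothetical illustration (not the authors' implementation), using the 15 puzzle mentioned above: training states are produced by random walks of increasing length away from the solved board, and the scramble depth serves as a crude cost-to-go label.

```python
import random

# Hypothetical sketch of reverse-from-goal training-data generation,
# in the spirit of DeepCubeA; names and structure are illustrative.

GOAL = tuple(range(16))  # solved 4x4 (15-puzzle) board, 0 = the blank


def neighbors(state):
    """States reachable by sliding one adjacent tile into the blank."""
    board = list(state)
    i = board.index(0)
    row, col = divmod(i, 4)
    result = []
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = row + dr, col + dc
        if 0 <= nr < 4 and 0 <= nc < 4:
            j = nr * 4 + nc
            nxt = board[:]
            nxt[i], nxt[j] = nxt[j], nxt[i]  # slide tile into the blank
            result.append(tuple(nxt))
    return result


def scramble(depth, rng=random):
    """Random walk of `depth` moves starting from the goal state."""
    state = GOAL
    for _ in range(depth):
        state = rng.choice(neighbors(state))
    return state


def training_batch(max_depth, per_depth):
    """(state, scramble_depth) pairs, with depth increasing so the
    learner sees progressively harder states."""
    return [(scramble(d), d)
            for d in range(1, max_depth + 1)
            for _ in range(per_depth)]
```

A curriculum built this way needs no solved examples: every sampled state is guaranteed to be at most `depth` moves from the goal, which is what lets the method work without domain-specific knowledge.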