Authors:
Liao Hsuan-Cheng, Chou Han-Jung, Liu Jing-Sin
Abstract
The time-optimal control problem (TOCP) faces new practical challenges, such as the deployment of agile autonomous vehicles in diverse, uncertain operating conditions without accurate system calibration. In this study, to meet the need to generate feasible speed profiles in the face of uncertainty, we apply probabilistic inference for learning control (PILCO), an existing sample-efficient model-based reinforcement learning (MBRL) framework for policy search, to a case study of the TOCP for a vehicle modeled as an input-constrained double integrator with uncertain inertia, subject to uncertain viscous friction. Our approach integrates learning, planning, and control into a generalizable framework that requires minimal assumptions (in particular, regarding external disturbances and the parametric dynamics model of the system) and solves the TOCP approximately, yielding perturbed solutions close to time-optimality. Within PILCO, a Gaussian radial basis function controller is implemented to generate control-constrained, rest-to-rest, near time-optimal vehicle motion on a linear track from scratch in a data-efficient, direct way. We briefly review applications of PILCO and discuss the learning results, showing that PILCO in fact converges to the analytical solution of this TOCP. Furthermore, we carry out a simulation and a sim2real experiment to validate the suitability of PILCO for the TOCP by comparison with the analytical solution.
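As a minimal illustration (not the paper's code), the plant in this case study can be written as m*x_ddot = u - c*x_dot with |u| <= u_max, and for the frictionless nominal model the classical analytical TOCP solution for a rest-to-rest transfer over distance d is bang-bang with switch time sqrt(m*d/u_max) and minimum time 2*sqrt(m*d/u_max). The sketch below, using assumed values for the inertia m, friction coefficient c, input bound u_max, and track length d, simulates the friction-perturbed plant under that nominal bang-bang control:

```python
import numpy as np

# Minimal sketch with assumed parameters: an input-constrained double
# integrator with viscous friction,
#     m * x_ddot = u - c * x_dot,   |u| <= u_max,
# driven by the bang-bang control that is time-optimal for the
# frictionless (c = 0) nominal model.

m, c = 1.0, 0.1        # assumed inertia and viscous-friction coefficient
u_max, d = 1.0, 1.0    # assumed input bound and track length
dt = 1e-3              # Euler integration step

t_switch = np.sqrt(m * d / u_max)   # frictionless time-optimal switch time
T = 2.0 * t_switch                  # frictionless minimum time

x, v = 0.0, 0.0                     # rest-to-rest: start at rest
for k in range(int(round(T / dt))):
    t = k * dt
    u = u_max if t < t_switch else -u_max   # bang-bang reference control
    a = (u - c * v) / m                     # friction perturbs the plan
    v += a * dt
    x += v * dt

print(f"final position {x:.3f} m, final velocity {v:.3f} m/s (target: {d}, 0)")
```

With these assumed parameters, the nominal plan undershoots the target and ends with nonzero velocity; this gap between the analytical frictionless solution and the uncertain plant is what the learned PILCO policy must close.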