Abstract
Driver steering intention prediction provides an augmented solution to the design of an onboard collaboration mechanism between the human driver and the intelligent vehicle. In this study, a multi-task sequential learning framework is developed to predict future steering torques and steering postures based on upper-limb neuromuscular electromyography (EMG) signals. The joint representation learning for driving postures and steering intention provides an in-depth understanding and accurate modelling of driver steering behaviours. To cover different testing scenarios, two driving modes are studied: a both-hand mode and a single-right-hand mode. For each driving mode, three different driving postures are further evaluated. Next, a multi-task time-series transformer network (MTS-Trans) is developed to predict future steering torques and driving postures from the multivariate sequential input using the self-attention mechanism. To assess the multi-task learning performance and the information-sharing characteristics within the network, four distinct two-branch network architectures are compared. Empirical validation is conducted through a driving simulator-based experiment involving 21 participants. The proposed model achieves accurate future steering torque prediction and driving posture recognition in both the both-hand and single-right-hand driving modes. These findings hold significant promise for the advancement of driver steering assistance systems, fostering mutual comprehension and synergy between human drivers and intelligent vehicles.
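The abstract describes MTS-Trans as a self-attention network over multivariate EMG sequences with two task branches, one regressing future steering torque and one classifying driving posture. The sketch below is only an illustration of that general shared-encoder, two-head pattern in PyTorch; the layer sizes, channel count, prediction horizon, pooling step, and loss weighting are assumptions and not the authors' implementation.

```python
# Hedged sketch (not the published MTS-Trans code): a shared Transformer encoder
# over multivariate EMG windows feeding two task heads, as a minimal example of
# the multi-task setup described in the abstract. All dimensions are illustrative.
import torch
import torch.nn as nn

class MTSTransSketch(nn.Module):
    def __init__(self, n_channels=8, d_model=64, n_heads=4, n_layers=2,
                 horizon=10, n_postures=3):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)            # per-step EMG embedding
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                               dim_feedforward=128,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)  # shared representation
        self.torque_head = nn.Linear(d_model, horizon)          # future-torque regression branch
        self.posture_head = nn.Linear(d_model, n_postures)      # posture classification branch

    def forward(self, x):                   # x: (batch, time, n_channels)
        h = self.encoder(self.embed(x))     # self-attention over the EMG sequence
        pooled = h.mean(dim=1)              # simple temporal pooling (an assumption)
        return self.torque_head(pooled), self.posture_head(pooled)

# Joint training objective: weighted sum of regression and classification losses.
model = MTSTransSketch()
emg = torch.randn(4, 100, 8)                        # 4 windows, 100 steps, 8 EMG channels
torque_true = torch.randn(4, 10)
posture_true = torch.randint(0, 3, (4,))
torque_pred, posture_logits = model(emg)
loss = nn.MSELoss()(torque_pred, torque_true) + \
       0.5 * nn.CrossEntropyLoss()(posture_logits, posture_true)
```

The degree of parameter sharing between the two branches (how early they split from the shared encoder) is the design dimension the paper explores with its four two-branch architectures; the sketch shows only the simplest fully shared-encoder variant.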
Publisher
Springer Science and Business Media LLC
Cited by
1 article.