Abstract
Unlike traditional hierarchical controllers for robotic leg prostheses and exoskeletons, continuous systems could allow persons with mobility impairments to walk more naturally in real-world environments without requiring high-level switching between locomotion modes. To support these next-generation controllers, we developed a new system called KIFNet (Kinematics and Image Fusing Network) that uses lightweight and efficient deep learning models to continuously predict leg kinematics during walking. We tested different sensor fusion methods to combine kinematics data from inertial sensors with computer vision data from smart glasses and found that adaptive instance normalization achieved the lowest RMSE predictions for knee and ankle joint kinematics. We also deployed our model on an embedded device. Without inference optimization, our model was 20 times faster than the previous state-of-the-art and achieved 20% higher prediction accuracy, and during some locomotor activities such as stair descent, it decreased RMSE by up to 300%. With inference optimization, our best model achieved 125 FPS on an NVIDIA Jetson Nano. These results demonstrate the potential to build fast and accurate deep learning models for continuous prediction of leg kinematics during walking based on sensor fusion and embedded computing, therein providing a foundation for real-time continuous controllers for robotic leg prostheses and exoskeletons.
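The abstract names adaptive instance normalization (AdaIN) as the best-performing fusion method but does not give its form. The sketch below shows one common way AdaIN can fuse two feature streams: the kinematics features are instance-normalized and then re-styled with the mean and standard deviation of the vision features. The function name, tensor shapes, and the choice of which stream modulates which are illustrative assumptions, not the authors' exact architecture.

```python
import torch

def adain_fuse(kin_feats: torch.Tensor, vis_feats: torch.Tensor,
               eps: float = 1e-5) -> torch.Tensor:
    """Fuse two feature streams with adaptive instance normalization.

    kin_feats: (batch, channels, time) features from inertial sensors.
    vis_feats: (batch, channels, time) features from smart-glasses video.

    AdaIN(x, y) = sigma(y) * (x - mu(x)) / sigma(x) + mu(y),
    with statistics computed per instance and per channel over time.
    """
    mu_k = kin_feats.mean(dim=-1, keepdim=True)
    sd_k = kin_feats.std(dim=-1, keepdim=True) + eps
    mu_v = vis_feats.mean(dim=-1, keepdim=True)
    sd_v = vis_feats.std(dim=-1, keepdim=True) + eps
    return sd_v * (kin_feats - mu_k) / sd_k + mu_v

# Example: batch of 8 windows, 64 channels, 50 time steps per stream.
k = torch.randn(8, 64, 50)
v = torch.randn(8, 64, 50)
fused = adain_fuse(k, v)  # same shape as k: (8, 64, 50)
```

Because AdaIN only re-scales and re-centers existing features, it adds no learned fusion parameters, which is consistent with the paper's emphasis on lightweight models for embedded deployment.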
Publisher
Cold Spring Harbor Laboratory
Cited by
4 articles.