Abstract
Purpose
The current research aimed to develop a proof-of-concept, open-source, 3D-printable electronic wearable headgear to record jaw movement parameters.
Materials & methods
A 3D-printed wearable device was designed and manufactured, then fitted with open-source sensors to record vertical, horizontal, and phono-articulatory jaw motions. Mean deviation and relative error were measured in vitro. The device was tested on two volunteers to record maximum anterior protrusion (MAP), maximum lateral excursion (MLE), normal (NMO) and maximum (MMO) mouth opening, and fricative phono-articulation. Raw data were normalized using z-scores, and root mean squared error (RMSE) values were used to evaluate relative differences in readings across the two participants.
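As a minimal sketch of the analysis described above (not the authors' published code), z-score normalization of each sensor trace and the RMSE comparison between traces could be implemented as follows; the sensor readings shown are hypothetical placeholders.

```python
import numpy as np

def z_score(signal):
    """Normalize a 1-D sensor trace to zero mean and unit variance."""
    signal = np.asarray(signal, dtype=float)
    return (signal - signal.mean()) / signal.std()

def rmse(a, b):
    """Root mean squared error between two equal-length normalized traces."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.sqrt(np.mean((a - b) ** 2))

# Hypothetical left/right piezoresistive readings for one mouth-opening cycle
left = z_score([512, 530, 601, 640, 598, 540])
right = z_score([508, 526, 590, 655, 602, 536])
print(f"Bilateral RMSE: {rmse(left, right):.2f}")
```

A small bilateral RMSE would indicate that the left and right traces follow nearly the same normalized trajectory, which is how symmetry of movement is interpreted in the results below.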
Results
RMSE differences between the left and right piezoresistive sensors indicated nearly symmetrical bilateral movement during normal (0.12) and maximal (0.09) mouth opening for participant 1, whereas the values varied considerably for participant 2 (0.25 and 0.14, respectively). Larger RMSE differences were observed in accelerometric motion across the different axes during MAP, MLE, and fricative phono-articulation.
Conclusion
The current implementation demonstrated that a 3D-printed electronic wearable device with open-source sensor technology can record horizontal, vertical, and phono-articulatory maxillomandibular movements in two participants. However, future work is needed to overcome the limitations documented in the current experiment.
Funder
The University of Adelaide, Paul Lee Bequest Early Grant Development
Publisher
Public Library of Science (PLoS)