Affiliation:
1. University of Helsinki, Finland
2. Universiti Kebangsaan Malaysia, Malaysia
Abstract
Precise hand tracking from a monocular camera using calibration parameters and semantic cues remains an active research area because existing approaches suffer from limited accuracy and high computational overhead. In this context, deep learning-based frameworks, in particular convolutional neural networks (CNNs) that track human hands in the current camera frame, have attracted considerable attention. Monocular tracking has also become more practical with modern tooling such as the Unity3D engine and related augmented reality (AR) plugins. This research tracks human hands across continuous frames and uses the tracked points to render a 3D model of the hands as an overlay. In the proposed methodology, the Unity3D environment is used to localize the hand object in AR; a convolutional neural network then detects the hand palm and estimates hand keypoints within the cropped region of interest (ROI). The proposed method achieved an accuracy of 99.2% when tracking on single monocular real images. Experimental validation demonstrates the efficiency of the proposed methodology.
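For illustration only, the pipeline described in the abstract (palm detection followed by keypoint estimation on a cropped ROI, with the tracked points driving a 3D overlay in an AR layer) resembles the two-stage architecture available off the shelf in Google's MediaPipe Hands. The minimal Python sketch below is not the authors' implementation; it shows how per-frame hand keypoints could be obtained from a monocular camera and then forwarded to a Unity3D/AR layer for the overlay. The use of MediaPipe, the webcam index 0, and the confidence thresholds are assumptions made for the example.

import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

cap = cv2.VideoCapture(0)  # assumed monocular webcam at index 0
with mp_hands.Hands(static_image_mode=False,
                    max_num_hands=2,
                    min_detection_confidence=0.5,
                    min_tracking_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV delivers BGR frames.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            h, w, _ = frame.shape
            for hand in results.multi_hand_landmarks:
                # 21 normalized keypoints per detected hand, scaled to pixels.
                points = [(int(lm.x * w), int(lm.y * h)) for lm in hand.landmark]
                # In the paper's setting these points would be passed to the
                # Unity3D/AR layer to drive the 3D hand overlay; here we only
                # visualize them on the camera frame.
                for (x, y) in points:
                    cv2.circle(frame, (x, y), 3, (0, 255, 0), -1)
        cv2.imshow("hand keypoints", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # ESC to quit
            break
cap.release()
cv2.destroyAllWindows()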