Dual-Modality Cross-Interaction-Based Hybrid Full-Frame Video Stabilization
Published: 2024-05-18
Issue: 10
Volume: 14
Page: 4290
ISSN: 2076-3417
Container-title: Applied Sciences
Language: en
Short-container-title: Applied Sciences
Author:
Jang Jaeyoung 1, Ban Yuseok 1, Lee Kyungjae 2
Affiliation:
1. Department of Electronics Engineering, Chungbuk National University, 1 Chungdae-ro, Seowon-gu, Cheongju 28644, Republic of Korea
2. School of Artificial Intelligence, Yong In University, 134 Yongindaehak-ro, Cheoin-gu, Yongin 17092, Republic of Korea
Abstract
This study aims to generate visually useful imagery for Augmented Reality applications by improving video stability while preventing cropping, preserving resolution, and minimizing degradation in stability and distortion. The focus is on balancing execution speed against performance gains. Our method first applies motion compensation to the input frames by processing Inertial Measurement Unit (IMU) sensor data with the Versatile Quaternion-based Filter (VQF) algorithm together with optical flow. To address cropping, PCA-flow-based video stabilization is then performed, and neural rendering is applied to mitigate the distortion that arises during full-frame video generation, yielding stabilized output frames. The anticipated benefit of using an IMU sensor is the production of full-frame videos that preserve visual quality while improving stability. Our technique corrects camera shake and generates visually useful imagery at low cost. We therefore propose a novel hybrid full-frame video stabilization algorithm that produces full-frame videos after IMU-based motion compensation. Evaluated on three metrics, the Stability score, Distortion value, and Cropping ratio, our method achieved stabilization more effectively and remained robust to optical-flow inaccuracy when the IMU sensor was used. In particular, in the "Turn" category, our method showed an 18% improvement in the Stability score and a 3% improvement in the Distortion value over the average of previously proposed full-frame video stabilization methods, including PCA flow, neural rendering, and DIFRINT.
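To make the first stage of the pipeline concrete, the following is a minimal sketch of IMU-based rotational motion compensation, assuming the open-source vqf Python package for the Versatile Quaternion-based Filter and OpenCV for frame warping. The function names (estimate_orientations, compensate_rotation), the camera intrinsics K, and the smoothed target orientation are illustrative assumptions, not the authors' implementation.

import cv2
import numpy as np
from vqf import VQF  # open-source Versatile Quaternion-based Filter

def estimate_orientations(gyr, acc, gyr_ts):
    """Estimate per-sample orientation from raw IMU data with VQF.

    gyr, acc : (N, 3) arrays, gyroscope [rad/s] and accelerometer [m/s^2]
    gyr_ts   : IMU sampling interval in seconds
    Returns an (N, 4) array of unit quaternions (w, x, y, z) from the
    filter's 6D (gyroscope + accelerometer) mode.
    """
    vqf = VQF(gyr_ts)
    out = vqf.updateBatch(np.ascontiguousarray(gyr, dtype=np.float64),
                          np.ascontiguousarray(acc, dtype=np.float64))
    return out["quat6D"]

def quat_to_rotmat(q):
    """Convert a (w, x, y, z) unit quaternion to a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
        [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)],
    ])

def compensate_rotation(frame, q_measured, q_smoothed, K):
    """Warp one frame by the rotation-only homography H = K R K^-1.

    R is the relative rotation from the measured camera orientation to a
    smoothed target orientation; the exact composition also depends on the
    IMU-to-camera extrinsics, which this sketch omits for brevity.
    """
    R = quat_to_rotmat(q_smoothed) @ quat_to_rotmat(q_measured).T
    H = K @ R @ np.linalg.inv(K)
    h, w = frame.shape[:2]
    # Default (black) borders are left in place: these empty regions are
    # what the later full-frame stages are responsible for filling.
    return cv2.warpPerspective(frame, H, (w, h))

In the full pipeline, the smoothed target orientations would come from smoothing the estimated orientation trajectory, and the empty borders left by the warp are what the subsequent PCA-flow stabilization and neural-rendering stages fill in to produce a full-frame result.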
Funder
Chungbuk National University; National Research Foundation of Korea
Cited by
1 article.