Abstract
Recent advances in face-based modification using DeepFake algorithms have made it possible to replace one person's face with another's. Beyond simple copy-move modifications, artificial intelligence and deep learning can now transfer facial movements from one person to another. Still images can be converted into video sequences, so that contemporaries, historical figures, or even animated characters can be brought to life. Deepfakes are becoming increasingly convincing and, in some cases, difficult to detect. In this paper we describe the video sequences we produced (using the X2Face method and the First Order Motion Model for Image Animation) and perform deepfake video analysis using a SIFT (Scale-Invariant Feature Transform) based approach. The experiments demonstrate how easily video forgeries can be produced, as well as the potential role of SIFT keypoint detection in differentiating deeply forged video content from the original.
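To make the SIFT-based analysis concrete, the following is a minimal sketch of per-frame SIFT keypoint extraction using OpenCV. The paper's exact detection pipeline is not reproduced here; the file names and the keypoint-count comparison are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: count SIFT keypoints in each frame of a video (OpenCV).
# Assumption: comparing keypoint statistics between an original clip and a
# deepfaked clip of the same scene; file names below are hypothetical.
import cv2

def sift_keypoint_counts(video_path):
    """Return the number of SIFT keypoints detected in each frame."""
    sift = cv2.SIFT_create()              # requires opencv-python >= 4.4
    cap = cv2.VideoCapture(video_path)
    counts = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        keypoints = sift.detect(gray, None)
        counts.append(len(keypoints))
    cap.release()
    return counts

if __name__ == "__main__":
    for name in ("original.mp4", "deepfake.mp4"):   # assumed file names
        counts = sift_keypoint_counts(name)
        if counts:
            print(name, "mean keypoints/frame:", sum(counts) / len(counts))
```

Intuitively, face-swapped regions can exhibit different keypoint densities or distributions than untouched footage, which is the kind of discrepancy such a per-frame comparison could surface.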
Funder
Ministry of Education, Science and Technological Development of the Republic of Serbia
Publisher
Centre for Evaluation in Education and Science (CEON/CEES)
Subject
Computer Networks and Communications, Media Technology, Radiation, Signal Processing, Software
References (19 articles)
1. New York Post, "AI brings Mona Lisa to life," https://nypost.com/2019/05/28/ai-brings-mona-lisa-to-life-losessignature-smile-in-process/, May 28, 2019;
2. D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004;
3. O. Wiles, A. Sophia Koepke, and A. Zisserman, "X2Face: A network for controlling face generation using images, audio, and pose codes," Proceedings of the European Conference on Computer Vision (ECCV), pp. 670-686, 2018;
4. A. Siarohin, S. Lathuilière, S. Tulyakov, E. Ricci, and N. Sebe, "First Order Motion Model for Image Animation," Advances in Neural Information Processing Systems, pp. 7135-7145, 2019;
5. E. Zakharov, A. Shysheya, E. Burkov, and V. Lempitsky, "Few-shot adversarial learning of realistic neural talking head models," Proceedings of the IEEE International Conference on Computer Vision, pp. 9459-9468, 2019;
Cited by
2 articles.