Exposing Deep Fake Face Detection using LSTM and CNN
Published: 2024-05-23
Pages: 231-234
ISSN: 2581-9429
Container-title: International Journal of Advanced Research in Science, Communication and Technology
Language: en
Short-container-title: IJARSCT
Authors:
Alisha Muskaan 1, Nagarathna S 1, Sandhya C S 1, Viju J 1, B Sumangala 1
Affiliation:
1. Sir M Visvesvaraya Institute of Technology, Bengaluru, India
Abstract
With the rapid advancement of deep learning techniques, creating realistic multimedia content has become increasingly accessible, leading to the proliferation of DeepFake technology. DeepFake uses generative deep learning algorithms to produce or modify facial features in a highly realistic manner, often making it challenging to differentiate between real and manipulated media. This technology, while beneficial in fields such as entertainment and education, also poses significant threats, including misinformation and identity theft. Consequently, detecting DeepFakes has become a critical area of research. In this paper, we propose a novel approach to DeepFake face detection by integrating Convolutional Neural Networks (CNN) with Long Short-Term Memory (LSTM) networks. Our method leverages the strengths of CNNs in spatial feature extraction and LSTMs in temporal sequence modeling to enhance detection accuracy. The CNN component captures intricate facial features, while the LSTM analyzes the temporal dynamics of video frames. We evaluate our model on several benchmark datasets, including Celeb-DF (v2), DeepFake Detection Challenge Preview, and FaceForensics++. Experimental results demonstrate that our hybrid CNN-LSTM model achieves state-of-the-art performance, surpassing existing methods in both accuracy and robustness. This study highlights the potential of combining CNN and LSTM architectures for effective DeepFake detection, contributing to the ongoing efforts to safeguard against digital media manipulation.
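The abstract describes the CNN-LSTM pipeline only at a high level. The following PyTorch sketch is a hypothetical illustration of such a detector, not the authors' implementation: it assumes a ResNet-18 backbone for per-frame spatial features, a single-layer LSTM for temporal modeling of a short face-crop clip, and a binary classification head; the backbone choice, hidden size, and clip length are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class CNNLSTMDetector(nn.Module):
    """Sketch of a hybrid CNN-LSTM DeepFake detector: a CNN extracts
    per-frame spatial features, an LSTM models their temporal dynamics,
    and a linear head outputs a real/fake logit per clip."""
    def __init__(self, hidden_size=256, num_layers=1):
        super().__init__()
        # Assumed backbone: pretrained ResNet-18 as the spatial feature extractor.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        feat_dim = backbone.fc.in_features      # 512 for ResNet-18
        backbone.fc = nn.Identity()             # keep features, drop the classifier
        self.cnn = backbone
        # LSTM over the sequence of frame-level feature vectors.
        self.lstm = nn.LSTM(feat_dim, hidden_size, num_layers, batch_first=True)
        # Binary head: real vs. DeepFake.
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, frames):
        # frames: (batch, seq_len, 3, H, W) -- a short clip of aligned face crops
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.view(b * t, c, h, w))   # per-frame spatial features
        feats = feats.view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)                  # temporal modeling
        return self.head(h_n[-1])                       # one logit per clip

if __name__ == "__main__":
    model = CNNLSTMDetector()
    clip = torch.randn(2, 16, 3, 224, 224)  # 2 clips of 16 face crops each
    print(model(clip).shape)                 # torch.Size([2, 1])
```

In this kind of design, the CNN is applied independently to every frame and the LSTM's final hidden state summarizes the clip, so frame-level artifacts and their temporal inconsistencies both contribute to the decision.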
Publisher
Naksh Solutions