Affiliation:
1. Beijing Forestry University, Beijing 100083, China
Abstract
With the advancement of generative models, face forgeries are becoming increasingly realistic, making face forgery detection an active research topic. The primary challenge in face forgery detection is poor generalization. Numerous studies have addressed this issue; however, some methods rely heavily on the overall feature space of the training samples, which interferes with extracting the key features for detection, while others design disentanglement frameworks that overlook data diversity, limiting their effectiveness in complex real-world scenarios. This paper presents a framework based on adversarial training and a disentanglement strategy. Adversarial training generates forged samples that imitate the face forgery process, targeting specific facial regions to simulate forgery effects and thereby enriching data diversity. In parallel, a feature disentanglement strategy focuses the model on forgery features, with a mutual-information loss designed to achieve the disentanglement; an adversarial loss based on mutual information further strengthens this effect. On the FaceForensics++ dataset, our method achieves an AUC of 96.75%. It also performs strongly in cross-method evaluation, reaching an accuracy of 80.32%, and exhibits excellent performance in cross-dataset experiments.
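The mutual-information objective underlying such disentanglement can be illustrated on discrete feature codes: minimizing I(content; forgery) pushes the two factors toward statistical independence, while fully entangled codes have MI equal to their entropy. The sketch below is purely illustrative; the paper's actual MI estimator, network architecture, and loss weighting are not specified here, so a simple histogram-based estimate over discrete codes stands in for them.

```python
import numpy as np

def mutual_information(x, y):
    """Estimate I(X; Y) in nats from paired discrete samples via joint/marginal histograms."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        px = np.mean(x == xv)                      # marginal P(X = xv)
        for yv in np.unique(y):
            py = np.mean(y == yv)                  # marginal P(Y = yv)
            pxy = np.mean((x == xv) & (y == yv))   # joint P(X = xv, Y = yv)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 10_000)   # hypothetical "content" code
b = rng.integers(0, 2, 10_000)   # independent "forgery" code

print(mutual_information(a, a))  # ≈ ln 2 ≈ 0.693 nats: fully entangled
print(mutual_information(a, b))  # ≈ 0 nats: disentangled
```

In a disentanglement framework, a differentiable estimate of this quantity between the content branch and the forgery branch would be added to the loss and minimized, driving the encoder to place forgery cues in one branch only.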