Abstract
Facial retouching in supporting documents can undermine the credibility and authenticity of the information presented. This paper presents a comprehensive investigation into the classification of retouched face images using a fine-tuned pre-trained VGG16 model. We explore the impact of different train-test split strategies on model performance and evaluate the effectiveness of two distinct optimizers. The proposed fine-tuned VGG16 model with ImageNet weights achieves a training accuracy of 99.34% and a validation accuracy of 97.91% over 30 epochs on the ND-IIITD retouched faces dataset. The VGG16_Adam model gives a maximum classification accuracy of 96.34% for retouched faces and an overall accuracy of 98.08%. The experimental results show that the 50%-25% train-test split ratio outperforms the other split ratios considered in the paper. The demonstrated work shows that a transfer-learning approach reduces computational complexity and training time, with a maximum training duration of 39.34 minutes for the proposed model.
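The transfer-learning setup described above can be sketched as follows: a VGG16 backbone with frozen convolutional layers and a small classification head for the two classes (original vs. retouched), trained with the Adam optimizer. The head architecture, dropout rate, and learning rate shown here are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch (Keras) of fine-tuning a pre-trained VGG16 for
# two-class retouched-face classification, as outlined in the abstract.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# The paper uses ImageNet weights; weights=None is used here only so the
# sketch runs without downloading pre-trained weights. Replace with
# weights="imagenet" to match the described setup.
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional backbone (transfer learning)

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),   # assumed head size
    layers.Dropout(0.5),                    # assumed dropout rate
    layers.Dense(2, activation="softmax"),  # original vs. retouched
])

# "VGG16_Adam" variant: Adam optimizer; the learning rate is an assumption.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
```

Freezing the backbone means only the small head is trained, which is what keeps the computational cost and training time low relative to training the full network from scratch.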
Publisher
Publishing House for Science and Technology, Vietnam Academy of Science and Technology (Publications)