Applied Computational Intelligence and Soft Computing, Volume 2024, 01/01/2024

Design and Implement Deepfake Video Detection Using VGG-16 and Long Short-Term Memory

Laor Boongasame, Jindaphon Boonpluk, Sunisa Soponmanee, Jirapond Muangprathub, Karanrat Thammarak

Abstract

This study aims to design and implement deepfake video detection using VGG-16 in combination with long short-term memory (LSTM). In contrast to other studies, this study compares VGG-16, VGG-19, and the newer ResNet-101, each combined with LSTM. All models were tested on the Celeb-DF video dataset. The results showed that the VGG-16 model trained for 15 epochs with a batch size of 32 exhibited the highest performance, with 96.25% accuracy, 93.04% recall, 99.20% specificity, and 99.07% precision. In conclusion, this model can be implemented practically.
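The CNN-plus-LSTM pipeline described in the abstract can be sketched in Keras: a VGG-16 convolutional base extracts a feature vector per frame, and an LSTM aggregates the frame sequence into a real/fake prediction. This is a minimal illustrative sketch, not the authors' exact configuration; the frame size, sequence length, LSTM width, and `weights=None` (to skip the pretrained-weight download) are all assumptions.

```python
# Hedged sketch of a VGG-16 + LSTM deepfake-video classifier (Keras).
# Hyperparameters here are illustrative, not the paper's configuration.
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

SEQ_LEN, H, W, C = 8, 64, 64, 3  # small frames keep the sketch light

# Frame-level CNN: VGG-16 convolutional base with global average pooling,
# producing one 512-d feature vector per frame.
cnn = VGG16(include_top=False, weights=None, input_shape=(H, W, C), pooling="avg")

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, H, W, C)),
    layers.TimeDistributed(cnn),            # apply VGG-16 to every frame
    layers.LSTM(64),                        # temporal aggregation over frames
    layers.Dense(1, activation="sigmoid"),  # real-vs-fake probability
])

# One random "video" of SEQ_LEN frames, just to exercise the forward pass.
video = np.random.rand(1, SEQ_LEN, H, W, C).astype("float32")
prob = model.predict(video, verbose=0)
print(prob.shape)  # (1, 1)
```

With real data, each video would be decoded into a fixed-length frame sequence and the model trained with binary cross-entropy; the paper's reported metrics (accuracy, recall, specificity, precision) follow from thresholding the sigmoid output.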

Document Type

Article

Source Type

Journal

ASJC Subject Area

Computer Science: Artificial Intelligence; Computer Science: Computer Networks and Communications; Computer Science: Computer Science Applications; Engineering: Civil and Structural Engineering; Engineering: Computational Mechanics


Bibliography


Boongasame, L., Boonpluk, J., Soponmanee, S., Muangprathub, J., & Thammarak, K. (2024). Design and Implement Deepfake Video Detection Using VGG-16 and Long Short-Term Memory. Applied Computational Intelligence and Soft Computing, 2024. doi:10.1155/2024/8729440
