Abstract:
As technology has advanced in recent years, techniques for creating and manipulating multimedia content have become increasingly realistic, and the line between real and fake media is blurring. On the one hand, this enables many compelling applications, from the creative arts to advertising and filmmaking. On the other hand, it raises serious security concerns. Freely available software packages on the internet let anyone produce fake images and videos that look convincingly real. Deepfakes can be used to influence how people vote, commit fraud, damage reputations, or even blackmail individuals; abuse is limited only by the imagination. There is therefore a dire need for automated tools that can detect dangerously false multimedia content and stop it from spreading. Our method automatically detects fakes produced by replacement and reenactment deepfake techniques. The proposed system uses a ResNet50 convolutional neural network to extract frame-level features, which are then used to train a Long Short-Term Memory (LSTM) based recurrent neural network (RNN) that classifies whether a video has been manipulated.