As part of our Master's thesis, we set out to build a model that could detect deepfakes.
In general, deepfake-detection models generalise better when they are trained on deepfakes drawn from as many datasets as possible, produced by as many different generation methods as possible.
However, our approach to achieving better generalisation is one-class: we train only on a class of real images.
From these images, we create synthetic fake images using the Self-Blending Images method.
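The core idea of self-blending is to paste a slightly perturbed copy of a face back onto the original image, so the blending boundary mimics a forgery artefact. The following is a minimal sketch of that idea, not the thesis implementation: the brightness shift stands in for the method's richer source augmentations, and the elliptical mask stands in for the landmark-derived face region used by the original Self-Blended Images method.

```python
import numpy as np

def self_blend(image, rng):
    """Create a pseudo-fake by blending a perturbed copy of an image
    back onto itself (simplified sketch of the self-blending idea)."""
    h, w, _ = image.shape
    # Hypothetical source augmentation: a random brightness shift.
    source = np.clip(image.astype(np.float32) + rng.uniform(-20, 20), 0, 255)
    # Hypothetical blending mask: a soft elliptical face-like region
    # standing in for the landmark-based mask of the real method.
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = h / 2, w / 2
    ellipse = ((yy - cy) / (0.4 * h)) ** 2 + ((xx - cx) / (0.3 * w)) ** 2
    mask = (ellipse < 1).astype(np.float32)
    # Blend: fake pixels inside the mask, original pixels outside.
    blended = mask[..., None] * source + (1 - mask[..., None]) * image.astype(np.float32)
    return blended.astype(np.uint8), mask

rng = np.random.default_rng(0)
real = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
fake, mask = self_blend(real, rng)
```

The mask that drives the blend doubles as a free pixel-level label: every synthetic fake comes with the exact region that was manipulated.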
Because neural networks are generally not considered easily interpretable, we add segmentation to the learning process. The model labels the region of the face it believes has been forged, and it learns to identify deepfakes based on the correctness of this predicted mask.
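One way such mask supervision can work is a per-pixel loss between the predicted forgery mask and the known blending mask, with the image-level decision derived from the mask itself. This is a hedged sketch of that scheme, not the thesis's exact formulation: `pixel_bce` and `image_score` are hypothetical names, and the mean-activation score is an assumed, simple way to turn a mask into a classification.

```python
import numpy as np

def pixel_bce(pred_mask, true_mask, eps=1e-7):
    """Per-pixel binary cross-entropy between the predicted forgery
    mask and the ground-truth blending mask (assumed loss form)."""
    p = np.clip(pred_mask, eps, 1 - eps)
    return float(-np.mean(true_mask * np.log(p) + (1 - true_mask) * np.log(1 - p)))

def image_score(pred_mask):
    """Hypothetical image-level fakeness score: mean mask activation,
    so a confident forgery mask drives the classification."""
    return float(pred_mask.mean())

# Toy example: a ground-truth mask and two candidate predictions.
true_mask = np.zeros((8, 8), dtype=np.float32)
true_mask[2:6, 2:6] = 1.0
good_pred = np.where(true_mask == 1, 0.9, 0.1)  # roughly correct mask
bad_pred = np.where(true_mask == 1, 0.1, 0.9)   # inverted mask
```

Under this loss, the correct mask is rewarded over the inverted one, which is what lets mask correctness supervise the detector while keeping its evidence visible to a human.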
We evaluate the masks generated by our model on six datasets.
We evaluate the model's classification performance on the datasets used by the authors of competing models, and on average we obtain the best results.