The amount of visual content containing sensitive identity information has grown steadily in recent years, which has spurred the development of effective privacy-enhancing mechanisms. In our work we focus on face images, which can be used in automatic face recognition systems. Related work frequently relies on de-identification mechanisms, which remove only the identity information from the data while preserving other information. To de-identify face images, we employ several different privacy-enhancing mechanisms; our main goal is to verify whether these mechanisms can be combined with a face swapping approach.
We focus on three privacy-enhancing mechanisms. The first is an adversarial approach that adds perturbations to the face image; we chose the Fast Gradient Sign Method (FGSM), which we enhanced with an ensemble of models and binary face masks. The second mechanism is based on the k-same method: we generate artificial face identities using clustering and the StyleGAN generator, and these identities replace the faces in the source images. The third mechanism is based on ε-differential privacy, where we generate artificial face identities by adding noise to the StyleGAN embeddings of the images.
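The first and third mechanisms can be illustrated with a minimal sketch. The snippet below is not the actual implementation: it substitutes a toy linear classifier with a closed-form input gradient for the real face recognition model, a random vector for the face image, and a random vector for the StyleGAN latent. It shows the two core operations: a masked FGSM step that perturbs only the face region, and Laplace noise added to an embedding at scale sensitivity/ε, as in differential privacy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a face recognition model: a linear softmax
# classifier, so the gradient of the loss w.r.t. the input is closed-form.
W = rng.standard_normal((2, 16))  # 2 identities, 16-dim "image"

def loss_grad(x, y):
    """Gradient of the cross-entropy loss with respect to the input x."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    onehot = np.eye(2)[y]
    return W.T @ (p - onehot)

x = rng.standard_normal(16)        # toy "face image"
y = 0                              # its true identity label
mask = np.zeros(16)
mask[:8] = 1.0                     # binary face mask: perturb the face region only

eps = 0.1
# FGSM step, restricted to the masked region: move each masked pixel by
# eps in the direction that increases the recognition loss.
x_adv = x + eps * mask * np.sign(loss_grad(x, y))

# ε-differential-privacy-style mechanism: perturb a latent embedding
# (here a random stand-in for a StyleGAN latent) with Laplace noise of
# scale sensitivity / epsilon before generating the artificial face.
embedding = rng.standard_normal(512)
sensitivity, epsilon = 1.0, 0.5
noisy_embedding = embedding + rng.laplace(0.0, sensitivity / epsilon,
                                          size=embedding.shape)
```

Masking confines the adversarial perturbation to the face itself, which is why the background of the image is left untouched; the noisy embedding would then be fed back through the generator to obtain an artificial identity.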
The results for the individual privacy-enhancing mechanisms showed that every implementation provided some degree of de-identification. Combining an individual mechanism with a face swapping approach lowered the de-identification capability of every mechanism. However, applying a prior mechanism such as the k-same method or ε-differential privacy together with a posterior adversarial mechanism appeared to further improve de-identification, even when combined with a face swapping approach.