We present methods that reduce the influence of non-relevant facial attributes on the performance of biometric face recognition systems, thereby improving verification reliability under changes in illumination and head pose. In our approach, each image is first embedded in a high-dimensional identity space; by analysing class centroids we then identify directions that capture variation in identity-irrelevant properties. These directions are used to quantify the strength of unwanted facial attributes, and several methods are then designed to eliminate the information associated with these attributes from the facial embeddings.
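The sketch below illustrates one way such centroid-based directions could be estimated and applied, assuming embeddings have already been extracted and labelled by attribute class; the function names (`attribute_direction`, `attribute_strength`, `remove_attribute`) and the simple orthogonal projection are illustrative rather than the exact methods proposed here.

```python
import numpy as np

def attribute_direction(embeddings, attribute_labels, class_a, class_b):
    """Estimate an attribute-change direction as the unit-normalised difference
    between the centroids of two attribute classes (e.g. 'frontal' vs. 'profile' pose)."""
    centroid_a = embeddings[attribute_labels == class_a].mean(axis=0)
    centroid_b = embeddings[attribute_labels == class_b].mean(axis=0)
    direction = centroid_b - centroid_a
    return direction / np.linalg.norm(direction)

def attribute_strength(embedding, direction):
    """Quantify how strongly the unwanted attribute is present in an embedding
    via its scalar projection onto the attribute-change direction."""
    return float(np.dot(embedding, direction))

def remove_attribute(embedding, direction):
    """Suppress the attribute by projecting the embedding onto the hyperplane
    orthogonal to the change direction and re-normalising."""
    cleaned = embedding - attribute_strength(embedding, direction) * direction
    return cleaned / np.linalg.norm(cleaned)
```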
The proposed approaches are evaluated on four state-of-the-art verification models with diverse architectures (ArcFace, CosFace, AdaFace and SwinFace) using two datasets, MultiPIE and CPLFW. Performance is measured at key operating points on the ROC curve.
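As a generic illustration of such an operating-point measurement (not the exact evaluation protocol used here), the following sketch reports the true match rate at a fixed false match rate from genuine and impostor similarity scores.

```python
import numpy as np
from sklearn.metrics import roc_curve

def tmr_at_fmr(genuine_scores, impostor_scores, target_fmr=1e-3):
    """True match rate at a fixed false match rate, i.e. a single operating
    point on the verification ROC curve (higher score = more similar)."""
    labels = np.concatenate([np.ones(len(genuine_scores)), np.zeros(len(impostor_scores))])
    scores = np.concatenate([genuine_scores, impostor_scores])
    fmr, tmr, _ = roc_curve(labels, scores)
    return float(np.interp(target_fmr, fmr, tmr))
```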
Results confirm that the methods improve verification accuracy, although no single method is optimal across all architectures. The two most promising strategies are equalising an attribute by shifting the test embedding toward the level at which that attribute is expressed in a prototype embedding, and averaging over all attribute classes. Future work could model attribute-change directions with dedicated functions, select architecture-specific directions, adopt a more suitable dataset, and incorporate multiple facial attributes simultaneously when defining the change directions.
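One plausible reading of these two strategies is sketched below, assuming a simple linear attribute model along a single change direction; the helper names and formulations are hypothetical, and the precise definitions are given in the main text.

```python
import numpy as np

def equalise_to_prototype(embedding, direction, prototype):
    """Strategy 1: shift the test embedding along the attribute-change direction
    until its attribute level matches that of a prototype embedding."""
    delta = float(np.dot(prototype - embedding, direction))
    shifted = embedding + delta * direction
    return shifted / np.linalg.norm(shifted)

def average_over_attribute_classes(embedding, direction, class_levels):
    """Strategy 2: translate the embedding to the attribute level of every
    attribute class along the change direction and average the results,
    so that no single attribute state dominates the representation."""
    current = float(np.dot(embedding, direction))
    variants = [embedding + (level - current) * direction for level in class_levels]
    averaged = np.mean(variants, axis=0)
    return averaged / np.linalg.norm(averaged)
```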