Since 2022, we have been working on a research project funded by the State Investigation Agency, focused on developing an explainable approach for the detection of advanced facial attacks in biometric systems. With the increasing adoption of automated identification technologies—especially in sensitive environments such as border control—the need for robust and trustworthy security mechanisms has become more relevant than ever. This project explores how artificial intelligence can not only improve detection performance, but also provide more interpretable results, helping to better understand how and why certain threats are identified.
One of the main challenges addressed in this work is the detection of morphing attacks, in which two different facial identities are blended into a single image to deceive biometric systems. As part of this research, we have explored approaches based on convolutional neural networks, including a de-morphing strategy that compares the stored reference image (for example, the one in a passport) with a real-time capture of the user. By analyzing the differences between the two images, the system can reveal inconsistencies and uncover potential impostors, even in scenarios where traditional methods struggle. The results obtained so far show promising accuracy, reinforcing the potential of these techniques for real-world deployment.
Beyond the technical contributions, this project reflects our continued interest in addressing complex, high-impact problems where security, reliability, and explainability go hand in hand. As the project moves forward, we aim to keep refining these methods and exploring new directions, always with an eye on practical applications and collaboration opportunities with other groups working on biometric security and trustworthy AI.