The human iris is considered a highly secure and reliable physiological modality and is therefore widely used in biometric recognition systems. A crucial pre-processing step for reliable and accurate iris recognition is iris segmentation, the process of determining which part of the captured image belongs to the iris. In recent years, iris segmentation has shifted from traditional algorithms to deep learning approaches, which offer a number of advantages over hand-crafted methods.
In our work, we follow this trend and aim to further improve iris segmentation accuracy through multi-task learning. To this end, we develop and evaluate several single-task and multi-task learning models whose architecture is based on the classic U-Net network, which we additionally modify. We also assess how different auxiliary tasks and loss weights affect iris segmentation accuracy. Besides the auxiliary task of image inpainting, we evaluate models built with the auxiliary tasks of image denoising and colourization of greyscale images.
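To make the multi-task setup concrete, the sketch below shows one common way to realise it: a single shared U-Net-style encoder with one decoder head per task (segmentation plus an image-reconstruction auxiliary task such as inpainting, denoising, or colourization). This is an illustrative PyTorch sketch, not the authors' exact network; the class names, channel sizes, and number of levels are assumptions.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in the original U-Net.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class Decoder(nn.Module):
    # Upsampling path with skip connections from the shared encoder.
    def __init__(self, out_channels):
        super().__init__()
        self.up1 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec1 = conv_block(256, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.head = nn.Conv2d(64, out_channels, 1)

    def forward(self, bottleneck, skips):
        x = self.dec1(torch.cat([self.up1(bottleneck), skips[1]], dim=1))
        x = self.dec2(torch.cat([self.up2(x), skips[0]], dim=1))
        return self.head(x)

class MultiTaskUNet(nn.Module):
    # One shared U-Net encoder, one decoder per task (hypothetical layout).
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 64)
        self.enc2 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(128, 256)
        self.seg_decoder = Decoder(out_channels=1)   # iris mask logits
        self.aux_decoder = Decoder(out_channels=3)   # reconstructed RGB image

    def forward(self, x):
        s1 = self.enc1(x)                 # full resolution features
        s2 = self.enc2(self.pool(s1))     # half resolution features
        b = self.bottleneck(self.pool(s2))
        skips = [s1, s2]
        return self.seg_decoder(b, skips), self.aux_decoder(b, skips)
```

In such a layout, the auxiliary decoder is only used during training to shape the shared encoder's representation; at inference time only the segmentation head is needed.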
The chosen models are trained and evaluated on the MOBIUS and SBVPI datasets, where image inpainting proves to be the best-performing auxiliary task on both datasets. The multi-task learning model with image inpainting as the auxiliary task improves on the iris segmentation performance of the single-task model only on the SBVPI dataset, which we attribute to the differences between the two datasets. We also show that assigning larger loss weights to the auxiliary tasks adversely affects iris segmentation performance, because it increases the auxiliary tasks' influence on model training.
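The role of the auxiliary loss weight mentioned above can be illustrated with a minimal training-loss sketch. The specific loss functions (binary cross-entropy for segmentation, L1 for reconstruction) and the name `aux_weight` are assumptions for illustration only, not the paper's exact formulation.

```python
import torch.nn.functional as F

def multitask_loss(seg_logits, seg_target, recon, recon_target, aux_weight=0.1):
    # Total loss = segmentation loss + aux_weight * auxiliary reconstruction loss.
    # A small aux_weight keeps the auxiliary task from dominating training,
    # which is consistent with the observation that larger weights hurt
    # segmentation accuracy.
    seg_loss = F.binary_cross_entropy_with_logits(seg_logits, seg_target)
    aux_loss = F.l1_loss(recon, recon_target)
    return seg_loss + aux_weight * aux_loss
```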