
Cross-Sensor and Cross-Spectral Periocular Biometrics: A Comparative Benchmark including Smartphone Authentication

Abstract

The massive availability of cameras results in a wide variability of imaging conditions, producing large intra-class variations and a significant performance drop when images from heterogeneous environments are compared for person recognition. Nevertheless, as biometric technologies are deployed, it will be common to replace damaged or obsolete hardware, or to exchange information between applications operating in heterogeneous environments. Variations in spectral bands can also occur. For example, faces are typically acquired in the visible (VIS) spectrum, while the iris is captured in the near-infrared (NIR) range. Cross-spectral comparison may be required, for instance, when a face from a surveillance camera must be matched against a legacy NIR iris database. Here, we propose a multialgorithmic approach to cope with periocular images from different sensors. We integrate different comparators with a fusion scheme based on linear logistic regression, by which fused scores tend to be log-likelihood ratios. This allows easy interpretation of output scores and the use of Bayes thresholds for optimal decision-making, since scores from different comparators are mapped to the same probabilistic range. We evaluate our approach in the context of the Cross-Eyed Competition, whose aim was to compare recognition approaches when NIR and VIS periocular images are matched. Our approach achieves reductions in error rates of up to 30-40% in cross-spectral NIR-VIS comparisons, leading to an EER of 0.2% and an FRR of just 0.47% at FAR = 0.01%, making it the best overall approach of the competition. Experiments are also reported with a database of VIS images from different smartphones. We further analyse template size and computation time, observing that the most computationally demanding comparator contributes substantially to the reported performance. Lastly, the proposed method is shown to outperform other popular fusion approaches, such as the average of scores, Support Vector Machines (SVMs), and Random Forests.
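
To illustrate the kind of fusion scheme described in the abstract, the following is a minimal sketch (not the authors' implementation) of linear logistic regression score fusion followed by a Bayes decision threshold. The comparator scores, the number of comparators, and the priors/costs are synthetic assumptions used only for demonstration.

```python
# Sketch: linear logistic regression fusion of scores from several comparators,
# calibrated so the fused score approximates a log-likelihood ratio (LLR).
# All data below are hypothetical; this is not the paper's code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training scores: rows = comparisons, columns = 3 comparators.
genuine = rng.normal(loc=[2.0, 1.5, 2.5], scale=1.0, size=(500, 3))
impostor = rng.normal(loc=[0.0, 0.0, 0.0], scale=1.0, size=(500, 3))
X = np.vstack([genuine, impostor])
y = np.concatenate([np.ones(500), np.zeros(500)])

# Linear logistic regression: fused logit = w0 + w1*s1 + w2*s2 + w3*s3.
clf = LogisticRegression().fit(X, y)
prior_log_odds = np.log(y.mean() / (1.0 - y.mean()))

def fused_llr(scores):
    """Fused, calibrated score: the logistic-regression logit minus the
    training prior log-odds approximates a log-likelihood ratio."""
    logit = clf.intercept_[0] + scores @ clf.coef_[0]
    return logit - prior_log_odds

# Bayes threshold for a target prior P_tar and error costs C_miss, C_fa:
# accept if LLR >= log((C_fa * (1 - P_tar)) / (C_miss * P_tar)).
P_tar, C_miss, C_fa = 0.5, 1.0, 1.0
bayes_threshold = np.log((C_fa * (1.0 - P_tar)) / (C_miss * P_tar))

test_scores = np.array([1.8, 1.2, 2.1])  # scores from the three comparators
llr = fused_llr(test_scores)
print(f"fused LLR = {llr:.2f}, accept = {llr >= bayes_threshold}")
```

Because the fused output is (approximately) an LLR, the decision threshold follows directly from the application's prior and costs rather than being tuned empirically, which is the practical benefit the abstract attributes to this kind of calibration.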
