
arXiv:1903.04143
The Unconstrained Ear Recognition Challenge 2019 - ArXiv Version With Appendix

11 March 2019
Žiga Emeršič
S. V. A. Kumar
B. Harish
Weronika Gutfeter
J. Khiarak
Andrzej Pacut
E. Hansley
Maurício Pamplona Segundo
Sudeep Sarkar
Hyeon-Nam Park
G. Nam
Ig-Jae Kim
S. G. Sangodkar
Umit Kacar
M. Kirci
Li Yuan
Jishou Yuan
Haonan Zhao
Fei Lu
Junying Mao
Xiaoshuang Zhang
Dogucan Yaman
Fevziye Irem Eyiokur
Kadir Bulut Özler
H. K. Ekenel
D. P. Chowdhury
Sambit Bakshi
Pankaj K. Sa
B. Majhi
Peter Peer
Abstract

This paper presents a summary of the 2019 Unconstrained Ear Recognition Challenge (UERC), the second in a series of group benchmarking efforts centered around the problem of person recognition from ear images captured in uncontrolled settings. The goal of the challenge is to assess the performance of existing ear recognition techniques on a challenging large-scale ear dataset and to analyze the performance of the technology from various viewpoints, such as generalization ability to unseen data characteristics, sensitivity to rotations, occlusions and image resolution, and performance bias on sub-groups of subjects selected based on demographic criteria, i.e., gender and ethnicity. Research groups from 12 institutions entered the competition and submitted a total of 13 recognition approaches, ranging from descriptor-based methods to deep-learning models. The majority of submissions focused on deep-learning approaches and hybrid techniques combining hand-crafted and learned image descriptors. Our analysis shows that hybrid and deep-learning-based approaches significantly outperform traditional hand-crafted approaches. We argue that this is a good indicator of where ear recognition will be heading in the future. Furthermore, the results in general improve upon those of UERC 2017 and demonstrate the steady advancement of ear recognition technology.
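For concreteness, identification-style evaluations of the kind described above are commonly summarized with a Cumulative Match Characteristic (CMC) curve and its rank-1 point. The sketch below is a minimal, hypothetical illustration of that metric computed from a probe-by-gallery similarity matrix; the variable names and the random toy data are assumptions made for illustration, not the official UERC 2019 protocol or evaluation toolkit.

# Minimal sketch of a CMC / rank-1 evaluation, assuming one similarity
# score per probe-gallery pair (higher = more similar). Illustrative only;
# not the official UERC 2019 toolkit.
import numpy as np

def cmc_curve(scores: np.ndarray,
              probe_labels: np.ndarray,
              gallery_labels: np.ndarray,
              max_rank: int = 10) -> np.ndarray:
    """Cumulative Match Characteristic from a probe-by-gallery similarity matrix."""
    # For each probe, sort gallery entries from most to least similar.
    order = np.argsort(-scores, axis=1)
    # Boolean matrix: does the k-th ranked gallery entry match the probe identity?
    matches = gallery_labels[order] == probe_labels[:, None]
    # Rank (0-based) at which the correct identity first appears for each probe.
    first_hit = matches.argmax(axis=1)
    # Fraction of probes whose correct identity appears within the top r+1 ranks.
    cmc = np.array([np.mean(first_hit <= r) for r in range(max_rank)])
    return cmc

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gallery_labels = np.arange(50)                    # one enrolled template per subject
    probe_labels = rng.integers(0, 50, size=200)      # several probe images per subject
    scores = rng.normal(size=(200, 50))               # toy similarity scores
    scores[np.arange(200), probe_labels] += 1.5       # bias scores so true matches rank higher
    cmc = cmc_curve(scores, probe_labels, gallery_labels)
    print(f"rank-1 recognition rate: {cmc[0]:.3f}")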
