Personal Authentication Method Using Geometrical Features of External Auditory Canal

Open Access
Article
Conference Proceedings
Authors: Yuki Muramoto, Yoshihisa Nakatoh, Hideaki Kawano

Abstract: Gloves and masks make fingerprint and face recognition difficult in settings such as hospitals and factories. In this research, as a precondition for developing an authentication system based on geometrical features inside the ear canal captured by a camera, we verify whether individuals can be identified from ear canal images using machine learning. The method acquires image data of the ear canal with a camera and identifies individuals by comparing the images against a model trained in advance. As a preliminary step before machine learning, we conducted a questionnaire to investigate whether humans can identify the inside of the ear canal by sight. The questionnaire results showed a correct response rate of approximately 64%. We also found that the correct response rate dropped for tasks such as selecting the left-ear image that matches a given right-ear image, even when both images came from the same person. In this study, the VGG16 model was re-trained by transfer learning because only a small amount of training data was available. The number of classes was 26, since the data covered 13 people with the left and right ears treated as separate classes, and approximately 400 images were used per class. The experimental results showed that discrimination was possible with high accuracy. Accuracy, Recall, Precision, and F-measure were used as evaluation indices, and both Accuracy and F-measure reached 0.989. These results also indicate that the left and right ears can be distinguished even when the ear presented at registration differs from the one presented at evaluation. In the future, we plan to study the imaging method for practical implementation and to conduct experiments with third-party data.
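The abstract does not report the framework, image size, or training configuration used for the transfer learning step. As a rough illustration only, the following Python/Keras sketch shows how a pretrained VGG16 backbone might be re-trained for the 26 ear-canal classes (13 people, left and right ears) and scored with the evaluation indices named above. The input size, frozen layers, directory layout, and macro averaging are assumptions, not details from the paper.

# Illustrative sketch only: re-training VGG16 for 26 ear-canal classes.
# Framework, image size, frozen layers, and data layout are assumptions,
# not details reported in the paper.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

NUM_CLASSES = 26          # 13 subjects x left/right ear (from the abstract)
IMG_SIZE = (224, 224)     # assumed VGG16 default input size

# Load ImageNet-pretrained VGG16 without its classifier head and freeze it,
# so only the new classification head is trained on the small dataset.
base = VGG16(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical directory layout: one sub-folder per class (~400 images each).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "ear_canal_images/train", image_size=IMG_SIZE, label_mode="categorical")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "ear_canal_images/val", image_size=IMG_SIZE,
    label_mode="categorical", shuffle=False)

model.fit(train_ds, validation_data=val_ds, epochs=10)

# Score with the evaluation indices named in the abstract.
y_true = np.concatenate([np.argmax(y, axis=1) for _, y in val_ds])
y_pred = np.argmax(model.predict(val_ds), axis=1)
print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred, average="macro"))
print("Recall   :", recall_score(y_true, y_pred, average="macro"))
print("F-measure:", f1_score(y_true, y_pred, average="macro"))

The head architecture and number of epochs are placeholders; in practice they would be tuned to the roughly 400 images available per class.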

Keywords: Image Processing, Personal Authentication, Biometric Identification

DOI: 10.54941/ahfe1002785
