A. Dayal, N. Paluru, L. R. Cenkeramaddi, S. J., and P. K. Yalavarthy, “Design and Implementation of Deep Learning Based Contactless Authentication System Using Hand Gestures,” Electronics (MDPI), Special Issue on Artificial Intelligence Circuits and Systems (AICAS), vol. 10, no. 2, p. 182, Jan. 2021.
Keywords: hand gesture recognition, security, edge computing, deep learning, neural networks, contactless authentication, camera-based authentication
Abstract: Hand-gesture-based sign language digits have several contactless applications. These include communication aids for impaired users, such as elderly and disabled people, as well as health-care applications, automotive user interfaces, and security and surveillance. This work presents the design and implementation of a complete end-to-end deep-learning-based edge computing system that can verify a user contactlessly using an ‘authentication code’. The ‘authentication code’ is an ‘n’-digit numeric code whose digits are entered as sign language hand gestures. We propose a memory-efficient deep learning model to classify the hand gestures of the sign language digits. The proposed model is based on the bottleneck module, which is inspired by deep residual networks, and achieves a classification accuracy of 99.1% on the publicly available sign language digits dataset. The model is deployed on a Raspberry Pi 4 Model B to serve as an edge device for user verification. The edge computing system operates in two steps: first, it captures input in real time from the attached camera and stores it in a buffer; second, the model classifies the digit, taking the first image in the buffer as input, with an inference time of 280 ms.
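The two-step buffered pipeline described in the abstract (a camera thread fills a buffer; the classifier consumes the oldest frame) can be sketched roughly as below. This is a hypothetical illustration, not the authors' code: `capture_frame` and `classify_digit` are stand-in stubs for the Pi camera read and the bottleneck-CNN classifier.

```python
import queue
import threading

def capture_frame(i):
    """Stand-in for reading one frame from the attached camera."""
    return f"frame-{i}"

def classify_digit(frame):
    """Stand-in for the bottleneck-CNN sign-digit classifier."""
    return int(frame.split("-")[1]) % 10  # dummy digit label

def run_pipeline(n_frames, buffer_size=8):
    buf = queue.Queue(maxsize=buffer_size)  # step 1: frame buffer
    digits = []

    def producer():
        # Camera thread: fill the buffer in real time.
        for i in range(n_frames):
            buf.put(capture_frame(i))
        buf.put(None)  # sentinel: capture finished

    t = threading.Thread(target=producer)
    t.start()
    while True:
        frame = buf.get()  # step 2: take the first image in the buffer
        if frame is None:
            break
        digits.append(classify_digit(frame))
    t.join()
    return digits
```

Decoupling capture from inference this way lets the camera keep acquiring frames while the (slower, ~280 ms per image) classifier drains the buffer in order.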