Pudi Dhilleswararao, Srinivas Boppu, M. Sabarimalai Manikandan, and Linga Reddy Cenkeramaddi, “Efficient Hardware Architectures for Accelerating Deep Neural Networks: Survey,” accepted for publication in IEEE Access, 2022.
Keywords: Field programmable gate arrays, Computer architecture, Deep learning, AI accelerators, Hardware acceleration, Graphics processing units, Feature extraction.
Abstract: In the modern era of technology, a paradigm shift has been witnessed in areas involving applications of Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL). In particular, Deep Neural Networks (DNNs) have emerged as a popular field of interest in most AI applications, such as computer vision, image and video processing, and robotics. Given the maturity of digital technologies and the availability of authentic data and data-handling infrastructure, DNNs have become a credible choice for solving complex real-life problems. In certain situations, the performance and accuracy of a DNN even surpass human intelligence. However, DNNs are computationally demanding in terms of both the resources and the time required for these computations. Furthermore, general-purpose architectures such as CPUs struggle to handle such computationally intensive algorithms. Therefore, the research community has invested considerable interest and effort in specialized hardware architectures, such as the Graphics Processing Unit (GPU), Field Programmable Gate Array (FPGA), Application Specific Integrated Circuit (ASIC), and Coarse-Grained Reconfigurable Array (CGRA), for the effective implementation of computationally intensive algorithms. This paper surveys research on the development and deployment of DNNs using these specialized hardware architectures and embedded AI accelerators. The review provides a detailed description of the specialized hardware-based accelerators used in the training and/or inference of DNNs, along with a comparative study of the discussed accelerators based on factors such as power, area, and throughput. Finally, future research and development directions, such as trends in DNN implementation on specialized hardware accelerators, are discussed.
This review article is intended to guide hardware architects to accelerate and improve the effectiveness of deep learning research.
More details: DOI: 10.1109/ACCESS.2022.3229767
Naveen Paluru, Aveen Dayal, Havard B. Jenssen, Tomas Sakinis, Linga R. Cenkeramaddi, Jaya Prakash, and Phaneendra K. Yalavarthy, “Anam-Net: Anamorphic Depth Embedding based Light-Weight CNN for Segmentation of Anomalies in COVID-19 Chest CT Images,” IEEE Transactions on Neural Networks and Learning Systems (Fast Track: COVID-19 Focused Papers), 2021 (in press).
Keywords: COVID-19, Coronavirus, Deep Learning, Segmentation, and Abnormalities.
Abstract: Chest computed tomography (CT) imaging has become indispensable for the staging and management of COVID-19, and the evaluation of anomalies/abnormalities associated with COVID-19 is currently performed largely by visual scoring. Automated methods for quantifying COVID-19 abnormalities in these CT images would be invaluable to clinicians. The hallmark of COVID-19 in chest CT images is the presence of ground-glass opacities in the lung region, which are tedious to segment manually. We propose an anamorphic depth embedding based light-weight CNN, called Anam-Net, to segment anomalies in COVID-19 chest CT images. The proposed Anam-Net has 7.8 times fewer parameters than the state-of-the-art UNet (or its variants), making it lightweight and capable of providing inferences on mobile or resource-constrained (point-of-care) platforms. Results from chest CT images (test cases) across different experiments showed that the proposed method provides good Dice similarity scores for both abnormal and normal regions of the lung. We benchmarked Anam-Net against other state-of-the-art architectures, including ENet, LEDNet, UNet++, SegNet, Attention UNet, and DeepLabV3+. Anam-Net was also deployed on embedded systems such as the Raspberry Pi 4 and NVIDIA Jetson Xavier, and in an Android mobile application (CovSeg) embedding Anam-Net, to demonstrate its suitability for point-of-care platforms. The code, models, and mobile application are available at https://github.com/NaveenPaluru/Segmentation-COVID-19.
A. Dayal, N. Paluru, L. R. Cenkeramaddi, S. J., and P. K. Yalavarthy, “Design and Implementation of Deep Learning Based Contactless Authentication System Using Hand Gestures,” MDPI Electronics (Artificial Intelligence Circuits and Systems (AICAS)), vol. 10, no. 2, p. 182, Jan. 2021.
Keywords: hand gesture recognition, security, edge computing, deep learning, neural networks, contactless authentication, camera-based authentication
Abstract: Hand-gesture-based sign language digits have several contactless applications, including communication aids for impaired users such as elderly and disabled people, health-care applications, automotive user interfaces, and security and surveillance. This work presents the design and implementation of a complete end-to-end deep learning based edge computing system that can verify a user contactlessly using an ‘authentication code’. The ‘authentication code’ is an ‘n’-digit numeric code whose digits are hand gestures of sign language digits. We propose a memory-efficient deep learning model to classify the hand gestures of the sign language digits. The proposed model is based on a bottleneck module inspired by deep residual networks and achieves a classification accuracy of 99.1% on the publicly available sign language digits dataset. The model is deployed on a Raspberry Pi 4 Model B edge computing system to serve as an edge device for user verification. The edge computing system operates in two steps: it first captures input from the attached camera in real time and stores it in a buffer; in the second step, the model classifies the digit, taking the first image in the buffer as input, with an inference time of 280 ms.
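The two-step pipeline described above (buffered capture, then per-frame digit classification) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class name, methods, and the stand-in classifier are hypothetical, and the camera and CNN are stubbed out.

```python
from collections import deque

class GestureAuthenticator:
    """Hypothetical sketch of the buffered capture-then-classify pipeline."""

    def __init__(self, classify_fn, code_length=4):
        self.classify = classify_fn      # stand-in for the CNN digit classifier
        self.buffer = deque()            # FIFO frame buffer (step 1 output)
        self.code_length = code_length   # 'n' digits in the authentication code

    def capture(self, frame):
        """Step 1: push a camera frame into the buffer."""
        self.buffer.append(frame)

    def next_digit(self):
        """Step 2: classify the oldest buffered frame into a digit."""
        frame = self.buffer.popleft()
        return self.classify(frame)

    def verify(self, expected_code):
        """Classify n buffered frames and compare against the stored code."""
        digits = [self.next_digit() for _ in range(self.code_length)]
        return digits == list(expected_code)

# Usage with a dummy classifier that returns the frame's label directly.
auth = GestureAuthenticator(classify_fn=lambda frame: frame, code_length=4)
for gesture in (3, 1, 4, 1):        # user shows four hand-gesture digits
    auth.capture(gesture)
print(auth.verify((3, 1, 4, 1)))    # True when the gestures match the code
```

In the deployed system, `capture` would be fed by the camera thread and `classify_fn` would invoke the trained CNN; the FIFO buffer decouples real-time acquisition from the roughly 280 ms inference step.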