The article, “Hardware Acceleration of a CNN-based Automatic Modulation Classifier,” has been accepted for publication in the 2023 Southern Conference on Programmable Logic (SPL 2023).

Sravanth Chebrolu, Srinivas Boppu, and Linga Reddy Cenkeramaddi, “Hardware Acceleration of a CNN-based Automatic Modulation Classifier,” has been accepted for publication in the 2023 Southern Conference on Programmable Logic (SPL 2023).

Abstract: Automatic modulation classification (AMC) has found its place in numerous applications, ranging from cognitive radio and adaptive communication to electronic reconnaissance and spectrum interference detection. Several attempts have been made to develop high-accuracy modulation classifiers using machine-learning-based convolutional neural networks (CNNs). This paper considers one such model, which uses a fixed-boundary-range empirical wavelet transform and a deep CNN, and accelerates it on the ZCU104 FPGA board to achieve fast classification times. The proposed accelerator achieves a maximum classification accuracy of 96% for radio signals at a signal-to-noise ratio (SNR) of +8 dB. Compared to similar works, the accelerator performs reasonably well at low SNRs (≤ +6 dB). Furthermore, the model is implemented on an edge CPU device (Raspberry Pi), and our accelerator is 50× faster than the CPU implementation. Our design achieves a reasonable throughput of 1.8K classifications/sec and a classification time of 550 µs per sample.

Keywords: Modulation Classification, Hardware Acceleration, Deep Learning, Convolutional Neural Networks, Vitis AI
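
For readers curious how such an accelerated classifier is typically invoked on the ZCU104, the sketch below shows a minimal inference flow using the Vitis AI runtime (VART) Python API on a compiled DPU model. The .xmodel file name, input shape, and int8 quantization handling are illustrative assumptions and not details taken from the paper.

```python
# Minimal sketch: running a compiled AMC CNN (.xmodel) on the ZCU104 DPU
# with the Vitis AI runtime (VART). The model name, input shape, and
# quantization details below are placeholders, not values from the paper.
import numpy as np
import xir
import vart

def get_dpu_subgraph(xmodel_path):
    """Return the first DPU subgraph of a compiled xmodel."""
    graph = xir.Graph.deserialize(xmodel_path)
    subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
    return [s for s in subgraphs
            if s.has_attr("device") and s.get_attr("device").upper() == "DPU"][0]

def classify(runner, sample):
    """Run one preprocessed sample through the DPU and return the class index."""
    in_t = runner.get_input_tensors()[0]
    out_t = runner.get_output_tensors()[0]
    # DPU tensors are int8-quantized; scale the float input by its fix-point position.
    in_scale = 2 ** in_t.get_attr("fix_point")
    in_buf = [np.asarray(sample * in_scale, dtype=np.int8).reshape(tuple(in_t.dims))]
    out_buf = [np.empty(tuple(out_t.dims), dtype=np.int8)]
    job = runner.execute_async(in_buf, out_buf)
    runner.wait(job)
    return int(np.argmax(out_buf[0]))

if __name__ == "__main__":
    subgraph = get_dpu_subgraph("amc_cnn.xmodel")        # hypothetical file name
    runner = vart.Runner.create_runner(subgraph, "run")
    # Dummy input with the model's (batch-less) input shape, for illustration only.
    shape = tuple(runner.get_input_tensors()[0].dims)[1:]
    dummy = np.random.rand(*shape).astype(np.float32)
    print("Predicted modulation class:", classify(runner, dummy))
```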

The article, “Hand Gestures Recognition using Edge Computing System based on Vision Transformer and Lightweight CNN,” has been accepted for publication in the Journal of Ambient Intelligence and Humanized Computing (2022).

Khushi Gupta, Arshdeep Singh, Sreenivasa Reddy Yeduri, M B Srinivas, and Linga Reddy Cenkeramaddi, “Hand Gestures Recognition using Edge Computing System based on Vision Transformer and Lightweight CNN,” has been accepted for publication in the Journal of Ambient Intelligence and Humanized Computing (2022).

Keywords: Hand gesture recognition, NUS Hand Posture Dataset I, Turkey Ankara Ayrancı Anadolu High School’s Sign Language Digits Dataset, American Sign Language dataset, Vision transformer, Convolutional Neural Network, Edge computing device, Raspberry Pi, MobileNet, ResNet, VGGNet

Abstract: Human-computer interaction, human-robot interaction, robotics, healthcare systems, health assistive technologies, automotive user interfaces, crisis management, disaster relief, entertainment, and contactless communication in smart devices are just a few of the practical applications for hand gesture recognition. In this work, we propose two novel machine learning models for hand gesture recognition using three publicly available datasets: the NUS Hand Posture Dataset I, Turkey Ankara Ayrancı Anadolu High School’s Sign Language Digits Dataset, and the American Sign Language dataset. The developed models, based on a vision transformer and a lightweight Convolutional Neural Network, are deployed on an end-to-end edge computing system that accurately classifies hand gestures. The edge computing system presented here uses a Raspberry Pi. The designed models achieve an accuracy of 90–99% on the test datasets. The performance of the proposed models is also compared to that of pre-trained models such as MobileNet, ResNet, and VGGNet.

More details: DOI: https://doi.org/10.1007/s12652-022-04506-4
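
As a rough illustration of how a lightweight gesture CNN can be served on a Raspberry Pi, the sketch below runs inference with the TensorFlow Lite interpreter. The TFLite toolchain, model file name, input resolution, and preprocessing are assumptions for illustration; the paper's actual deployment pipeline may differ.

```python
# Minimal sketch: classifying a hand-gesture image on a Raspberry Pi with the
# TensorFlow Lite interpreter. The model file and preprocessing are illustrative
# assumptions, not details taken from the paper.
import numpy as np
from PIL import Image
import tflite_runtime.interpreter as tflite  # on a full TF install, tf.lite.Interpreter works the same way

def load_interpreter(model_path="gesture_cnn.tflite"):   # hypothetical file name
    interpreter = tflite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    return interpreter

def classify_image(interpreter, image_path):
    in_det = interpreter.get_input_details()[0]
    out_det = interpreter.get_output_details()[0]
    _, h, w, _ = in_det["shape"]
    # Resize and normalize the gesture image to the model's expected input.
    img = Image.open(image_path).convert("RGB").resize((w, h))
    x = np.expand_dims(np.asarray(img, dtype=np.float32) / 255.0, axis=0)
    interpreter.set_tensor(in_det["index"], x)
    interpreter.invoke()
    probs = interpreter.get_tensor(out_det["index"])[0]
    return int(np.argmax(probs)), float(np.max(probs))

if __name__ == "__main__":
    interp = load_interpreter()
    label, confidence = classify_image(interp, "hand_sample.jpg")  # placeholder image path
    print(f"Predicted gesture class {label} with confidence {confidence:.2f}")
```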