The paper titled “Radio Frequency Spectrum Sensing by Automatic Modulation Classification in Cognitive Radio System using Multiscale Deep CNN,” has been accepted for publication in the IEEE Sensors Journal (2021).

Rajesh Reddy Yakkati, Rakesh Reddy Yakkati, Rajesh Kumar Tripathy and Linga Reddy Cenkeramaddi, “Radio Frequency Spectrum Sensing by Automatic Modulation Classification in Cognitive Radio System using Multiscale Deep CNN,” has been accepted for publication in the IEEE Sensors Journal (2021).

Keywords: Modulation, Signal to noise ratio, Binary phase shift keying, AWGN channels, Rayleigh channels, Convolutional neural networks, Feature extraction

Abstract: Automatic modulation classification (AMC) is used in many applications such as cognitive radio, adaptive communication, electronic reconnaissance, and non-cooperative communications. Predicting the modulation class of an unknown radio signal without any prior information about the signal parameters is challenging. This paper proposes a novel multiscale deep-learning-based approach for automatic modulation classification using radio signals. The approach uses the fixed boundary range-based empirical wavelet transform (FBREWT), a multiscale analysis technique, to decompose the radio signal into sub-band signals or modes. The sub-band signals computed from the radio signal, combined with a deep convolutional neural network (CNN), are used to classify modulation types. The approach is tested using radio signals with different signal-to-noise ratio (SNR) values and four different channel types: additive white Gaussian noise (AWGN), the combination of Rayleigh fading and AWGN channels, the combination of Rician flat fading and AWGN channels, and the combination of Nakagami-m fading and AWGN channels. The results show that the proposed FBREWT-based deep-learning approach achieves an overall classification accuracy of 97% for AMC using radio signals with 10 dB SNR for the AWGN channel. Moreover, the proposed approach achieves accuracies of 94.56%, 95%, and 97.33% using radio signals with 10 dB SNR for the Rayleigh fading, Rician flat fading, and Nakagami-m fading channels combined with AWGN, respectively. A comparison of the proposed multiscale deep-learning-based approach with existing methods for AMC is also presented.

More details: DOI: 10.1109/JSEN.2021.3128395
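The FBREWT decomposition itself is specific to the paper, but the core idea — splitting the received signal into fixed-boundary spectral sub-bands that are then stacked as channels for a deep CNN — can be sketched as follows. This is a minimal illustration that assumes equally spaced boundaries; the paper's actual boundary selection and network are not reproduced here.

```python
import numpy as np

def fixed_boundary_subbands(x, n_bands=4):
    """Split a real signal into sub-band modes using fixed, equally
    spaced spectral boundaries (a simplified stand-in for FBREWT)."""
    X = np.fft.rfft(x)
    edges = np.linspace(0, len(X), n_bands + 1).astype(int)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        masked = np.zeros_like(X)
        masked[lo:hi] = X[lo:hi]          # keep only this sub-band
        bands.append(np.fft.irfft(masked, n=len(x)))
    return np.stack(bands)                # (n_bands, len(x)) -> CNN input

# Example: a noisy BPSK-like baseband signal
rng = np.random.default_rng(0)
t = np.arange(1024)
sig = np.sign(rng.standard_normal(1024)) * np.cos(0.2 * np.pi * t)
sig += 0.1 * rng.standard_normal(1024)
modes = fixed_boundary_subbands(sig, n_bands=4)
```

Because the masks partition the spectrum, the modes sum back to the original signal; the stacked array is the kind of multiscale input a CNN classifier would consume.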

Two papers have been accepted at IEEE iSES 2021.

1. Rakesh Reddy Yakkati, Sreenivasa Reddy Yeduri, Linga Reddy Cenkeramaddi, “Hand Gesture Classification Using Grayscale Thermal Images and Convolution Neural Network,” has been accepted at the IEEE International Symposium on Smart Electronic Systems (iSES), 2021.

Keywords: Convolution neural network, image classification, hand gesture, classification accuracy, and inference time.

Abstract: In this paper, we propose a convolution neural network for classifying grayscale images of hand gestures. For classification, we consider ten different hand gestures collected from various people using a thermal camera. We then compare the proposed model's performance in terms of classification accuracy and inference time to that of other benchmark models. We demonstrate through extensive results that the proposed model achieves higher classification accuracy while using a smaller model size. Furthermore, we show that the proposed model outperforms the benchmark models in terms of inference time.

2. Muralidhar Reddy Challa, Abhinav Kumar and Linga Reddy Cenkeramaddi, “Face Recognition Using mmWave RADAR Imaging,” has been accepted at the IEEE International Symposium on Smart Electronic Systems (iSES), 2021.

Keywords: Machine learning algorithms, Convolution, Face recognition, Signal processing algorithms, Radar imaging, Feature extraction, Power systems

Abstract: The current work presents a novel approach to signal processing and face recognition based on 60 GHz mmWave RADAR imaging. Machine learning algorithms such as a Convolutional Auto-Encoder and Random Forest are employed to implement the face recognition scheme. The work presents an approach to computing and processing higher-dimensional RADAR imaging information through extreme feature extraction followed by a simple Random Forest, thus enabling a computationally inexpensive algorithm for a mobile-friendly implementation.

More details: DOI: 10.1109/iSES52644.2021.00081

The paper titled, “Robust Hand Gestures Recognition using a Deep CNN and Thermal Images,” has been accepted for publication in the IEEE Sensors Journal.

Daniel S. Breland, Aveen Dayal, Ajit Jha, Phaneendra K. Yalavarthy, Om J. Pandey, and Linga R. Cenkeramaddi, “Robust Hand Gestures Recognition using a Deep CNN and Thermal Images,” IEEE Sensors Journal, 2021.

Keywords: Cameras, Gesture recognition, Thermal sensors, Image resolution, Sensors, Imaging, Image sensors

Abstract: RGB cameras are used for hand gesture recognition in medical systems and assistive technologies, human-computer interaction, human-robot interaction, industrial automation, virtual environment control, sign language translation, crisis and disaster management, entertainment and computer games, and many other applications. However, their performance is limited, especially in low-light conditions. In this paper, we propose a robust hand gesture recognition system based on high-resolution thermal imaging that is independent of ambient light. A dataset of 14,400 thermal hand gesture images is constructed, separated into two color tones. We also propose a deep CNN to classify the high-resolution hand gestures accurately. The proposed models were tested on Raspberry Pi 4 and Nvidia AGX edge computing devices, and the results were compared to benchmark models. The model achieves an accuracy of 98.81% and an inference time of 75.138 ms on the Nvidia Jetson AGX. In contrast to hand gesture recognition systems based on RGB cameras, which have limited performance in low-light conditions, the proposed system based on reliable high-resolution thermal images is well suited to a wide range of applications.

More details: DOI: 10.1109/JSEN.2021.3119977

The article titled, “Hybrid BLE/LTE/Wi-Fi/LoRa Switching Scheme for UAV-Assisted Wireless Networks,” has been accepted for publication in IEEE ANTS 2021, 13–16 December, Hyderabad, India.

Wilson A N, Y. Sreenivasa Reddy, Ajit Jha, Abhinav Kumar, Linga Reddy Cenkeramaddi, “Hybrid BLE/LTE/Wi-Fi/LoRa Switching Scheme for UAV-Assisted Wireless Networks,” IEEE ANTS 2021, 13–16 December, Hyderabad, India.

Keywords: Energy consumption, Protocols, Wireless networks, Switches, Autonomous aerial vehicles, Throughput, Communications technology

Abstract: Unmanned aerial vehicles are deployed in multiple layers to monitor an area and report the information to the ground control station. When a single short-range communication protocol such as Bluetooth Low Energy (BLE) or Wi-Fi is used, the data has to pass through multiple hops, which in turn increases the transmission delay. Even though the LoRa protocol supports longer distances, its limited bandwidth also increases delay. Thus, in this work, we propose a hybrid BLE/LTE/Wi-Fi/LoRa switching scheme that consumes less energy while also reducing the average delay in the network. The proposed scheme switches between the communication technologies based on which consumes the least energy. The performance of the proposed hybrid switching scheme is compared with the individual communication protocols in terms of both energy consumption and average delay. Through extensive numerical results, we show that the proposed hybrid switching scheme outperforms the individual communication technologies.

More details: DOI: 10.1109/ANTS52808.2021.9936962
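The switching rule described in the abstract — pick whichever technology spends the least energy for the current link — can be sketched with a toy energy model. The range and energy-per-bit numbers below are purely illustrative placeholders, not values from the paper.

```python
# Hypothetical per-protocol profiles; numbers are illustrative only.
PROFILES = {
    "BLE":  {"range_m": 100,    "energy_per_bit_j": 1e-7},
    "WiFi": {"range_m": 250,    "energy_per_bit_j": 5e-7},
    "LTE":  {"range_m": 5_000,  "energy_per_bit_j": 2e-6},
    "LoRa": {"range_m": 10_000, "energy_per_bit_j": 8e-6},
}

def select_protocol(distance_m, payload_bits, profiles=PROFILES):
    """Among protocols whose range covers the link, pick the one with
    the lowest total transmission energy for the payload."""
    feasible = {n: p for n, p in profiles.items() if p["range_m"] >= distance_m}
    if not feasible:
        raise ValueError("no protocol covers this link distance")
    return min(feasible, key=lambda n: feasible[n]["energy_per_bit_j"] * payload_bits)
```

With these placeholder numbers, a 50 m hop selects BLE while a 3 km hop falls back to LTE; the paper's scheme additionally accounts for the average delay.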

The Best Master’s Thesis in Information and Communication Technologies – 2021

The thesis titled “Hand Gestures Recognition using Thermal Images,” written by the master's student Daniel Skomedal Breland under the supervision of Prof. Linga Reddy Cenkeramaddi, has been awarded the best master's thesis in ICT for the year 2021.

Hand Gestures Recognition using Thermal Imaging.

The goal of this project is to develop a robust and reliable hand gesture recognition system using a thermal camera. Hand gestures are an important communication tool in many practical scenarios and are used in a variety of applications, including medical, entertainment, and industrial settings. Human-robot interaction is growing, and several methods exist for it; gestures, for instance, make it possible to gain access to tight and harsh places. The majority of gesture recognition is done with RGB cameras, which have the disadvantage of not being able to recognize gestures in low-light situations. Thermal cameras can operate in low-light environments because they are unaffected by external light.

The article titled “A Velocity Estimation Technique using Camera for the Targets in the out of Field of View (FoV) of mmWave FMCW Radars” has been accepted for publication in MDPI Electronics.

Arav Pandya, Ajit Jha, Linga Reddy Cenkeramaddi, “A Velocity Estimation Technique using Camera for the Targets in the out of Field of View (FoV) of mmWave FMCW Radars,” MDPI Electronics, 2021.

Keywords: velocity estimation; optical flow; monocular camera; autonomous systems; mmWave radar

Abstract: Perception in terms of object detection, classification, and dynamics estimation (position and velocity) comprises fundamental functionalities that autonomous agents (unmanned ground vehicles, unmanned aerial vehicles, or robots) need in order to navigate safely and autonomously. To date, various sensors have been used individually or in combination to achieve this goal. In this paper, we present a novel method for leveraging the millimeter wave (mmWave) radar's ability to accurately measure position and velocity in order to improve and optimize velocity estimation from a monocular camera (using optical flow) and machine learning techniques. The proposed method eliminates ambiguity in optical-flow velocity estimation when the object of interest is at the edge of the frame or far away from the camera, without requiring camera–radar calibration. Moreover, algorithms of various complexity were implemented using a custom dataset, and each of them successfully detected the object and estimated its velocity accurately, independently of the object's distance and location in the frame. Here, we present a complete implementation of camera–mmWave radar late feature fusion to improve the camera's velocity estimation performance. It includes setup design, data acquisition, dataset development, and finally a lightweight ML model that successfully maps the mmWave radar features to the camera, allowing it to perceive and estimate the dynamics of a target object without any calibration.

More details: https://doi.org/10.3390/electronics10192397
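A one-line pinhole-camera relation shows why optical flow alone is ambiguous: the same pixel motion can come from a slow, near object or a fast, far one, so depth (which the mmWave radar can supply) is needed to recover metric velocity. The function below is a textbook geometric relation, not the paper's learned fusion model.

```python
def lateral_velocity(flow_px_per_s, depth_m, focal_px):
    """Pinhole model: lateral metric velocity v = (pixel flow) * Z / f.
    Without the depth Z, the optical flow alone cannot disambiguate v."""
    return flow_px_per_s * depth_m / focal_px

# The same 30 px/s flow implies very different velocities at different depths
near = lateral_velocity(30.0, depth_m=2.0, focal_px=600.0)   # 0.1 m/s
far = lateral_velocity(30.0, depth_m=40.0, focal_px=600.0)   # 2.0 m/s
```

This depth dependence is exactly the ambiguity the camera–radar fusion in the paper is designed to resolve.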

The article titled, “RAMAN: Reinforcement learning inspired algorithm for mapping applications onto mesh Network-on-Chip” has been accepted for publication at 23rd ACM/IEEE International Workshop on System-Level Interconnect Pathfinding (SLIP), 2021.

Jitesh Choudhary, Soumya J, and Linga Reddy Cenkeramaddi, “RAMAN: Reinforcement learning inspired algorithm for mapping applications onto mesh Network-on-Chip” has been accepted for publication at the 23rd ACM/IEEE International Workshop on System-Level Interconnect Pathfinding (SLIP), 2021.

Keywords: Costs, Q-learning, Machine learning algorithms, Scalability, Conferences, Network-on-chip, Integer linear programming

Abstract: Application mapping in Network-on-Chip (NoC) design is considered a vital challenge because of its NP-hard nature. Many efforts have been made to address the application mapping problem, but none has satisfied all the requirements. For example, Integer Linear Programming (ILP) achieves the best possible solution but lacks scalability. Advancements in Machine Learning (ML) have added new dimensions to solving the application mapping problem. This paper proposes RAMAN, a Reinforcement Learning (RL) inspired algorithm for mapping applications onto mesh NoC. RAMAN is a modified Q-learning technique inspired by RL, aiming to achieve the minimum communication cost for the application mapping problem. The results of RAMAN demonstrate that RL has enormous potential to solve the application mapping problem without much complexity and computational cost. RAMAN achieves a communication cost within 6% of the optimal cost determined by ILP. Considering the computational overheads and complexity, the results of RAMAN are encouraging. Future work will improve RAMAN's performance and provide a new perspective on the application mapping problem.

More details: DOI: 10.1109/SLIP52707.2021.00019
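To make the abstract's idea concrete, here is a toy RL-style mapper: tasks are placed tile by tile, a Q-table over partial placements is updated from each episode's final communication cost, and the best mapping seen is kept. The traffic graph, the 2x2 mesh, and the episodic update rule are illustrative simplifications, not the RAMAN algorithm itself.

```python
import random
from collections import defaultdict

# Toy instance (illustrative): 3 communicating tasks on a 2x2 mesh NoC.
TRAFFIC = {(0, 1): 10, (1, 2): 5, (0, 2): 2}   # (task_a, task_b): traffic volume
TILES = [(x, y) for x in range(2) for y in range(2)]

def comm_cost(placement):
    """Traffic-weighted hop count (Manhattan distance between tiles)."""
    def hops(a, b):
        return abs(TILES[a][0] - TILES[b][0]) + abs(TILES[a][1] - TILES[b][1])
    return sum(v * hops(placement[a], placement[b]) for (a, b), v in TRAFFIC.items())

def rl_map(episodes=2000, alpha=0.5, eps=0.2, seed=1):
    rng = random.Random(seed)
    Q = defaultdict(float)            # Q[(partial placement, tile)] ~ expected cost
    best, best_cost = None, float("inf")
    for _ in range(episodes):
        placement = []
        for _task in range(3):        # epsilon-greedy tile choice per task
            free = [t for t in range(4) if t not in placement]
            if rng.random() < eps:
                tile = rng.choice(free)
            else:
                tile = min(free, key=lambda t: Q[(tuple(placement), t)])
            placement.append(tile)
        cost = comm_cost(placement)
        for k in range(3):            # back up the episode cost along the path
            s, a = tuple(placement[:k]), placement[k]
            Q[(s, a)] += alpha * (cost - Q[(s, a)])
        if cost < best_cost:
            best, best_cost = list(placement), cost
    return best, best_cost
```

On this tiny instance the heaviest-traffic task pairs end up on adjacent tiles, matching the brute-force optimum; RAMAN targets the same objective at realistic scales.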

The manuscript entitled “Joint Resource Allocation and UAV Scheduling with Ground Radio Station Sleeping” has been accepted for publication in IEEE Access.

Akhileswar Chowdary (Student Member, IEEE), Yoghitha Ramamoorthi (Member, IEEE), Abhinav Kumar (Senior Member, IEEE), and Linga Reddy Cenkeramaddi (Senior Member, IEEE), “Joint Resource Allocation and UAV Scheduling with Ground Radio Station Sleeping,” IEEE Access, 2021.

Keywords: Unmanned aerial vehicles, NOMA, Signal to noise ratio, Interference, Throughput, Quality of service, Resource management

Abstract: Applications of unmanned aerial vehicles (UAVs) have advanced rapidly in recent years. UAVs are used for a variety of applications, including surveillance, disaster management, precision agriculture, and weather forecasting. In the near future, the growing number of UAV applications will necessitate densification of the UAV infrastructure (ground radio stations (GRSs) and ground control stations (GCSs)) at the expense of increased energy consumption for UAV communications. Maximizing the energy efficiency of this UAV infrastructure is therefore important. Motivated by this, we propose joint resource allocation and UAV scheduling with GRS sleeping (GRSS). Further, we propose the use of coordinated multi-point (CoMP) with joint transmission (JT) and non-orthogonal multiple access (NOMA) along with GRSS to increase the coverage and data rates, respectively. Through exhaustive simulation results, we show that the proposed CoMP along with GRSS results in up to 10% higher energy savings and a 24% increase in coverage. Further, NOMA along with GRSS results in up to a 9% enhancement in the throughput of the system.

More details: DOI: 10.1109/ACCESS.2021.3111087

The paper titled, “Point Cloud Instance Segmentation for Automatic Electric Vehicle Battery Disassembly” has been accepted for publication in Intelligent Technologies and Applications: 4th International Conference, INTAP 2021.

Henrik Bradland, Martin Choux and Linga Reddy Cenkeramaddi, “Point Cloud Instance Segmentation for Automatic Electric Vehicle Battery Disassembly”, Intelligent Technologies and Applications: 4th International Conference, INTAP 2021.

Keywords: Graph CNN, Part segmentation, Large point clouds, Structured-light camera

Abstract: This paper describes a novel design based on recent 3D perception methods for capturing point clouds and segmenting instances of cabling found on electric vehicle battery packs. The use of cutting-edge perception algorithm architectures, such as graph-based and voxel-based convolution, in industrial autonomous lithium-ion battery pack disassembly is investigated. The proposed approach focuses on the challenge of obtaining a desirable representation of any battery pack using an industrial robot in conjunction with a high-end structured-light camera, with “end-to-end” and “model-free” as design constraints. The proposed design employs self-captured datasets comprising several battery packs that have been captured and labeled. The datasets are then used to create a perception system. The results show that graph-based deep-learning algorithms can be scaled up to 50,000 inputs while still exhibiting strong performance in terms of accuracy and processing time, and that an instance segmentation system can run in less than two seconds. Using off-the-shelf hardware, we demonstrate that a 3D perception system is industrially viable and competitive compared to a 2D perception system. (The algorithms studied in this article are implemented in Python and can be obtained through the following link: https://github.com/HenrikBradland-Nor/intap21.)

More details: DOI: 10.1007/978-3-031-10525-8_20

The paper titled, “Classification of Targets using Statistical Features from Range FFT of mmWave FMCW Radars”, has been accepted for publication in the MDPI Electronics (Artificial Intelligence Circuits and Systems (AICAS)), 2021.

Jyoti Bhatia, Aveen Dayal, Ajit Jha, Santosh Kumar Vishvakarma, Soumya J., Srinivas M. B., Phaneendra K. Yalavarthy, Abhinav Kumar, V. Lalitha, Sagar Koorapati, and Linga Reddy Cenkeramaddi, “Classification of Targets using Statistical Features from Range FFT of mmWave FMCW Radars”, has been accepted for publication in the MDPI Electronics (Artificial Intelligence Circuits and Systems (AICAS)), 2021.

Keywords: mmWave radar, FMCW radar, Autonomous systems, Machine learning, Ground station radar, Targets classification, Range FFT features

Abstract: Radars with mmWave frequency modulated continuous wave (FMCW) technology accurately estimate the range and velocity of targets in their field of view (FoV). The target angle of arrival (AoA) estimation can be improved by increasing the number of receiving antennas or by using multiple-input multiple-output (MIMO). However, obtaining target features such as target type remains challenging. In this paper, we present a novel target classification method based on machine learning and features extracted from a range fast Fourier transform (FFT) profile, using mmWave FMCW radars operating in the frequency range of 77–81 GHz. The measurements are carried out in a variety of realistic situations, including pedestrians, automobiles, and unmanned aerial vehicles (UAVs, also known as drones). The peak, width, area, variance, and range are collected from the range FFT profile peaks and fed into a machine learning model. To evaluate the performance, various lightweight classification models such as logistic regression, Naive Bayes, support vector machine (SVM), and lightweight gradient boosting machine (GBM) are used. We demonstrate our findings using outdoor measurements and achieve a classification accuracy of 95.6% with LightGBM. The proposed method will be extremely useful in a wide range of applications, including cost-effective and dependable ground station traffic management and control systems for autonomous operations, as well as advanced driver-assistance systems (ADAS). The presented technique extends the potential of mmWave FMCW radar beyond the detection of range, velocity, and AoA to classification. As a result of this added capability, mmWave FMCW radars will be more robust in computer vision, visual perception, and fully autonomous ground control and traffic management cyber-physical systems.

More details: https://doi.org/10.3390/electronics10161965
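The feature set named in the abstract (peak, width, area, variance, range) can be extracted from a range-FFT profile in a few lines. The sketch below uses a synthetic single-tone beat signal and a hypothetical half-max width definition; the paper's exact extraction windows and the downstream LightGBM classifier are not reproduced.

```python
import numpy as np

def range_fft_features(adc_samples, win=8):
    """Statistical features of the strongest range-FFT peak: peak height,
    width (bins above half-max near the peak), area under the peak window,
    profile variance, and the peak's range bin."""
    profile = np.abs(np.fft.rfft(adc_samples))
    k = int(np.argmax(profile))
    lo, hi = max(0, k - win), min(len(profile), k + win + 1)
    return {
        "peak": float(profile[k]),
        "width": int(np.sum(profile[lo:hi] > profile[k] / 2)),
        "area": float(np.sum(profile[lo:hi])),
        "variance": float(np.var(profile)),
        "range_bin": k,
    }

# Synthetic FMCW beat signal: a single target maps to one dominant FFT tone
n = 256
t = np.arange(n)
beat = np.cos(2 * np.pi * 40 * t / n)          # target -> range bin 40
noise = 0.01 * np.random.default_rng(0).standard_normal(n)
feats = range_fft_features(beat + noise)
```

Feature dictionaries like `feats`, computed per detected peak, would then be stacked into a table and fed to a lightweight classifier such as logistic regression or LightGBM, as the paper does.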