The article titled “Light Weight Deep Convolutional Neural Network for Background Sound Classification in Speech Signals” has been accepted for publication in The Journal of the Acoustical Society of America (JASA), 2022.

Aveen Dayal, Sreenivasa Reddy Yeduri, Balu Harshavardan Koduru, Rahul Kumar Jaiswal, Soumya J, Srinivas M. B., Om Jee Pandey, and Linga Reddy Cenkeramaddi, “Light Weight Deep Convolutional Neural Network for Background Sound Classification in Speech Signals,” The Journal of the Acoustical Society of America (JASA), 2022.

ABSTRACT: Recognizing background information in human speech signals is extremely useful in a wide range of practical applications, and many articles on background sound classification have been published. However, the task has not been addressed for background sounds embedded in real-world human speech signals. Thus, this work proposes a lightweight deep convolutional neural network (CNN) in conjunction with spectrograms for efficient background sound classification with practical human speech signals. The proposed model classifies 11 different background sounds, namely airplane, airport, babble, car, drone, exhibition, helicopter, restaurant, station, street, and train sounds embedded in human speech signals. The proposed deep CNN model consists of four convolution layers, four max-pooling layers, and one fully connected layer. The model is tested on human speech signals with varying signal-to-noise ratios (SNRs). Based on the results, the proposed deep CNN model utilizing spectrograms achieves an overall background sound classification accuracy of 95.2% on human speech signals spanning a wide range of SNRs. It is also observed that the proposed model outperforms the benchmark models in terms of both accuracy and inference time when evaluated on edge computing devices.
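
For readers who want a concrete feel for the architecture described in the abstract, the Python/Keras sketch below builds a comparable lightweight CNN with four convolution layers, four max-pooling layers, and a single fully connected output layer over the 11 background classes. This is not the authors' released code: the filter counts, kernel sizes, and spectrogram input shape are illustrative assumptions, and the full details are in the paper linked below.

# Minimal sketch (assumed hyperparameters) of a lightweight CNN for
# spectrogram-based background sound classification: four Conv2D layers,
# four MaxPooling2D layers, and one fully connected softmax layer.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 11              # airplane, airport, babble, car, drone, exhibition,
                              # helicopter, restaurant, station, street, train
INPUT_SHAPE = (128, 128, 1)   # assumed spectrogram size (frequency x time x 1)

model = models.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    layers.Conv2D(16, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # single fully connected layer
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

Such a shallow stack keeps the parameter count and inference time low, which is consistent with the paper's focus on deployment on edge computing devices.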

More details: https://doi.org/10.1121/10.0010257
