Speech Enhancement With Improved Deep Learning Methods

Speech Enhancement with Improved Deep Learning Methods
Author: Mojtaba Hasannezhad
Release: 2021
ISBN-10: OCLC:1393267167

Book Synopsis Speech Enhancement with Improved Deep Learning Methods by Mojtaba Hasannezhad

Book excerpt: In real-world environments, speech signals are often corrupted by ambient noises during their acquisition, leading to degradation of the quality and intelligibility of the speech for a listener. As one of the central topics in the speech processing area, speech enhancement (SE) aims to recover clean speech from such a noisy mixture. Many traditional speech enhancement methods based on statistical signal processing have been proposed and widely used in the past. However, the performance of these methods was limited, and they failed in sophisticated acoustic scenarios. Over the last decade, deep learning, as a primary tool for developing data-driven information systems, has led to revolutionary advances in speech enhancement. In this context, speech enhancement is treated as a supervised learning problem, which does not suffer from the issues faced by traditional methods. This supervised learning problem has three main components: input features, learning machine, and training target. In this thesis, various deep learning architectures and methods are developed to address the current limitations of these three components.

First, we propose a serial hybrid neural network model that integrates a new low-complexity, fully convolutional neural network (CNN) with a long short-term memory (LSTM) network to estimate a phase-sensitive mask for speech enhancement. Instead of using traditional acoustic features as the model input, a CNN is employed to automatically extract sophisticated speech features that maximize the performance of the model. An LSTM network is then chosen as the learning machine to model the strong temporal dynamics of speech. The model is designed to take full advantage of the temporal dependencies and spectral correlations present in the input speech signal while keeping the model complexity low. An attention mechanism is also embedded to adaptively recalibrate the useful CNN-extracted features. Through extensive comparative experiments, we show that the proposed model significantly outperforms some known neural-network-based speech enhancement methods in the presence of highly non-stationary noises, while requiring relatively few model parameters compared to some commonly employed deep neural network (DNN)-based methods.

Most of the available approaches to speech enhancement using deep neural networks face a number of limitations: they do not exploit the information contained in the phase spectrum, while their high computational complexity and memory requirements make them unsuitable for real-time applications. Hence, a new phase-aware composite deep neural network (PACDNN) is proposed to address these challenges. Specifically, magnitude processing with a spectral mask and phase reconstruction using the phase derivative are proposed as key subtasks of the new network to simultaneously enhance the magnitude and phase spectra. In addition, the neural network is meticulously designed to exploit the strong temporal and spectral dependencies of speech, while its components operate independently and in parallel to speed up the computation. The advantages of the proposed PACDNN model over some well-known DNN-based SE methods are demonstrated through extensive comparative experiments.
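To make the first of these models more concrete, the following PyTorch sketch shows one plausible way to wire a low-complexity CNN front end, a channel-attention block, and an LSTM into a per-frame mask estimator. It is only a minimal illustration of the idea described above: the layer sizes, the squeeze-and-excitation style attention, and the class names are assumptions, not the architecture used in the thesis.

```python
# Hedged sketch: CNN feature extraction + channel attention + LSTM mask estimation.
# All sizes (16 channels, 257 frequency bins, 256 LSTM units) are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style recalibration of CNN feature maps."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                     # x: (batch, channels, time, freq)
        w = x.mean(dim=(2, 3))                # global average pooling -> (batch, channels)
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                          # reweighted feature maps

class CnnLstmMaskEstimator(nn.Module):
    """Estimates a bounded mask from the noisy log-magnitude spectrogram."""
    def __init__(self, n_freq=257, channels=16, lstm_units=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.attention = ChannelAttention(channels)
        self.lstm = nn.LSTM(channels * n_freq, lstm_units, batch_first=True)
        self.mask_head = nn.Sequential(nn.Linear(lstm_units, n_freq), nn.Sigmoid())

    def forward(self, noisy_logmag):          # (batch, time, n_freq)
        x = noisy_logmag.unsqueeze(1)         # add channel dim -> (batch, 1, time, freq)
        x = self.attention(self.encoder(x))   # learned features, adaptively recalibrated
        b, c, t, f = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * f)   # one feature vector per frame
        x, _ = self.lstm(x)                   # temporal modelling
        return self.mask_head(x)              # mask in [0, 1] per time-frequency bin

# In a setup like this, the output would be regressed toward a truncated
# phase-sensitive mask, PSM = (|S| / |Y|) * cos(theta_S - theta_Y), computed
# from the clean (S) and noisy (Y) STFTs, then applied to the noisy magnitude.
```

At inference, multiplying the noisy magnitude spectrogram by the estimated mask and resynthesizing with the noisy phase gives the enhanced waveform in this kind of arrangement.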
Considering that some acoustic scenarios could be better handled by a number of low-complexity sub-DNNs, each specifically designed to perform a particular task, we propose another very low-complexity, fully convolutional framework that performs speech enhancement in the short-time modified discrete cosine transform (STMDCT) domain. This framework comprises two main stages: classification and mapping. In the former stage, a CNN-based network is proposed to classify the input speech based on its utterance-level attributes, i.e., signal-to-noise ratio and gender. In the latter stage, four well-trained CNNs, each specialized for a specific and simple task, map the STMDCT of the noisy input speech to that of the clean speech. Since this framework operates in the STMDCT domain, there is no need to deal with phase information, i.e., no phase-related computation is required. Moreover, the training target length is only one-half of that in the previous chapters, leading to lower computational complexity and reduced demands on the mapping CNNs. Although there are multiple branches in the model, only one of the expert CNNs is active at a time, i.e., the computational burden is confined to a single branch at any given time. Also, the mapping CNNs are fully convolutional and their computations are performed in parallel, further reducing the computational time. Moreover, the proposed framework reduces latency by 55% compared to the models in the previous chapters. Through extensive experimental studies, it is shown that this multi-branch framework (referred to as MBSE) not only gives superior speech enhancement performance but also has lower complexity than some existing deep learning-based methods.
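A minimal sketch of the two-stage, multi-branch routing idea is given below, again in PyTorch. It assumes the input has already been converted to frames of transform-domain coefficients (a stand-in for the STMDCT front end), uses four illustrative utterance-level classes, and invents all layer sizes and names; only the routing principle described above, where a classifier selects a single expert mapping CNN per utterance, is being shown.

```python
# Hedged sketch: classification stage + per-utterance expert selection.
# Layer sizes, class count, and names are assumptions, not the thesis design.
import torch
import torch.nn as nn

def make_expert(channels=16):
    """A small fully convolutional mapping network: noisy coefficients -> clean estimate."""
    return nn.Sequential(
        nn.Conv1d(1, channels, kernel_size=5, padding=2), nn.ReLU(),
        nn.Conv1d(channels, channels, kernel_size=5, padding=2), nn.ReLU(),
        nn.Conv1d(channels, 1, kernel_size=5, padding=2),
    )

class MultiBranchEnhancer(nn.Module):
    """Stage 1 classifies the utterance (e.g. SNR level x gender -> 4 classes);
    stage 2 runs only the selected expert CNN on the utterance's frames."""
    def __init__(self, n_bins=256, n_classes=4):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, n_classes),
        )
        self.experts = nn.ModuleList([make_expert() for _ in range(n_classes)])

    def forward(self, noisy):                 # noisy: (batch, frames, n_bins)
        b, t, f = noisy.shape
        frames = noisy.reshape(b * t, 1, f)
        # Utterance-level decision: average per-frame classifier logits over time.
        logits = self.classifier(frames).reshape(b, t, -1).mean(dim=1)
        branch = logits.argmax(dim=1)         # one expert index per utterance
        out = torch.empty_like(noisy)
        for i, expert in enumerate(self.experts):
            sel = branch == i                 # only experts actually selected in the batch run
            if sel.any():
                x = noisy[sel].reshape(-1, 1, f)
                out[sel] = expert(x).reshape(-1, t, f)
        return out
```

Because only the chosen expert runs for a given utterance, the per-utterance cost stays close to that of a single small fully convolutional network, which matches the low-complexity motivation stated in the excerpt.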


Speech Enhancement with Improved Deep Learning Methods Related Books

Speech Enhancement with Improved Deep Learning Methods
Language: en
Pages: 0
Authors: Mojtaba Hasannezhad
Categories:
Type: BOOK - Published: 2021 - Publisher:

In real-world environments, speech signals are often corrupted by ambient noises during their acquisition, leading to degradation of quality and intelligibility of the speech for a listener.
New Era for Robust Speech Recognition
Language: en
Pages: 433
Authors: Shinji Watanabe
Categories: Computers
Type: BOOK - Published: 2017-10-30 - Publisher: Springer

This book covers the state-of-the-art in deep neural-network-based methods for noise robustness in distant speech recognition applications. It provides insights ...
Single-Channel Speech Enhancement Based on Deep Neural Networks
Language: en
Pages: 0
Authors: Zhiheng Ouyang
Categories:
Type: BOOK - Published: 2020 - Publisher:

Speech enhancement (SE) aims to improve the quality of degraded speech. Recently, researchers have resorted to deep learning as a primary tool for speech enhancement.
Speech Enhancement
Language: en
Pages: 715
Authors: Philipos C. Loizou
Categories: Technology & Engineering
Type: BOOK - Published: 2013-02-25 - Publisher: CRC Press

With the proliferation of mobile devices and hearing devices, including hearing aids and cochlear implants, there is a growing and pressing need to design algorithms ...
Speech Enhancement in the STFT Domain
Language: en
Pages: 112
Authors: Jacob Benesty
Categories: Technology & Engineering
Type: BOOK - Published: 2011-09-18 - Publisher: Springer Science & Business Media

This work addresses this problem in the short-time Fourier transform (STFT) domain. We divide the general problem into five basic categories depending on the number ...