LI NAN's Projects
A minimal unofficial PyTorch implementation of "A Convolutional Recurrent Neural Network for Real-Time Speech Enhancement" (CRN)
AcadHomepage: A Modern and Responsive Academic Personal Homepage
Data augmentation: add reverberation and noise to speech (a minimal SNR-mixing sketch appears after this list).
Speech enhancement / speech separation / sound source localization
ICA_NMF_JADE
A PyTorch implementation of Conv-TasNet
K236 task
Speech Localization and Separation using DNNs
End-to-End Speech Processing Toolkit
Speech enhancement using a Kalman filter (KF)
tju_12
Config files for my GitHub profile.
DOA, VAD and KWS for ReSpeaker Microphone Array
Different implementations of "Weighted Prediction Error" for speech dereverberation
pytorch-dialect-speech-classification
Robust Speech Recognition Using Generative Adversarial Networks (GAN)
Front-end speech processing aims at extracting proper features from short-term segments of a speech utterance, known as frames. It is a prerequisite step toward any pattern recognition problem employing speech or audio (e.g., music). Here, we are interested in voice disorder classification, that is, developing two-class classifiers which can discriminate between utterances of a subject suffering from, say, vocal fold paralysis and utterances of a healthy subject. The mathematical modeling of the speech production system in humans suggests that an all-pole system function is justified [1-3]. As a consequence, linear prediction coefficients (LPCs) constitute a first choice for modeling the magnitude of the short-term spectrum of speech. LPC-derived cepstral coefficients are guaranteed to discriminate between the contribution of the system (e.g., the vocal tract) and that of the excitation. Taking into account the characteristics of the human ear, the mel-frequency cepstral coefficients (MFCCs) emerged as descriptive features of the speech spectral envelope. Similarly to MFCCs, the perceptual linear prediction coefficients (PLPs) can also be derived. These traditional features, so to speak, will be tested against agnostic features extracted by convolutional neural networks (CNNs) (e.g., auto-encoders) [4]. The pattern recognition step will be based on Gaussian mixture model (GMM) classifiers, K-nearest neighbor classifiers, Bayes classifiers, as well as deep neural networks (a minimal MFCC+GMM sketch appears after this list). The Massachusetts Eye and Ear Infirmary dataset (MEEI dataset) [5] will be exploited. At the application level, a library for feature extraction and classification in Python will be developed. Credible publicly available resources, such as KALDI, will be used toward achieving our goal. Comparisons will be made against [6-8].
TensorFlow 1.4.0 (the installed version).
Speech enhancement with DNN and RCED (redundant convolutional encoder-decoder) models
Deep-learning-based speech enhancement and dereverberation
LaTeX template for Tianjin University doctoral/master's theses, revised to meet the 2021 requirements; it compiles directly on Overleaf. :star: A thesis written with it was successfully submitted to the Tianjin University Library for archiving! (2021.12.24)
This repo summarizes tutorials, datasets, papers, code, and tools for the speech separation and speaker extraction tasks. Pull requests are welcome.
A simple VAD method (an illustrative energy-threshold sketch appears after this list)
Voice activity detection (VAD) papers and code (from the 1980s onward), organized by category.
A PyTorch implementation of Wave-U-Net, adapted to speech enhancement.
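
For the data-augmentation entry above, here is a minimal NumPy sketch of the noise half of the task: mixing noise into speech at a target SNR. The function name `mix_at_snr` and the scaling derivation are my own assumptions for illustration, not code from that repo; the reverberation half would additionally convolve the speech with a room impulse response.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix noise into speech at a target signal-to-noise ratio in dB.

    Hypothetical helper for illustration; not taken from the repo above.
    """
    # Loop the noise if it is shorter than the speech, then trim to length.
    if len(noise) < len(speech):
        reps = int(np.ceil(len(speech) / len(noise)))
        noise = np.tile(noise, reps)
    noise = noise[: len(speech)]

    # Scale the noise so that 10*log10(P_speech / P_noise) equals snr_db.
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12  # guard against all-zero noise
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise
```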
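For the voice-disorder classification project, this is a minimal sketch of one of the pipelines it describes, MFCC features with one Gaussian mixture model per class, assuming librosa and scikit-learn are available. The function names, label strings, and hyperparameters (13 MFCCs, 8 diagonal-covariance components) are placeholders, not values from the project.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Frame-level MFCCs for one utterance, shape (n_frames, n_mfcc)."""
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_class_gmm(paths: list, n_components: int = 8) -> GaussianMixture:
    """Fit one GMM on the pooled frames of all utterances of one class."""
    frames = np.vstack([mfcc_features(p) for p in paths])
    return GaussianMixture(n_components=n_components,
                           covariance_type="diag").fit(frames)

def classify(path: str, gmm_healthy: GaussianMixture,
             gmm_pathological: GaussianMixture) -> str:
    """Pick the class whose GMM gives the higher mean frame log-likelihood."""
    frames = mfcc_features(path)
    return ("healthy"
            if gmm_healthy.score(frames) > gmm_pathological.score(frames)
            else "pathological")
```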
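And for the simple-VAD entry, a minimal energy-threshold sketch. The frame sizes (25 ms window, 10 ms hop at 16 kHz) and the relative threshold are assumptions chosen for illustration, not necessarily that repo's method.

```python
import numpy as np

def energy_vad(signal: np.ndarray, frame_len: int = 400, hop: int = 160,
               rel_threshold_db: float = -35.0) -> np.ndarray:
    """Boolean speech/non-speech decision per frame from short-term energy."""
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    frames = np.stack([signal[i * hop: i * hop + frame_len]
                       for i in range(n_frames)])
    energy_db = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    # A frame counts as speech if it lies within |rel_threshold_db| dB
    # of the loudest frame in the utterance.
    return energy_db > energy_db.max() + rel_threshold_db
```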