This project will use Python and Mozilla's DeepSpeech ASR (automatic speech recognition) engine on several platforms (Raspberry Pi 4 with 1 GB RAM, Nvidia Jetson Nano, Windows PC, Linux PC, Samsung Galaxy A50, and Huawei P20) to develop a refined ASR engine for English and Chinese (Mandarin) with the functionalities listed below. Source code, instructions, data, and API documentation will be delivered when development is complete. The DeepSpeech architecture is an end-to-end trainable, character-level, deep recurrent neural network (RNN): it takes audio features as input and outputs characters directly, i.e. the transcription of the audio, and it uses LSTM (long short-term memory) cells rather than GRU (gated recurrent unit) cells. This project targets a word error rate (WER) below 6% overall, and below 3% for key phrases and keywords, close to human-level performance for both English and Chinese.
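To make that architecture description concrete, here is a minimal PyTorch sketch of the general shape: per-frame dense layers, an LSTM recurrent layer, and a character-level output trained with CTC loss. Layer sizes and the character-set size are illustrative, not DeepSpeech's actual hyperparameters.

```python
import torch
import torch.nn as nn

class CharLevelASR(nn.Module):
    """Character-level RNN: audio features in, character log-probs out."""
    def __init__(self, n_features=26, n_hidden=512, n_chars=29):
        super().__init__()
        self.pre = nn.Sequential(            # per-frame dense layers
            nn.Linear(n_features, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
        )
        self.lstm = nn.LSTM(n_hidden, n_hidden, batch_first=True)  # LSTM, not GRU
        self.out = nn.Linear(n_hidden, n_chars)  # characters + CTC blank

    def forward(self, x):                    # x: (batch, time, n_features)
        h = self.pre(x)
        h, _ = self.lstm(h)
        return self.out(h).log_softmax(-1)   # log-probs for nn.CTCLoss

# Training would pair this output with nn.CTCLoss on character targets.
```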
1. Use the latest Mozilla DeepSpeech ASR engine, which ships with a .tflite (TensorFlow Lite) model that runs faster than real time on a single core of a Raspberry Pi 4, and build our own audio transcription application with a hot-word detection function (a minimal usage sketch follows this list).
2. Generate highly reliable confidence scores. It must be possible to retrieve a confidence score for each English word and for each English sentence in a transcription of audio into text (word-level and sentence-level scores; see the confidence sketch after this list).
3. Support a keyword-spotting mode that recognizes key phrases in a continuous audio stream. The user can configure a list of key phrases to search for and specify a detection threshold for each. This mode should work reliably in continuous speech and be usable for keyword activation, equivalent to pocketsphinx's -kws and -keyphrase options (the corresponding methods are ps_set_keyphrase and ps_set_kws); a keyword-spotting sketch follows this list.
4. Accurately recognize at least a thousand commands and controls that the user can define in a simple text editor, in keyword-spotting or keyword-activation mode (same as requirement No. 3, at 3% WER; the keyword-spotting sketch below loads its phrase list from such a file).
5. Mozilla's DeepSpeech word error rate on LibriSpeech's test-clean set is 6.5%; this project targets improving that to below 6%, and to below 3% WER for key phrases and keywords, close to human-level performance (a reference WER implementation follows the list).
6. Use automatic phoneme alignment, VAD (voice activity detection), and other methods to detect the start time, end time, and position of each recognized phoneme, word, and sentence, with an output data structure readable in real time that holds the complete set of phoneme position information (see the timing sketch after the list). Forced alignment is the process by which orthographic transcriptions are aligned to audio recordings to automatically generate phone-level segmentation, as described at the following link: [login to view URL].
When developing an ASR system, "good initial estimates … are essential" for training Gaussian Mixture Model (GMM) parameters (Rabiner and Juang, 1993, p. 370). Phoneme location information is also critical when building concatenative text-to-speech systems.
7. Implement the CTC decoder as an important optimization, integrating an appropriate language model into the decoder (the scorer-based variant is sketched after this list).
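For requirement 1, a minimal sketch using the deepspeech Python package (assuming v0.9.x, where hot-word boosting via addHotWord is available); the model, scorer, and WAV file names are placeholders:

```python
import wave
import numpy as np
import deepspeech

model = deepspeech.Model("deepspeech-0.9.3-models.tflite")
model.enableExternalScorer("deepspeech-0.9.3-models.scorer")
model.addHotWord("activate", 10.0)   # boost a hot word in the decoder

with wave.open("utterance.wav", "rb") as w:   # 16 kHz, 16-bit mono PCM
    audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

print(model.stt(audio))              # plain transcription
```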
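For requirement 2, the stock API reports one confidence value per candidate transcript via sttWithMetadata(); per-word confidence is not exposed, so delivering reliable word-level scores is part of this project's decoder work. A sketch of what the API does provide, continuing from the previous sketch:

```python
metadata = model.sttWithMetadata(audio, 3)    # request top-3 candidates
for t in metadata.transcripts:
    text = "".join(tok.text for tok in t.tokens)
    print(f"{t.confidence:.2f}  {text}")      # per-transcript confidence
```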
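For requirements 3 and 4, DeepSpeech has no built-in equivalent of pocketsphinx's ps_set_kws, so this sketch matches phrases against intermediate decodes of a stream. The file format (one tab-separated "phrase, threshold" pair per line, editable in any text editor) and both helper names are our own assumptions; the per-phrase thresholds would hook into the confidence work from requirement 2.

```python
def load_keyphrases(path):
    # each line: "<phrase>\t<threshold>"
    phrases = {}
    with open(path) as f:
        for line in f:
            phrase, _, thr = line.strip().partition("\t")
            if phrase:
                phrases[phrase.lower()] = float(thr or 0.0)
    return phrases

def kws_stream(model, chunks, phrases):
    """Yield (phrase, partial_text) whenever a key phrase appears."""
    stream = model.createStream()
    for chunk in chunks:                 # chunk: np.int16 audio block
        stream.feedAudioContent(chunk)
        text = stream.intermediateDecode()
        for phrase in phrases:
            if phrase in text:
                yield phrase, text
    stream.finishStream()
```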
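The WER targets in requirements 4 and 5 follow the standard definition: Levenshtein distance between the reference and hypothesis word sequences, divided by the reference length. A self-contained implementation:

```python
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dynamic-programming edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

assert round(wer("the cat sat", "the cat sat down"), 2) == 0.33
```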
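For requirement 6, word-level start/end times can be derived from the character tokens returned by sttWithMetadata(); a word's end is approximated here by the start time of the following space token. Phone-level segmentation is not provided by DeepSpeech and would come from the separate forced-alignment step. The helper name is ours; model and audio come from the earlier sketch.

```python
def word_timings(transcript):
    """Group character tokens into (word, start_s, end_s) tuples."""
    words, chars, start = [], [], None
    for tok in transcript.tokens:        # each token is one character
        if tok.text == " ":
            if chars:
                words.append(("".join(chars), start, tok.start_time))
                chars, start = [], None
        else:
            if not chars:
                start = tok.start_time
            chars.append(tok.text)
    if chars:                            # last word: approximate end
        words.append(("".join(chars), start, transcript.tokens[-1].start_time))
    return words

meta = model.sttWithMetadata(audio, 1)
for word, t0, t1 in word_timings(meta.transcripts[0]):
    print(f"{t0:6.2f}s - {t1:6.2f}s  {word}")
```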
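For requirement 7, in terms of the stock DeepSpeech API the CTC beam-search decoder is already steered by an external KenLM scorer; alpha (language-model weight), beta (word-insertion weight), and beam width are the main knobs. The values below are placeholders to be tuned on a dev set; a custom decoder would replace or extend this integration.

```python
model.enableExternalScorer("deepspeech-0.9.3-models.scorer")
model.setScorerAlphaBeta(0.93, 1.18)   # LM weight / word-insertion weight
model.setBeamWidth(1024)               # wider beam: slower, more accurate
print(model.stt(audio))
```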