PyThaiASR
Python Thai Automatic Speech Recognition
<a href="https://pypi.python.org/pypi/pythaiasr"><img alt="pypi" src="https://img.shields.io/pypi/v/pythaiasr.svg"/></a><a href="https://opensource.org/licenses/Apache-2.0"><img alt="License" src="https://img.shields.io/badge/License-Apache%202.0-blue.svg"/></a><a href="https://pepy.tech/project/pythaiasr"><img alt="Download" src="https://pepy.tech/badge/pythaiasr/month"/></a>
PyThaiASR is a Python package for Automatic Speech Recognition, with a focus on the Thai language. It provides an offline Thai automatic speech recognition model.
License: Apache-2.0 License
Google Colab: Link Google colab
Model homepage: https://huggingface.co/airesearch/wav2vec2-large-xlsr-53-th
Install
pip install pythaiasr
For Wav2Vec2 with a language model: if you want to use a wannaphong/wav2vec2-large-xlsr-53-th-cv8-* model with its language model, you need to install the following:
pip install pythaiasr[lm]
pip install https://github.com/kpu/kenlm/archive/refs/heads/master.zip
For live audio streaming: if you want to stream live audio from a microphone or sound card, you need to install the PyAudio extra:
pip install pythaiasr[stream]
Usage
File-based ASR
from pythaiasr import asr
file = "a.wav"
print(asr(file))
Live Audio Streaming
Stream audio directly from your microphone/soundcard:
from pythaiasr import stream_asr

# Stream audio and print transcriptions in real time
for transcription in stream_asr(chunk_duration=5.0):
    print(transcription)
# Press Ctrl+C to stop
API
asr
asr(data: str, model: str = _model_name, lm: bool=False, device: str=None, sampling_rate: int=16_000)
- data: path to a sound file, or a numpy array of the audio
- model: the ASR model name (see the options below)
- lm: use the language model (not supported by the airesearch/wav2vec2-large-xlsr-53-th model)
- device: device to run inference on (e.g. "cpu" or "cuda")
- sampling_rate: the sample rate of the audio
- return: Thai text from the ASR model
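Since `asr` accepts either a file path or a numpy array, the input format matters: the models expect 16 kHz audio. A minimal sketch that writes a one-second 16 kHz mono WAV with only the standard library (silence as a stand-in for real speech; the commented-out `asr` call assumes `pythaiasr` is installed):

```python
import struct
import wave

# Write one second of 16 kHz mono 16-bit silence as a
# stand-in for a real recording.
with wave.open("sample.wav", "wb") as f:
    f.setnchannels(1)        # mono
    f.setsampwidth(2)        # 16-bit PCM
    f.setframerate(16_000)   # the sample rate the models expect
    f.writeframes(struct.pack("<16000h", *([0] * 16000)))

# from pythaiasr import asr
# print(asr("sample.wav", sampling_rate=16_000))
```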
stream_asr
stream_asr(model: str = _model_name, lm: bool=False, device: str=None, chunk_duration: float=5.0, sampling_rate: int=16_000)
- model: the ASR model name (see the options below)
- lm: use the language model (not supported by the airesearch/wav2vec2-large-xlsr-53-th model)
- device: device to run inference on (e.g. "cpu" or "cuda")
- chunk_duration: duration of each audio chunk in seconds (default: 5.0)
- sampling_rate: the sample rate (default: 16000)
- yield: Thai text transcription of each audio chunk
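Each chunk handed to the recognizer contains `chunk_duration * sampling_rate` samples, so the defaults transcribe 5 seconds of 16 kHz audio at a time; shorter chunks give faster feedback, longer chunks give the model more context:

```python
# Samples per transcription step with the default settings.
chunk_duration = 5.0     # seconds (stream_asr default)
sampling_rate = 16_000   # Hz (stream_asr default)

samples_per_chunk = int(chunk_duration * sampling_rate)
print(samples_per_chunk)  # 80000
```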
Options for model
- airesearch/wav2vec2-large-xlsr-53-th (default) - AI RESEARCH - PyThaiNLP model
- wannaphong/wav2vec2-large-xlsr-53-th-cv8-newmm - Thai Wav2Vec2 with CommonVoice V8 (newmm tokenizer)
- wannaphong/wav2vec2-large-xlsr-53-th-cv8-deepcut - Thai Wav2Vec2 with CommonVoice V8 (deepcut tokenizer)
- biodatlab/whisper-small-th-combined - Thai Whisper small model
- biodatlab/whisper-th-medium-combined - Thai Whisper medium model
- biodatlab/whisper-th-large-combined - Thai Whisper large model
You can read more about each model from the list below:
- airesearch/wav2vec2-large-xlsr-53-th - AI RESEARCH - PyThaiNLP model
- wannaphong/wav2vec2-large-xlsr-53-th-cv8-newmm - Thai Wav2Vec2 with CommonVoice V8 (newmm tokenizer) + language model
- wannaphong/wav2vec2-large-xlsr-53-th-cv8-deepcut - Thai Wav2Vec2 with CommonVoice V8 (deepcut tokenizer) + language model
- biodatlab/whisper-small-th-combined - Thai Whisper small model
- biodatlab/whisper-th-medium-combined - Thai Whisper medium model
- biodatlab/whisper-th-large-combined - Thai Whisper large model
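The Whisper variants above trade speed for accuracy (small is fastest, large is most accurate). One way to make that choice explicit is a small helper whose result you pass to `asr` as the `model` argument (the helper is illustrative, not part of the PyThaiASR API):

```python
def pick_model(has_gpu: bool) -> str:
    """Pick a Thai Whisper model from the list above:
    the large model when a GPU is available, otherwise small."""
    if has_gpu:
        return "biodatlab/whisper-th-large-combined"
    return "biodatlab/whisper-small-th-combined"

print(pick_model(False))  # biodatlab/whisper-small-th-combined
```

Call it as, for example, `asr("a.wav", model=pick_model(True))`; the model weights are downloaded from Hugging Face on first use.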
Docker
To use PyThaiASR inside Docker, do the following:
docker build -t <Your Tag name> .
docker run --entrypoint /bin/bash -it <Your Tag name>
You will then get an interactive shell environment where you can use Python with all packages installed.
