p4thakur / stable-ts

This project forked from jianfch/stable-ts


ASR with reliable word-level timestamps using OpenAI's Whisper

License: MIT License


Stabilizing Timestamps for Whisper

This script modifies OpenAI's Whisper to produce more reliable timestamps.

(demo video: a.mp4)

Setup

pip install -U stable-ts

To install the latest commit:

pip install -U git+https://github.com/jianfch/stable-ts.git

Usage

The following is a list of CLI usages, each followed by its corresponding Python usage (if one exists).

Transcribe

stable-ts audio.mp3 -o audio.srt
import stable_whisper
model = stable_whisper.load_model('base')
result = model.transcribe('audio.mp3')
result.to_srt_vtt('audio.srt')

Parameters: load_model(), transcribe()

Output

Stable-ts supports various text output formats.

result.to_srt_vtt('audio.srt') #SRT
result.to_srt_vtt('audio.vtt') #VTT
result.to_ass('audio.ass') #ASS
result.to_tsv('audio.tsv') #TSV

Parameters: to_srt_vtt(), to_ass(), to_tsv()

Both word-level and segment-level timestamps are available, and all output formats support them. All formats except TSV also support both levels simultaneously. By default, segment_level and word_level are both True for all the formats that support both simultaneously.

Examples in VTT.

Default: segment_level=True + word_level=True or --segment_level true + --word_level true for CLI

00:00:07.760 --> 00:00:09.900
But<00:00:07.860> when<00:00:08.040> you<00:00:08.280> arrived<00:00:08.580> at<00:00:08.800> that<00:00:09.000> distant<00:00:09.400> world,

segment_level=True + word_level=False (Note: segment_level=True is default)

00:00:07.760 --> 00:00:09.900
But when you arrived at that distant world,

segment_level=False + word_level=True (Note: word_level=True is default)

00:00:07.760 --> 00:00:07.860
But

00:00:07.860 --> 00:00:08.040
when

00:00:08.040 --> 00:00:08.280
you

00:00:08.280 --> 00:00:08.580
arrived

...
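The inline tags in the default example above are WebVTT timestamp tags. As a minimal, self-contained sketch (not stable-ts internals; `vtt_cue` and the word triples are made up for illustration), a combined segment+word cue can be assembled from `(word, start, end)` timings like so:

```python
def fmt(t: float) -> str:
    # Format seconds as an HH:MM:SS.mmm WebVTT timestamp
    h, rem = divmod(t, 3600)
    m, s = divmod(rem, 60)
    return f"{int(h):02d}:{int(m):02d}:{s:06.3f}"

def vtt_cue(words):
    # words: list of (text, start, end) triples in time order.
    # The cue spans the first word's start to the last word's end; each word
    # except the last is followed by an inline tag at its end time, which is
    # when the next word begins.
    header = f"{fmt(words[0][1])} --> {fmt(words[-1][2])}"
    parts = [f"{w}<{fmt(e)}>" for w, _, e in words[:-1]] + [words[-1][0]]
    return header + "\n" + " ".join(parts)

print(vtt_cue([("But", 7.76, 7.86), ("when", 7.86, 8.04), ("you", 8.04, 8.28)]))
```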

JSON

The result can also be saved as a JSON file to preserve all the data for future reprocessing. This is useful for testing different sets of postprocessing arguments without the need to redo inference.

stable-ts audio.mp3 -o audio.json
# Save result as JSON:
result.save_as_json('audio.json')

Processing the JSON result file into SRT:

stable-ts audio.json -o audio.srt
# Load the result:
result = stable_whisper.WhisperResult('audio.json')
result.to_srt_vtt('audio.srt')

Regrouping Words

Stable-ts has a preset for regrouping words into segments with more natural boundaries. This preset is enabled by regroup=True (default). It is a predefined combination of built-in regrouping methods, which you can also use individually to customize the regrouping logic.

(demo video: xata.mp4)
result0 = model.transcribe('audio.mp3', regroup=True) # regroup is True by default
# regroup=True is the same as the following
result1 = model.transcribe('audio.mp3', regroup=False)
(
    result1
    .split_by_punctuation([('.', ' '), '。', '?', '?', ',', ','])
    .split_by_gap(.5)
    .merge_by_gap(.15, max_words=3)
    .split_by_punctuation([('.', ' '), '。', '?', '?'])
)
# result0 == result1
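These regrouping methods operate on word timings. As a rough, self-contained illustration of the split-by-gap idea only (hypothetical code, not stable-ts's implementation), splitting a word list wherever the pause between consecutive words exceeds a threshold might look like:

```python
def split_by_gap(words, max_gap=0.5):
    # words: list of (text, start, end) tuples in time order.
    # Start a new segment whenever the pause before a word exceeds max_gap.
    segments, current = [], []
    for word in words:
        if current and word[1] - current[-1][2] > max_gap:
            segments.append(current)
            current = []
        current.append(word)
    if current:
        segments.append(current)
    return segments

words = [("Hello", 0.0, 0.4), ("world.", 0.5, 0.9), ("Next", 2.0, 2.3)]
# gap between "world." (ends 0.9) and "Next" (starts 2.0) is 1.1 s > 0.5 s
print(split_by_gap(words))
```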

Regrouping Methods

Locating Words

You can locate words with regular expressions.

# Find every sentence that contains "and"
matches = result.find(r'[^.]+and[^.]+\.')
# print all matches, if there are any
for match in matches:
  print(f'match: {match.text_match}\n'
        f'text: {match.text}\n'
        f'start: {match.start}\n'
        f'end: {match.end}\n')
  
# Find the word before and after "and" in the matches
matches = matches.find(r'\s\S+\sand\s\S+')
for match in matches:
  print(f'match: {match.text_match}\n'
        f'text: {match.text}\n'
        f'start: {match.start}\n'
        f'end: {match.end}\n')

Parameters: find()
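For intuition, the first pattern above behaves like ordinary `re` matching over the transcript text; what `find()` adds is mapping each match back to word-level timestamps. A plain-Python equivalent of just the text matching (with a made-up sentence) is:

```python
import re

text = "We packed the gear and left at dawn. The road was empty."
# Same pattern as above: every sentence that contains "and"
matches = re.findall(r'[^.]+and[^.]+\.', text)
print(matches)  # -> ['We packed the gear and left at dawn.']
```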

Boosting Performance

  • One of the methods that Stable-ts uses to increase timestamp accuracy and reduce hallucinations is silence suppression, enabled with suppress_silence=True (default). This method suppresses the timestamps where the audio is silent or contains no speech, by suppressing the corresponding tokens during inference and readjusting the timestamps after inference. To figure out which parts of the audio track are silent or contain no speech, Stable-ts supports non-VAD and VAD methods. The default is vad=False. The VAD option uses Silero VAD (requires PyTorch 1.12.0+). See Visualizing Suppression.
  • The other method, enabled with demucs=True, uses Demucs to isolate speech from the rest of the audio track. It is generally best used in conjunction with silence suppression. Although Demucs was designed for music, it is also effective at isolating speech even if the track contains no music.
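As a rough, self-contained sketch of the timestamp-readjustment half of silence suppression (hypothetical code, not stable-ts's actual implementation), a word's span can be snapped away from detected silent regions like this:

```python
def clamp_to_speech(start, end, silent_regions):
    # Shrink a word's [start, end] span away from known silent regions.
    # silent_regions: list of (s0, s1) intervals where no speech was detected.
    for s0, s1 in silent_regions:
        if s0 <= start < s1:   # word begins inside silence: push start right
            start = min(s1, end)
        if s0 < end <= s1:     # word ends inside silence: pull end left
            end = max(s0, start)
    return start, end

# Word nominally spans 1.2-2.0 s, but 0.0-1.5 s was detected as silent
print(clamp_to_speech(1.2, 2.0, [(0.0, 1.5)]))  # -> (1.5, 2.0)
```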

Visualizing Suppression

You can visualize which parts of the audio will likely be suppressed (i.e. marked as silent). Requires: Pillow or opencv-python.

Without VAD

import stable_whisper
# regions on the waveform colored red are where it will likely be suppressed and marked as silent
# q_levels=20 and k_size=5 (default)
stable_whisper.visualize_suppression('audio.mp3', 'image.png', q_levels=20, k_size=5)

With VAD

# vad_threshold=0.35 (default)
stable_whisper.visualize_suppression('audio.mp3', 'image.png', vad=True, vad_threshold=0.35)

Parameters: visualize_suppression()

Encode Comparison

You can encode videos, similar to the ones shown in this README, for comparing transcriptions of the same audio.

import stable_whisper
stable_whisper.encode_video_comparison(
    'audio.mp3', 
    ['audio_sub1.srt', 'audio_sub2.srt'], 
    output_videopath='audio.mp4', 
    labels=['Example 1', 'Example 2']
)

Parameters: encode_video_comparison()

Tips

  • For reliable segment timestamps, do not disable word timestamps with word_timestamps=False, because word timestamps are also used to correct segment timestamps.
  • Use demucs=True and vad=True for music; they also work for non-music.
  • If audio is not transcribing properly compared to Whisper, try mel_first=True, at the cost of more memory usage for long audio tracks.
  • Enable dynamic quantization to decrease memory usage for inference on CPU (it also increases inference speed for the large model): --dq true for CLI or dq=True for stable_whisper.load_model().

Multiple Files with CLI

Transcribe multiple audio files, then process the results directly into SRT files.

stable-ts audio1.mp3 audio2.mp3 audio3.mp3 -o audio1.srt audio2.srt audio3.srt

Quick 1.X → 2.X Guide

What's new in 2.0.0?

  • updated to use Whisper's more reliable word-level timestamps method.
  • the more reliable word timestamps allow regrouping all words into segments with more natural boundaries.
  • can now suppress silence with Silero VAD (requires PyTorch 1.12.0+)
  • non-VAD silence suppression is also more robust

Usage changes

  • results_to_sentence_srt(result, 'audio.srt') → result.to_srt_vtt('audio.srt', word_level=False)
  • results_to_word_srt(result, 'audio.srt') → result.to_srt_vtt('audio.srt', segment_level=False)
  • results_to_sentence_word_ass(result, 'audio.ass') → result.to_ass('audio.ass')
  • there is no need to stabilize segments after inference because they are already stabilized during inference
  • transcribe() returns a WhisperResult object, which can be converted to a dict with .to_dict(), e.g. result.to_dict()

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

Includes slight modification of the original work: Whisper
