
Stabilizing Timestamps for Whisper

This library modifies Whisper to produce more reliable timestamps and extends its functionality.

demo1.mp4

Setup

Prerequisites: FFmpeg & PyTorch
FFmpeg

Requires FFmpeg in PATH

# on Ubuntu or Debian
sudo apt update && sudo apt install ffmpeg

# on Arch Linux
sudo pacman -S ffmpeg

# on MacOS using Homebrew (https://brew.sh/)
brew install ffmpeg

# on Windows using Chocolatey (https://chocolatey.org/)
choco install ffmpeg

# on Windows using Scoop (https://scoop.sh/)
scoop install ffmpeg
PyTorch

If PyTorch is not already installed when installing Stable-ts, the default version will be installed, which may not have GPU support. To avoid this issue, first install your preferred version by following the instructions at https://pytorch.org/get-started/locally/.
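
After installing PyTorch, an optional sanity check can confirm whether the installed build can actually see a GPU:

import torch
print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # False means Stable-ts will run on CPU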

pip install -U stable-ts

To install the latest commit:

pip install -U git+https://github.com/jianfch/stable-ts.git
Whisperless Version

To install Stable-ts without Whisper as a dependency:

pip install -U stable-ts-whisperless

To install the latest Whisperless commit:

pip install -U git+https://github.com/jianfch/stable-ts.git@whisperless

Usage

Transcribe

import stable_whisper
model = stable_whisper.load_model('base')
result = model.transcribe('audio.mp3')
result.to_srt_vtt('audio.srt')
CLI
stable-ts audio.mp3 -o audio.srt

Docstrings:

load_model()
Load an instance of :class:`whisper.model.Whisper`.

Parameters
----------
name : {'tiny', 'tiny.en', 'base', 'base.en', 'small', 'small.en', 'medium', 'medium.en', 'large-v1',
    'large-v2', 'large-v3', or 'large'}
    One of the official model names listed by :func:`whisper.available_models`, or
    path to a model checkpoint containing the model dimensions and the model state_dict.
device : str or torch.device, optional
    PyTorch device to put the model into.
download_root : str, optional
    Path to download the model files; by default, it uses "~/.cache/whisper".
in_memory : bool, default False
    Whether to preload the model weights into host memory.
cpu_preload : bool, default True
    Load the model into CPU memory first, then move it to the specified device
    to reduce GPU memory usage when loading the model.
dq : bool, default False
    Whether to apply Dynamic Quantization to the model to reduce memory usage and increase inference speed,
    but at the cost of a slight decrease in accuracy. Only for CPU.
engine : str, optional
    Engine for Dynamic Quantization.

Returns
-------
model : "Whisper"
    The Whisper ASR model instance.

Notes
-----
The overhead from ``dq = True`` might make inference slower for models smaller than 'large'.
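
A minimal sketch using the parameters above (the device-selection logic is illustrative, not part of the library):

import torch
import stable_whisper

device = 'cuda' if torch.cuda.is_available() else 'cpu'
# `dq` only applies to CPU inference, so enable it only when no GPU is available
model = stable_whisper.load_model('base', device=device, dq=(device == 'cpu'))
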
transcribe()
Transcribe audio using Whisper.

This is a modified version of :func:`whisper.transcribe.transcribe` with slightly different decoding logic while
allowing additional preprocessing and postprocessing. The preprocessing performed on the audio includes:
voice isolation / noise removal and low/high-pass filter. The postprocessing performed on the transcription
result includes: adjusting timestamps with VAD and custom regrouping of segments based on punctuation and speech gaps.

Parameters
----------
model : whisper.model.Whisper
    An instance of Whisper ASR model.
audio : str or numpy.ndarray or torch.Tensor or bytes or AudioLoader
    Path/URL to the audio file, the audio waveform, or bytes of audio file or
    instance of :class:`stable_whisper.audio.AudioLoader`.
    If audio is :class:`numpy.ndarray` or :class:`torch.Tensor`, the audio must already be sampled at 16kHz.
verbose : bool or None, default False
    Whether to display the text being decoded to the console.
    Displays all the details if ``True``. Displays a progress bar if ``False``. Displays nothing if ``None``.
temperature : float or iterable of float, default (0.0, 0.2, 0.4, 0.6, 0.8, 1.0)
    Temperature for sampling. It can be a tuple of temperatures, which will be successively used
    upon failures according to either ``compression_ratio_threshold`` or ``logprob_threshold``.
compression_ratio_threshold : float, default 2.4
    If the gzip compression ratio is above this value, treat as failed.
logprob_threshold : float, default -1
    If the average log probability over sampled tokens is below this value, treat as failed.
no_speech_threshold : float, default 0.6
    If the no_speech probability is higher than this value AND the average log probability
    over sampled tokens is below ``logprob_threshold``, consider the segment as silent.
condition_on_previous_text : bool, default True
    If ``True``, the previous output of the model is provided as a prompt for the next window;
    disabling may make the text inconsistent across windows, but the model becomes less prone to
    getting stuck in a failure loop, such as repetition looping or timestamps going out of sync.
initial_prompt : str, optional
    Text to provide as a prompt for the first window. This can be used to provide, or
    "prompt-engineer" a context for transcription, e.g. custom vocabularies or proper nouns
    to make it more likely to predict those words correctly.
word_timestamps : bool, default True
    Extract word-level timestamps using the cross-attention pattern and dynamic time warping,
    and include the timestamps for each word in each segment.
    Disabling this will prevent segments from splitting/merging properly.
regroup : bool or str, default True, meaning the default regroup algorithm
    String for customizing the regrouping algorithm. False disables regrouping.
    Ignored if ``word_timestamps = False``.
suppress_silence : bool, default True
    Whether to enable timestamps adjustments based on the detected silence.
suppress_word_ts : bool, default True
    Whether to adjust word timestamps based on the detected silence. Only enabled if ``suppress_silence = True``.
use_word_position : bool, default True
    Whether to use position of the word in its segment to determine whether to keep end or start timestamps if
    adjustments are required. If it is the first word, keep end. Else if it is the last word, keep the start.
q_levels : int, default 20
    Quantization levels for generating the timestamp suppression mask; ignored if ``vad = True``.
    Acts as a threshold for marking sound as silent.
    Fewer levels will increase the volume threshold at which sound is marked as silent.
k_size : int, default 5
    Kernel size for average-pooling the waveform to generate the timestamp suppression mask; ignored if ``vad = True``.
    Recommended values are 5 or 3; higher sizes will reduce detection of silence.
denoiser : str, optional
    String of the denoiser to use for preprocessing ``audio``.
    See ``stable_whisper.audio.SUPPORTED_DENOISERS`` for supported denoisers.
denoiser_options : dict, optional
    Options to use for ``denoiser``.
vad : bool or dict, default False
    Whether to use Silero VAD to generate timestamp suppression mask.
    Instead of ``True``, using a dict of keyword arguments will load the VAD with the arguments.
    Silero VAD requires PyTorch 1.12.0+. Official repo, https://github.com/snakers4/silero-vad.
vad_threshold : float, default 0.35
    Threshold for detecting speech with Silero VAD. Low threshold reduces false positives for silence detection.
min_word_dur : float or None, default None meaning use ``stable_whisper.default.DEFAULT_VALUES``
    Shortest duration each word is allowed to reach for silence suppression.
min_silence_dur : float, optional
    Shortest duration of silence allowed for silence suppression.
nonspeech_error : float, default 0.1
    Relative error of non-speech sections that appear in between a word for silence suppression.
only_voice_freq : bool, default False
    Whether to only use sound between 200 - 5000 Hz, where the majority of human speech lies.
prepend_punctuations : str or None, default None meaning use ``stable_whisper.default.DEFAULT_VALUES``
    Punctuations to prepend to next word.
append_punctuations : str or None, default None meaning use ``stable_whisper.default.DEFAULT_VALUES``
    Punctuations to append to previous word.
stream : bool or None, default None
    Whether to load ``audio`` in chunks of 30 seconds until the end of the file/stream.
    If ``None`` and ``audio`` is a string then set to ``True`` else ``False``.
mel_first : bool, optional
    Whether to process the entire audio track into a log-Mel spectrogram first instead of in chunks.
    Use this if odd behavior is seen in stable-ts but not in whisper; it uses significantly more memory for long audio.
split_callback : Callable, optional
    Custom callback for grouping tokens up with their corresponding words.
    The callback must take two arguments, list of tokens and tokenizer.
    The callback returns a tuple with a list of words and a corresponding nested list of tokens.
suppress_ts_tokens : bool, default False
    Whether to suppress timestamp tokens during inference for timestamps that are detected as silent.
    Reduces hallucinations in some cases, but is also prone to ignoring disfluencies and repetitions.
    This option is ignored if ``suppress_silence = False``.
gap_padding : str, default ' ...'
    Padding prepended to each segment for word timing alignment.
    Used to reduce the probability of model predicting timestamps earlier than the first utterance.
only_ffmpeg : bool, default False
    Whether to use only FFmpeg (instead of yt-dlp) for URLs.
max_instant_words : float, default 0.5
    If the percentage of instantaneous words in a segment exceeds this amount, the segment is removed.
avg_prob_threshold: float or None, default None
    Transcribe the gap after the previous word, and if the average word probability of a segment falls below this
    value, discard the segment. If ``None``, skip transcribing the gap to reduce the chance of timestamps starting
    before the next utterance.
progress_callback : Callable, optional
    A function that will be called when transcription progress is updated.
    The callback needs two parameters.
    The first parameter is a float for seconds of the audio that has been transcribed.
    The second parameter is a float for total duration of audio in seconds.
ignore_compatibility : bool, default False
    Whether to ignore warnings for compatibility issues with the detected Whisper version.
extra_models : list of whisper.model.Whisper, optional
    List of additional Whisper model instances to use for computing word-timestamps along with ``model``.
decode_options
    Keyword arguments to construct :class:`whisper.decode.DecodingOptions` instances.

Returns
-------
stable_whisper.result.WhisperResult
    All timestamps, words, probabilities, and other data from the transcription of ``audio``.

See Also
--------
stable_whisper.non_whisper.transcribe_any : Return :class:`stable_whisper.result.WhisperResult` containing all the
    data from transcribing audio with unmodified :func:`whisper.transcribe.transcribe` with preprocessing and
    postprocessing.
stable_whisper.whisper_word_level.faster_whisper.faster_transcribe : Return
    :class:`stable_whisper.result.WhisperResult` containing all the data from transcribing audio with
    :meth:`faster_whisper.WhisperModel.transcribe` with preprocessing and postprocessing.

Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_model('base')
>>> result = model.transcribe('audio.mp3', vad=True)
>>> result.to_srt_vtt('audio.srt')
Saved: audio.srt
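
Several of the documented parameters can be combined in one call; a sketch (values are illustrative, and ``denoiser='demucs'`` assumes Demucs is installed; see ``stable_whisper.audio.SUPPORTED_DENOISERS``):

result = model.transcribe(
    'audio.mp3',
    vad=True,                          # use Silero VAD for the timestamp suppression mask
    denoiser='demucs',                 # isolate vocals before transcribing
    only_voice_freq=True,              # only use the 200 - 5000 Hz range
    condition_on_previous_text=False,  # reduce the chance of failure loops
)
result.to_srt_vtt('audio.srt')
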
transcribe_minimal()
Transcribe audio using Whisper.

This uses the original whisper transcribe function, :func:`whisper.transcribe.transcribe`, while still allowing
additional preprocessing and postprocessing. The preprocessing performed on the audio includes: voice isolation /
noise removal and low/high-pass filter. The postprocessing performed on the transcription result includes:
adjusting timestamps with VAD and custom regrouping of segments based on punctuation and speech gaps.

Parameters
----------
model : whisper.model.Whisper
    An instance of Whisper ASR model.
audio : str or numpy.ndarray or torch.Tensor or bytes
    Path/URL to the audio file, the audio waveform, or bytes of audio file.
    If audio is ``numpy.ndarray`` or ``torch.Tensor``, the audio must already be sampled at 16kHz.
verbose : bool or None, default False
    Whether to display the text being decoded to the console.
    Displays all the details if ``True``. Displays a progress bar if ``False``. Displays nothing if ``None``.
word_timestamps : bool, default True
    Extract word-level timestamps using the cross-attention pattern and dynamic time warping,
    and include the timestamps for each word in each segment.
    Disabling this will prevent segments from splitting/merging properly.
regroup : bool or str, default True, meaning the default regroup algorithm
    String for customizing the regrouping algorithm. False disables regrouping.
    Ignored if ``word_timestamps = False``.
suppress_silence : bool, default True
    Whether to enable timestamps adjustments based on the detected silence.
suppress_word_ts : bool, default True
    Whether to adjust word timestamps based on the detected silence. Only enabled if ``suppress_silence = True``.
use_word_position : bool, default True
    Whether to use position of the word in its segment to determine whether to keep end or start timestamps if
    adjustments are required. If it is the first word, keep end. Else if it is the last word, keep the start.
q_levels : int, default 20
    Quantization levels for generating the timestamp suppression mask; ignored if ``vad = True``.
    Acts as a threshold for marking sound as silent.
    Fewer levels will increase the volume threshold at which sound is marked as silent.
k_size : int, default 5
    Kernel size for average-pooling the waveform to generate the timestamp suppression mask; ignored if ``vad = True``.
    Recommended values are 5 or 3; higher sizes will reduce detection of silence.
denoiser : str, optional
    String of the denoiser to use for preprocessing ``audio``.
    See ``stable_whisper.audio.SUPPORTED_DENOISERS`` for supported denoisers.
denoiser_options : dict, optional
    Options to use for ``denoiser``.
vad : bool or dict, default False
    Whether to use Silero VAD to generate timestamp suppression mask.
    Instead of ``True``, using a dict of keyword arguments will load the VAD with the arguments.
    Silero VAD requires PyTorch 1.12.0+. Official repo, https://github.com/snakers4/silero-vad.
vad_threshold : float, default 0.35
    Threshold for detecting speech with Silero VAD. Low threshold reduces false positives for silence detection.
min_word_dur : float, default 0.1
    Shortest duration each word is allowed to reach for silence suppression.
min_silence_dur : float, optional
    Shortest duration of silence allowed for silence suppression.
nonspeech_error : float, default 0.1
    Relative error of non-speech sections that appear in between a word for silence suppression.
only_voice_freq : bool, default False
    Whether to only use sound between 200 - 5000 Hz, where the majority of human speech lies.
only_ffmpeg : bool, default False
    Whether to use only FFmpeg (instead of yt-dlp) for URLs.
options
    Additional options used for :func:`whisper.transcribe.transcribe` and
    :func:`stable_whisper.non_whisper.transcribe_any`.
Returns
-------
stable_whisper.result.WhisperResult
    All timestamps, words, probabilities, and other data from the transcription of ``audio``.

Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_model('base')
>>> result = model.transcribe_minimal('audio.mp3', vad=True)
>>> result.to_srt_vtt('audio.srt')
Saved: audio.srt

faster-whisper

Use with faster-whisper:

model = stable_whisper.load_faster_whisper('base')
result = model.transcribe_stable('audio.mp3')
stable-ts audio.mp3 -o audio.srt -fw

Docstring:

load_faster_whisper()
Load an instance of :class:`faster_whisper.WhisperModel`.

Parameters
----------
model_size_or_path : {'tiny', 'tiny.en', 'base', 'base.en', 'small', 'small.en', 'medium', 'medium.en', 'large-v1',
    'large-v2', 'large-v3', or 'large'}
    Size of the model.

model_init_options
    Additional options to use for initialization of :class:`faster_whisper.WhisperModel`.

Returns
-------
faster_whisper.WhisperModel
    A modified instance with :func:`stable_whisper.whisper_word_level.load_faster_whisper.faster_transcribe`
    assigned to :meth:`faster_whisper.WhisperModel.transcribe_stable`.
transcribe_stable()
Transcribe audio using faster-whisper (https://github.com/guillaumekln/faster-whisper).

This uses the transcribe method from faster-whisper, :meth:`faster_whisper.WhisperModel.transcribe`, while
still allowing additional preprocessing and postprocessing. The preprocessing performed on the audio includes:
voice isolation / noise removal and low/high-pass filter. The postprocessing performed on the
transcription result includes: adjusting timestamps with VAD and custom regrouping of segments based on punctuation
and speech gaps.

Parameters
----------
model : faster_whisper.WhisperModel
    The faster-whisper ASR model instance.
audio : str or numpy.ndarray or torch.Tensor or bytes
    Path/URL to the audio file, the audio waveform, or bytes of audio file.
    If audio is :class:`numpy.ndarray` or :class:`torch.Tensor`, the audio must already be sampled at 16kHz.
verbose : bool or None, default False
    Whether to display the text being decoded to the console.
    Displays all the details if ``True``. Displays a progress bar if ``False``. Displays nothing if ``None``.
word_timestamps : bool, default True
    Extract word-level timestamps using the cross-attention pattern and dynamic time warping,
    and include the timestamps for each word in each segment.
    Disabling this will prevent segments from splitting/merging properly.
regroup : bool or str, default True, meaning the default regroup algorithm
    String for customizing the regrouping algorithm. False disables regrouping.
    Ignored if ``word_timestamps = False``.
suppress_silence : bool, default True
    Whether to enable timestamps adjustments based on the detected silence.
suppress_word_ts : bool, default True
    Whether to adjust word timestamps based on the detected silence. Only enabled if ``suppress_silence = True``.
use_word_position : bool, default True
    Whether to use position of the word in its segment to determine whether to keep end or start timestamps if
    adjustments are required. If it is the first word, keep end. Else if it is the last word, keep the start.
q_levels : int, default 20
    Quantization levels for generating the timestamp suppression mask; ignored if ``vad = True``.
    Acts as a threshold for marking sound as silent.
    Fewer levels will increase the volume threshold at which sound is marked as silent.
k_size : int, default 5
    Kernel size for average-pooling the waveform to generate the timestamp suppression mask; ignored if ``vad = True``.
    Recommended values are 5 or 3; higher sizes will reduce detection of silence.
denoiser : str, optional
    String of the denoiser to use for preprocessing ``audio``.
    See ``stable_whisper.audio.SUPPORTED_DENOISERS`` for supported denoisers.
denoiser_options : dict, optional
    Options to use for ``denoiser``.
vad : bool or dict, default False
    Whether to use Silero VAD to generate timestamp suppression mask.
    Instead of ``True``, using a dict of keyword arguments will load the VAD with the arguments.
    Silero VAD requires PyTorch 1.12.0+. Official repo, https://github.com/snakers4/silero-vad.
vad_threshold : float, default 0.35
    Threshold for detecting speech with Silero VAD. Low threshold reduces false positives for silence detection.
min_word_dur : float or None, default None meaning use ``stable_whisper.default.DEFAULT_VALUES``
    Shortest duration each word is allowed to reach for silence suppression.
min_silence_dur : float, optional
    Shortest duration of silence allowed for silence suppression.
nonspeech_error : float, default 0.3
    Relative error of non-speech sections that appear in between a word for silence suppression.
only_voice_freq : bool, default False
    Whether to only use sound between 200 - 5000 Hz, where the majority of human speech lies.
only_ffmpeg : bool, default False
    Whether to use only FFmpeg (instead of yt-dlp) for URLs.
check_sorted : bool, default True
    Whether to raise an error when timestamps returned by faster-whisper are not in ascending order.
progress_callback : Callable, optional
    A function that will be called when transcription progress is updated.
    The callback needs two parameters.
    The first parameter is a float for seconds of the audio that has been transcribed.
    The second parameter is a float for total duration of audio in seconds.
options
    Additional options used for :meth:`faster_whisper.WhisperModel.transcribe` and
    :func:`stable_whisper.non_whisper.transcribe_any`.

Returns
-------
stable_whisper.result.WhisperResult
    All timestamps, words, probabilities, and other data from the transcription of ``audio``.

Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_faster_whisper('base')
>>> result = model.transcribe_stable('audio.mp3', vad=True)
>>> result.to_srt_vtt('audio.srt')
Saved: audio.srt
Hugging Face Transformers (~9x faster)

Run Whisper up to 9x faster with Hugging Face Transformers:

model = stable_whisper.load_hf_whisper('base')
result = model.transcribe('audio.mp3')
CLI
NOT IMPLEMENTED YET

Output

output_demo.mp4

Stable-ts supports various text output formats.

result.to_srt_vtt('audio.srt') #SRT
result.to_srt_vtt('audio.vtt') #VTT
result.to_ass('audio.ass') #ASS
result.to_tsv('audio.tsv') #TSV

Docstrings:

result_to_srt_vtt()
Generate SRT/VTT from ``result`` to display segment-level and/or word-level timestamps.

Parameters
----------
result : dict or list or stable_whisper.result.WhisperResult
    Result of transcription.
filepath : str, default None, meaning content will be returned as a ``str``
    Path to save file.
segment_level : bool, default True
    Whether to use segment-level timestamps in output.
word_level : bool, default True
    Whether to use word-level timestamps in output.
min_dur : float, default 0.2
    Minimum duration allowed for any word/segment before the word/segments are merged with adjacent word/segments.
tag: tuple of (str, str), default None, meaning ('<font color="#00ff00">', '</font>') if SRT else ('<u>', '</u>')
    Tag used to change the properties of a word at its timestamp.
vtt : bool, default None, meaning determined by extension of ``filepath`` or ``False`` if no valid extension.
    Whether to output VTT.
strip : bool, default True
    Whether to remove spaces before and after text on each segment for output.
reverse_text: bool or tuple, default False
    Whether to reverse the order of words for each segment, or provide the ``prepend_punctuations`` and
    ``append_punctuations`` as a tuple pair instead of ``True``, which uses the default punctuations.

Returns
-------
str
    String of the content if ``filepath`` is ``None``.

Notes
-----
``reverse_text`` will not fix RTL text not displaying tags properly, which is an issue with some video players. VLC
does not seem to suffer from this issue.

Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_model('base')
>>> result = model.transcribe('audio.mp3')
>>> result.to_srt_vtt('audio.srt')
Saved: audio.srt
result_to_ass()
Generate an Advanced SubStation Alpha (ASS) file from ``result`` to display segment-level and/or word-level timestamps.

Parameters
----------
result : dict or list or stable_whisper.result.WhisperResult
    Result of transcription.
filepath : str, default None, meaning content will be returned as a ``str``
    Path to save file.
segment_level : bool, default True
    Whether to use segment-level timestamps in output.
word_level : bool, default True
    Whether to use word-level timestamps in output.
min_dur : float, default 0.2
    Minimum duration allowed for any word/segment before the word/segments are merged with adjacent word/segments.
tag: tuple of (str, str) or int, default None, meaning use default highlighting
    Tag used to change the properties of a word at its timestamp. Use -1 for the individual word highlight tag.
font : str, default `Arial`
    Word font.
font_size : int, default 48
    Word font size.
strip : bool, default True
    Whether to remove spaces before and after text on each segment for output.
highlight_color : str, default '00ff00'
    Hexadecimal of the color used for default highlights, as '<bb><gg><rr>'.
karaoke : bool, default False
    Whether to use progressive filling highlights (for karaoke effect).
reverse_text: bool or tuple, default False
    Whether to reverse the order of words for each segment, or provide the ``prepend_punctuations`` and
    ``append_punctuations`` as a tuple pair instead of ``True``, which uses the default punctuations.
kwargs:
    Format styles:
    'Name', 'Fontname', 'Fontsize', 'PrimaryColour', 'SecondaryColour', 'OutlineColour', 'BackColour', 'Bold',
    'Italic', 'Underline', 'StrikeOut', 'ScaleX', 'ScaleY', 'Spacing', 'Angle', 'BorderStyle', 'Outline',
    'Shadow', 'Alignment', 'MarginL', 'MarginR', 'MarginV', 'Encoding'

Returns
-------
str
    String of the content if ``filepath`` is ``None``.

Notes
-----
``reverse_text`` will not fix RTL text not displaying tags properly, which is an issue with some video players. VLC
does not seem to suffer from this issue.

Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_model('base')
>>> result = model.transcribe('audio.mp3')
>>> result.to_ass('audio.ass')
Saved: audio.ass
result_to_tsv()
Generate TSV from ``result`` to display segment-level and/or word-level timestamps.

Parameters
----------
result : dict or list or stable_whisper.result.WhisperResult
    Result of transcription.
filepath : str, default None, meaning content will be returned as a ``str``
    Path to save file.
segment_level : bool, default True
    Whether to use segment-level timestamps in output.
word_level : bool, default True
    Whether to use word-level timestamps in output.
min_dur : float, default 0.2
    Minimum duration allowed for any word/segment before the word/segments are merged with adjacent word/segments.
strip : bool, default True
    Whether to remove spaces before and after text on each segment for output.
reverse_text: bool or tuple, default False
    Whether to reverse the order of words for each segment, or provide the ``prepend_punctuations`` and
    ``append_punctuations`` as a tuple pair instead of ``True``, which uses the default punctuations.

Returns
-------
str
    String of the content if ``filepath`` is ``None``.

Notes
-----
``reverse_text`` will not fix RTL text not displaying tags properly, which is an issue with some video players. VLC
does not seem to suffer from this issue.

Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_model('base')
>>> result = model.transcribe('audio.mp3')
>>> result.to_tsv('audio.tsv')
Saved: audio.tsv
result_to_txt()
Generate plain-text without timestamps from ``result``.

Parameters
----------
result : dict or list or stable_whisper.result.WhisperResult
    Result of transcription.
filepath : str, default None, meaning content will be returned as a ``str``
    Path to save file.
min_dur : float, default 0.2
    Minimum duration allowed for any word/segment before the word/segments are merged with adjacent word/segments.
strip : bool, default True
    Whether to remove spaces before and after text on each segment for output.
reverse_text: bool or tuple, default False
    Whether to reverse the order of words for each segment, or provide the ``prepend_punctuations`` and
    ``append_punctuations`` as a tuple pair instead of ``True``, which uses the default punctuations.

Returns
-------
str
    String of the content if ``filepath`` is ``None``.

Notes
-----
``reverse_text`` will not fix RTL text not displaying tags properly, which is an issue with some video players. VLC
does not seem to suffer from this issue.

Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_model('base')
>>> result = model.transcribe('audio.mp3')
>>> result.to_txt('audio.txt')
Saved: audio.txt
save_as_json()
Save ``result`` as JSON file to ``path``.

Parameters
----------
result : dict or list or stable_whisper.result.WhisperResult
    Result of transcription.
path : str
    Path to save file.
ensure_ascii : bool, default False
    Whether to escape non-ASCII characters.

Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_model('base')
>>> result = model.transcribe('audio.mp3')
>>> result.save_as_json('audio.json')
Saved: audio.json



There are word-level and segment-level timestamps. All output formats support them. All formats except TSV also support both levels simultaneously. By default, segment_level and word_level are both True for all the formats that support both simultaneously.
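
The same flags in the Python API (filenames are illustrative):

result.to_srt_vtt('audio_words.srt', segment_level=False, word_level=True)     # word-level only
result.to_srt_vtt('audio_segments.srt', segment_level=True, word_level=False)  # segment-level only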

Examples in VTT.

Default: segment_level=True + word_level=True

CLI

--segment_level true + --word_level true

00:00:07.760 --> 00:00:09.900
But<00:00:07.860> when<00:00:08.040> you<00:00:08.280> arrived<00:00:08.580> at<00:00:08.800> that<00:00:09.000> distant<00:00:09.400> world,

segment_level=True + word_level=False

00:00:07.760 --> 00:00:09.900
But when you arrived at that distant world,

segment_level=False + word_level=True

00:00:07.760 --> 00:00:07.860
But

00:00:07.860 --> 00:00:08.040
when

00:00:08.040 --> 00:00:08.280
you

00:00:08.280 --> 00:00:08.580
arrived

...

JSON

The result can also be saved as a JSON file to preserve all the data for future reprocessing. This is useful for testing different sets of postprocessing arguments without the need to redo inference.

result.save_as_json('audio.json')
CLI
stable-ts audio.mp3 -o audio.json

Processing a JSON file of the results into SRT:

result = stable_whisper.WhisperResult('audio.json')
result.to_srt_vtt('audio.srt')
CLI
stable-ts audio.json -o audio.srt
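
A saved result can also be regrouped differently without re-running inference; a sketch (the regrouping string is illustrative):

result = stable_whisper.WhisperResult('audio.json')
result.reset()                    # undo any regrouping already applied (see Regrouping Words)
result.regroup('sg=.5_mg=.3+3')   # try a different regrouping
result.to_srt_vtt('audio_regrouped.srt')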

Alignment

Audio can be aligned/synced with plain text at the word level.

text = 'Machines thinking, breeding. You were to bear us a new, promised land.'
result = model.align('audio.mp3', text, language='en')

When the text is correct but the timestamps need more work, align() is a faster alternative for testing various settings/models.

new_result = model.align('audio.mp3', result, language='en')
CLI
stable-ts audio.mp3 --align text.txt --language en

--align can also be a JSON file of a result
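
A sketch of that workflow in Python (the model size and filenames are illustrative): reuse the words of an earlier result, but retime them with a different model.

result = stable_whisper.WhisperResult('audio.json')
better_model = stable_whisper.load_model('medium')
new_result = better_model.align('audio.mp3', result, language='en')
new_result.to_srt_vtt('audio_realigned.srt')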

Docstring:

align()
Align plain text or tokens with audio at word-level.

Since this is significantly faster than transcribing, it is a more efficient method for testing various settings
without re-transcribing. This is also useful for timing a more correct transcript than one that Whisper can produce.

Parameters
----------
model : "Whisper"
    The modified Whisper ASR model instance.
audio : str or numpy.ndarray or torch.Tensor or bytes or AudioLoader
    Path/URL to the audio file, the audio waveform, or bytes of audio file or
    instance of :class:`stable_whisper.audio.AudioLoader`.
    If audio is :class:`numpy.ndarray` or :class:`torch.Tensor`, the audio must already be sampled at 16kHz.
text : str or list of int or stable_whisper.result.WhisperResult
    String of plain-text, list of tokens, or instance of :class:`stable_whisper.result.WhisperResult`.
language : str, default None, uses ``language`` in ``text`` if it is a :class:`stable_whisper.result.WhisperResult`
    Language of ``text``. Required if ``text`` does not contain ``language``.
remove_instant_words : bool, default False
    Whether to truncate any words with zero duration.
token_step : int, default 100
    Max number of tokens to align each pass. Use higher values to reduce chance of misalignment.
original_split : bool, default False
    Whether to preserve the original segment groupings. Segments are split by line breaks if ``text`` is plain-text.
max_word_dur : float or None, default 3.0
    Global maximum word duration in seconds. Re-align words that exceed the global maximum word duration.
word_dur_factor : float or None, default 2.0
    Factor to compute the Local maximum word duration, which is ``word_dur_factor`` * local medium word duration.
    Words that need re-alignment are re-aligned with duration <= local/global maximum word duration.
nonspeech_skip : float or None, default 5.0
    Skip non-speech sections that are equal or longer than this duration in seconds. Disable skipping if ``None``.
fast_mode : bool, default False
    Whether to speed up alignment by re-alignment with local/global maximum word duration.
    ``True`` tends to produce better timestamps when ``text`` is accurate and there are no large speechless gaps.
tokenizer : "Tokenizer", default None, meaning a new tokenizer is created according to ``language`` and ``model``
    A tokenizer used to tokenize text and detokenize tokens.
stream : bool or None, default None
    Whether to load ``audio`` in chunks of 30 seconds until the end of the file/stream.
    If ``None`` and ``audio`` is a string then set to ``True`` else ``False``.
failure_threshold : float, optional
    Abort alignment when the percentage of words with zero duration exceeds ``failure_threshold``.
verbose : bool or None, default False
    Whether to display the text being decoded to the console.
    Displays all the details if ``True``. Displays a progress bar if ``False``. Displays nothing if ``None``.
regroup : bool or str, default True, meaning the default regroup algorithm
    String for customizing the regrouping algorithm. False disables regrouping.
    Ignored if ``word_timestamps = False``.
suppress_silence : bool, default True
    Whether to enable timestamps adjustments based on the detected silence.
suppress_word_ts : bool, default True
    Whether to adjust word timestamps based on the detected silence. Only enabled if ``suppress_silence = True``.
use_word_position : bool, default True
    Whether to use position of the word in its segment to determine whether to keep end or start timestamps if
    adjustments are required. If it is the first word, keep end. Else if it is the last word, keep the start.
q_levels : int, default 20
    Quantization levels for generating the timestamp suppression mask; ignored if ``vad = True``.
    Acts as a threshold for marking sound as silent.
    Fewer levels will increase the volume threshold at which sound is marked as silent.
k_size : int, default 5
    Kernel size for average-pooling the waveform to generate the timestamp suppression mask; ignored if ``vad = True``.
    Recommended values are 5 or 3; higher sizes will reduce detection of silence.
denoiser : str, optional
    String of the denoiser to use for preprocessing ``audio``.
    See ``stable_whisper.audio.SUPPORTED_DENOISERS`` for supported denoisers.
denoiser_options : dict, optional
    Options to use for ``denoiser``.
vad : bool or dict, default False
    Whether to use Silero VAD to generate timestamp suppression mask.
    Instead of ``True``, using a dict of keyword arguments will load the VAD with the arguments.
    Silero VAD requires PyTorch 1.12.0+. Official repo, https://github.com/snakers4/silero-vad.
vad_threshold : float, default 0.35
    Threshold for detecting speech with Silero VAD. Low threshold reduces false positives for silence detection.
min_word_dur : float or None, default None meaning use ``stable_whisper.default.DEFAULT_VALUES``
    Shortest duration each word is allowed to reach for silence suppression.
min_silence_dur : float, optional
    Shortest duration of silence allowed for silence suppression.
nonspeech_error : float, default 0.1
    Relative error of non-speech sections that appear in between a word for silence suppression.
only_voice_freq : bool, default False
    Whether to only use sound between 200 - 5000 Hz, where the majority of human speech lies.
prepend_punctuations : str or None, default None meaning use ``stable_whisper.default.DEFAULT_VALUES``
    Punctuations to prepend to next word.
append_punctuations : str or None, default None meaning use ``stable_whisper.default.DEFAULT_VALUES``
    Punctuations to append to previous word.
progress_callback : Callable, optional
    A function that will be called when transcription progress is updated.
    The callback needs two parameters.
    The first parameter is a float for seconds of the audio that has been transcribed.
    The second parameter is a float for total duration of audio in seconds.
ignore_compatibility : bool, default False
    Whether to ignore warnings for compatibility issues with the detected Whisper version.
extra_models : list of whisper.model.Whisper, optional
    List of additional Whisper model instances to use for computing word-timestamps along with ``model``.
presplit : bool or list of str, default True meaning ['.', '。', '?', '?']
    List of ending punctuation used to split ``text`` into segments for applying ``gap_padding``,
    but segmentation of the final output is unaffected unless ``original_split=True``.
    If ``original_split=True``, the original split is used instead of split from ``presplit``.
    Ignored if ``model`` is a faster-whisper model.
gap_padding : str, default ' ...'
    Only if ``presplit=True``, ``gap_padding`` is prepended to each segment for word timing alignment.
    Used to reduce the probability of model predicting timestamps earlier than the first utterance.
    Ignored if ``model`` is a faster-whisper model.

Returns
-------
stable_whisper.result.WhisperResult or None
    All timestamps, words, probabilities, and other data from the alignment of ``audio``. Return None if alignment
    fails and ``remove_instant_words = True``.

Notes
-----
If ``token_step`` is less than 1, ``token_step`` will be set to its maximum value, 442. This value is computed with
``whisper.model.Whisper.dims.n_text_ctx`` - 6.

If ``original_split = True`` and a line break is found in the middle of a word in ``text``, the split will occur after
that word.

``regroup`` is ignored if ``original_split = True``.

Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_model('base')
>>> result = model.align('helloworld.mp3', 'Hello, World!', 'English')
>>> result.to_srt_vtt('helloworld.srt')
Saved 'helloworld.srt'

Adjustments

Timestamps are adjusted after the model predicts them. When suppress_silence=True (default), transcribe()/transcribe_minimal()/align() adjust based on silence/non-speech. The timestamps can be further adjusted based on another result with adjust_by_result(), which acts as a logical AND operation on the timestamps of both results, further reducing the duration of each word. Note: both results are required to have word timestamps and matching words.

# the adjustments are in-place for `result`
result.adjust_by_result(new_result)

Docstring:

adjust_by_result()
    Minimize the duration of words using timestamps of another result.
    
    Parameters
    ----------
    other_result : "WhisperResult"
        Timing data of the same words in a WhisperResult instance.
    min_word_dur : float or None, default None meaning use ``stable_whisper.default.DEFAULT_VALUES``
        Prevent changes to timestamps if the resultant word duration is less than ``min_word_dur``.
    verbose : bool, default False
        Whether to print out the timestamp changes.
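
A short sketch of the workflow described above (the second transcription's settings are illustrative):

# transcribe the same audio again with different settings, then use it to tighten `result`
new_result = model.transcribe('audio.mp3', vad=True)
result.adjust_by_result(new_result, verbose=True)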

Refinement

Timestamps can be further improved with refine(). This method iteratively mutes portions of the audio based on the current timestamps, then computes the probabilities of the tokens. By monitoring the fluctuation of the probabilities, it tries to find the most precise timestamps. "Most precise" in this case means the latest start and earliest end for the word such that it still meets the specified conditions.

model.refine('audio.mp3', result)
CLI
stable-ts audio.mp3 --refine -o audio.srt

The input can also be a JSON file of a result.

stable-ts result.json --refine -o audio.srt --refine_option "audio=audio.mp3"

Docstring:

refine()
Improve existing timestamps.

This function iteratively mutes portions of the audio and monitors token probabilities to find the most precise
timestamps. "Most precise" in this case means the latest start and earliest end of a word that maintains an
acceptable probability determined by the specified arguments.

This is useful for readjusting timestamps when they start too early or end too late.

Parameters
----------
model : "Whisper"
    The modified Whisper ASR model instance.
audio : str or numpy.ndarray or torch.Tensor or bytes
    Path/URL to the audio file, the audio waveform, or bytes of audio file.
    If audio is :class:`numpy.ndarray` or :class:`torch.Tensor`, the audio must already be sampled at 16kHz.
result : stable_whisper.result.WhisperResult
    All timestamps, words, probabilities, and other data from the transcription of ``audio``.
steps : str, default 'se'
    Instructions for refinement. An 's' means refine start-timestamps. An 'e' means refine end-timestamps.
rel_prob_decrease : float, default 0.3
    Maximum percent decrease in probability relative to the original probability, which is the probability from muting
    according to the initial timestamps.
abs_prob_decrease : float, default 0.05
    Maximum decrease in probability from original probability.
rel_rel_prob_decrease : float, optional
    Maximum percent decrease in probability relative to the previous probability, which is the probability from the
    previous iteration of muting.
prob_threshold : float, default 0.5
    Stop refining the timestamp if the probability of its token goes below this value.
rel_dur_change : float, default 0.5
    Maximum percent change in duration of a word relative to its original duration.
abs_dur_change : float, optional
    Maximum seconds a word is allowed to deviate from its original duration.
word_level : bool, default True
    Whether to refine timestamps on word-level. If ``False``, only refine start/end timestamps of each segment.
precision : float, default 0.1
    Precision of refined timestamps in seconds. The lowest precision is 0.02 seconds.
single_batch : bool, default False
    Whether to process in only batch size of one to reduce memory usage.
inplace : bool, default True
    Whether to alter timestamps in-place. If ``False``, a deepcopy of ``result`` is altered and returned instead.
demucs : bool or torch.nn.Module, default False
    Whether to preprocess ``audio`` with Demucs to isolate vocals / remove noise. Set ``demucs`` to an instance of
    a Demucs model to avoid reloading the model for each run.
    Demucs must be installed to use. Official repo, https://github.com/facebookresearch/demucs.
demucs_options : dict, optional
    Options to use for :func:`stable_whisper.audio.demucs_audio`.
only_voice_freq : bool, default False
    Whether to only use sound between 200 - 5000 Hz, where the majority of human speech lies.
verbose : bool or None, default False
    Whether to display the text being decoded to the console.
    Displays all the details if ``True``. Displays a progress bar if ``False``. Displays nothing if ``None``.

Returns
-------
stable_whisper.result.WhisperResult
    All timestamps, words, probabilities, and other data from the refinement of ``text`` with ``audio``.

Notes
-----
The lower the ``precision``, the longer the processing time.

Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_model('base')
>>> result = model.transcribe('audio.mp3')
>>> model.refine('audio.mp3', result)
>>> result.to_srt_vtt('audio.srt')
Saved 'audio.srt'
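
Refinement can also be limited to one side of the timestamps; a sketch using the documented ``steps`` and ``precision`` parameters (values are illustrative):

# refine only the start timestamps with a finer step size
model.refine('audio.mp3', result, steps='s', precision=0.05)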

Regrouping Words

Stable-ts has a preset for regrouping words into different segments with more natural boundaries. This preset is enabled by regroup=True (default). But there are other built-in regrouping methods that allow you to customize the regrouping algorithm. This preset is just a predefined combination of those methods.

demo2.mp4
# The following results are all functionally equivalent:
result0 = model.transcribe('audio.mp3', regroup=True) # regroup is True by default
result1 = model.transcribe('audio.mp3', regroup=False)
(
    result1
    .clamp_max()
    .split_by_punctuation([(',', ' '), ','])
    .split_by_gap(.5)
    .merge_by_gap(.3, max_words=3)
    .split_by_punctuation([('.', ' '), '。', '?', '?'])
)
result2 = model.transcribe('audio.mp3', regroup='cm_sp=,* /,_sg=.5_mg=.3+3_sp=.* /。/?/?')

# To undo all regrouping operations:
result0.reset()

Any regrouping algorithm can be expressed as a string. Please feel free to share your strings here

Regrouping Methods

regroup()
    Regroup (in-place) words into segments.

    Parameters
    ----------
    regroup_algo: str or bool, default 'da'
        String representation of a custom regrouping algorithm, or ``True`` to use the default algorithm 'da'.
    verbose : bool, default False
        Whether to show all the methods and arguments parsed from ``regroup_algo``.
    only_show : bool, default False
        Whether to show all the methods and arguments parsed from ``regroup_algo`` without running the methods.

    Returns
    -------
    stable_whisper.result.WhisperResult
        The current instance after the changes.

    Notes
    -----
    Syntax for string representation of custom regrouping algorithm.
        Method keys:
            sg: split_by_gap
            sp: split_by_punctuation
            sl: split_by_length
            sd: split_by_duration
            mg: merge_by_gap
            mp: merge_by_punctuation
            ms: merge_all_segment
            cm: clamp_max
            l: lock
            us: unlock_all_segments
            da: default algorithm (cm_sp=,* /,_sg=.5_mg=.3+3_sp=.* /。/?/?)
            rw: remove_word
            rs: remove_segment
            rp: remove_repetition
            rws: remove_words_by_str
            fg: fill_in_gaps
        Metacharacters:
            = separates a method key and its arguments (not used if no argument)
            _ separates method keys (after arguments if there are any)
            + separates arguments for a method key
            / separates an argument into list of strings
            * separates an item in list of strings into a nested list of strings
        Notes:
            - arguments are parsed positionally
            - if no argument is provided, the default ones will be used
            - use 1 or 0 to represent True or False
        Example 1:
            merge_by_gap(.2, 10, lock=True)
            mg=.2+10+++1
            Note: [lock] is the 5th argument, hence the 2 missing arguments in between the three +'s before the 1
        Example 2:
            split_by_punctuation([('.', ' '), '。', '?', '?'], True)
            sp=.* /。/?/?+1
        Example 3:
            merge_all_segments().split_by_gap(.5).merge_by_gap(.15, 3)
            ms_sg=.5_mg=.15+3
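
For example, the string from Example 3 can be parsed without running it via the documented ``only_show`` parameter, or applied directly:

# parse and display the methods/arguments without applying them
result.regroup('ms_sg=.5_mg=.15+3', only_show=True)
# apply the same custom regrouping algorithm
result.regroup('ms_sg=.5_mg=.15+3')
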
split_by_gap()
    Split (in-place) any segment where the gap between two of its words is greater than ``max_gap``.

    Parameters
    ----------
    max_gap : float, default 0.1
        Maximum second(s) allowed between two words in the same segment.
    lock : bool, default False
        Whether to prevent future splits/merges from altering changes made by this method.
    newline: bool, default False
        Whether to insert line break at the split points instead of splitting into separate segments.

    Returns
    -------
    stable_whisper.result.WhisperResult
        The current instance after the changes.
split_by_punctuation()
    Split (in-place) segments at words that start/end with ``punctuation``.

    Parameters
    ----------
    punctuation : list of str or list of tuple of (str, str) or str
        Punctuation(s) to split segments by.
    lock : bool, default False
        Whether to prevent future splits/merges from altering changes made by this method.
    newline : bool, default False
        Whether to insert line break at the split points instead of splitting into separate segments.
    min_words : int, optional
        Split segments with words >= ``min_words``.
    min_chars : int, optional
        Split segments with characters >= ``min_chars``.
    min_dur : int, optional
        Split segments with duration (in seconds) >= ``min_dur``.

    Returns
    -------
    stable_whisper.result.WhisperResult
        The current instance after the changes.
split_by_length()
    Split (in-place) any segment that exceeds ``max_chars`` or ``max_words`` into smaller segments.

    Parameters
    ----------
    max_chars : int, optional
        Maximum number of characters allowed in each segment.
    max_words : int, optional
        Maximum number of words allowed in each segment.
    even_split : bool, default True
        Whether to evenly split a segment in length if it exceeds ``max_chars`` or ``max_words``.
    force_len : bool, default False
        Whether to force a constant length for each segment except the last segment.
        This will ignore all previous non-locked segment boundaries.
    lock : bool, default False
        Whether to prevent future splits/merges from altering changes made by this method.
    include_lock: bool, default False
        Whether to include previous lock before splitting based on max_words, if ``even_split = False``.
        Splitting will be done after the first non-locked word > ``max_chars`` / ``max_words``.
    newline: bool, default False
        Whether to insert line break at the split points instead of splitting into separate segments.

    Returns
    -------
    stable_whisper.result.WhisperResult
        The current instance after the changes.

    Notes
    -----
    If ``even_split = True``, segments can still exceed ``max_chars`` and locked words will be ignored to avoid
    uneven splitting.
split_by_duration()
    Split (in-place) any segment that exceeds ``max_dur`` into smaller segments.

    Parameters
    ----------
    max_dur : float
        Maximum duration (in seconds) per segment.
    even_split : bool, default True
        Whether to evenly split a segment in length if it exceeds ``max_dur``.
    force_len : bool, default False
        Whether to force a constant length for each segment except the last segment.
        This will ignore all previous non-locked segment boundaries.
    lock : bool, default False
        Whether to prevent future splits/merges from altering changes made by this method.
    include_lock: bool, default False
        Whether to include previous lock before splitting based on max_words, if ``even_split = False``.
        Splitting will be done after the first non-locked word > ``max_dur``.
    newline: bool, default False
        Whether to insert line break at the split points instead of splitting into separate segments.

    Returns
    -------
    stable_whisper.result.WhisperResult
        The current instance after the changes.

    Notes
    -----
    If ``even_split = True``, segments can still exceed ``max_dur`` and locked words will be ignored to avoid
    uneven splitting.
merge_by_gap()
    Merge (in-place) any pair of adjacent segments if the gap between them <= ``min_gap``.

    Parameters
    ----------
    min_gap : float, default 0.1
        Minimum second(s) allowed between two segments.
    max_words : int, optional
        Maximum number of words allowed in each segment.
    max_chars : int, optional
        Maximum number of characters allowed in each segment.
    is_sum_max : bool, default False
        Whether ``max_words`` and ``max_chars`` are applied to the merged segment instead of the individual segments
        to be merged.
    lock : bool, default False
        Whether to prevent future splits/merges from altering changes made by this method.
    newline : bool, default False
        Whether to insert a line break between the merged segments.

    Returns
    -------
    stable_whisper.result.WhisperResult
        The current instance after the changes.
merge_by_punctuation()
    Merge (in-place) any two segments that have the specified punctuation(s) in between.

    Parameters
    ----------
    punctuation : list of str or list of tuple of (str, str) or str
        Punctuation(s) to merge segments by.
    max_words : int, optional
        Maximum number of words allowed in each segment.
    max_chars : int, optional
        Maximum number of characters allowed in each segment.
    is_sum_max : bool, default False
        Whether ``max_words`` and ``max_chars`` are applied to the merged segment instead of the individual segments
        to be merged.
    lock : bool, default False
        Whether to prevent future splits/merges from altering changes made by this method.
    newline : bool, default False
        Whether to insert a line break between the merged segments.

    Returns
    -------
    stable_whisper.result.WhisperResult
        The current instance after the changes.
merge_all_segments()
    Merge all segments into one segment.

    Returns
    -------
    stable_whisper.result.WhisperResult
        The current instance after the changes.
clamp_max()
    Clamp all word durations above a certain value.

    This is most effective when applied before and after other regroup operations.

    Parameters
    ----------
    medium_factor : float, default 2.5
        Clamp durations above (``medium_factor`` * medium duration) per segment.
        If ``medium_factor = None/0`` or the segment has fewer than 3 words, it will be ignored and only ``max_dur`` will be used.
    max_dur : float, optional
        Clamp durations above ``max_dur``.
    clip_start : bool or None, default None
        Whether to clamp the start of a word. If ``None``, clamp the start of first word and end of last word per
        segment.
    verbose : bool, default False
        Whether to print out the timestamp changes.

    Returns
    -------
    stable_whisper.result.WhisperResult
        The current instance after the changes.
lock()
    Lock words/segments with matching prefix/suffix to prevent splitting/merging.

    Parameters
    ----------
    startswith: str or list of str
        Prefixes to lock.
    endswith: str or list of str
        Suffixes to lock.
    right : bool, default True
        Whether to prevent splits/merges with the next word/segment.
    left : bool, default False
        Whether to prevent splits/merges with the previous word/segment.
    case_sensitive : bool, default False
        Whether to match the case of the prefixes/suffixes with the words/segments.
    strip : bool, default True
        Whether to ignore spaces before and after both words/segments and prefixes/suffixes.

    Returns
    -------
    stable_whisper.result.WhisperResult
        The current instance after the changes.
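
A sketch using the parameters above (the suffix is illustrative):

# keep words that end with "..." attached to the word that follows them
result.lock(endswith='...', right=True, left=False)
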
pad()
    Pad (in-place) timestamps in chronological order.

    Parameters
    ----------
    start_pad : float, optional
        Seconds to pad start timestamps.
        Each start timestamp will be extended no earlier than the end timestamp of the previous word.
    end_pad : float, optional
        Seconds to pad end timestamps.
        Each end timestamp will be extended no later than the start timestamp of the next word or ``max_end``.
    max_dur : float, optional
        Only pad segments or words (``word_level=True``) with duration (in seconds) under or equal to ``max_dur``.
    max_end : float, optional
        Timestamp (in seconds) that padded timestamps cannot exceed.
        Generally used to prevent the last padded end timestamp from exceeding the total duration of the audio.
    word_level : bool, default False
        Whether to pad segment timestamps or word timestamps.

    Returns
    -------
    stable_whisper.result.WhisperResult
        The current instance after the changes.
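
A sketch using the parameters above (values are illustrative):

# extend each segment's end timestamp by up to 0.5 seconds, without overlapping
# the next segment or exceeding a 180-second audio duration
result.pad(end_pad=0.5, max_end=180.0)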

Editing

The editing methods in stable-ts can be chained with Regrouping Methods and used in regroup().

Remove specific instances of words or segments:

# Remove first word of the first segment:
first_word = result[0][0]
result.remove_word(first_word)
# The following also does the same:
del result[0][0]

# Remove the last segment:
last_segment = result[-1]
result.remove_segment(last_segment)
# The following also does the same:
del result[-1]

Docstrings:

remove_word()
    Remove a word.

    Parameters
    ----------
    word : WordTiming or tuple of (int, int)
        Instance of :class:`stable_whisper.result.WordTiming` or tuple of (segment index, word index).
    reassign_ids : bool, default True
        Whether to reassign segment and word ids (indices) after removing ``word``.
    verbose : bool, default True
        Whether to print detail of the removed word.

    Returns
    -------
    stable_whisper.result.WhisperResult
        The current instance after the changes.
remove_segment()
    Remove a segment.

    Parameters
    ----------
    segment : Segment or int
        Instance :class:`stable_whisper.result.Segment` or segment index.
    reassign_ids : bool, default True
        Whether to reassign segment IDs (indices) after removing ``segment``.
    verbose : bool, default True
        Whether to print detail of the removed segment.

    Returns
    -------
    stable_whisper.result.WhisperResult
        The current instance after the changes.

Removing repetitions:

# Example 1: "This is is is a test." -> "This is a test."
# The following removes the last two " is":
result.remove_repetition(1)

# Example 2: "This is is is a test this is a test." -> "This is a test."
# The following removes the second " is" and third " is", then removes the last "this is a test"
# The first parameter `max_words` is `4` because "this is a test" consists of 4 words
result.remove_repetition(4)

Docstring:

remove_repetition()
    Remove words that repeat consecutively.

    Parameters
    ----------
    max_words : int
        Maximum number of words to look for consecutively.
    case_sensitive : bool, default False
        Whether the case of words needs to match to be considered a repetition.
    strip : bool, default True
        Whether to ignore spaces before and after each word.
    ignore_punctuations : str, default '"',.?!'
        Ending punctuations to ignore.
    extend_duration : bool, default True
        Whether to extend the duration of the previous word to cover the duration of the repetition.
    verbose : bool, default True
        Whether to print detail of the removed repetitions.

    Returns
    -------
    stable_whisper.result.WhisperResult
        The current instance after the changes.

Removing specific word(s) by string content:

# Remove all " ok" from " ok ok this is a test."
result.remove_words_by_str('ok')

# Remove all " ok" and " Um..." from " ok this is a test. Um..."
result.remove_words_by_str(['ok', 'um'])

Docstring:

remove_words_by_str()
    Remove words that match ``words``.

    Parameters
    ----------
    words : str or list of str or None
        A word or list of words to remove. ``None`` for all words to be passed into ``filters``.
    case_sensitive : bool, default False
        Whether the case of words needs to match to be considered the same word.
    strip : bool, default True
        Whether to ignore spaces before and after each word.
    ignore_punctuations : str, default '"',.?!'
        Ending punctuations to ignore.
    min_prob : float, optional
        Acts as the first filter for the words that match ``words``. Words with probability < ``min_prob`` will
        be removed if ``filters`` is ``None``, else pass the words into ``filters``. Words without probability will
        be treated as having probability < ``min_prob``.
    filters : Callable, optional
        A function that takes an instance of :class:`stable_whisper.result.WordTiming` as its only argument.
        This function is a custom filter for the words that match ``words`` and were not caught by ``min_prob``.
    verbose:
        Whether to print detail of the removed words.

    Returns
    -------
    stable_whisper.result.WhisperResult
        The current instance after the changes.
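
For instance, ``min_prob`` can restrict removal to low-confidence occurrences; a minimal sketch (the words and threshold are illustrative):

# remove " um" / " uh" only where the word probability is below 0.5
result.remove_words_by_str(['um', 'uh'], min_prob=0.5)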

Filling in segment gaps:

# result0:             [" How are you?"] [" I'm good."]                     [" Good!"]
# result1: [" Hello!"] [" How are you?"]                [" How about you?"] [" Good!"]
result0.fill_in_gaps(result1)
# After filling in the gaps in `result0` with contents in `result1`:
# result0: [" Hello!"] [" How are you?"] [" I'm good."] [" How about you?"] [" Good!"]

Docstring:

fill_in_gaps()
    Fill in segment gaps larger than ``min_gap`` with content from ``other_result`` at the times of gaps.

    Parameters
    ----------
    other_result : WhisperResult or str
        Another transcription result as an instance of :class:`stable_whisper.result.WhisperResult` or path to the
        JSON of the result.
    min_gap : float, default 0.1
        The minimum seconds of a gap between segments that must be exceeded to be filled in.
    case_sensitive : bool, default False
        Whether to consider the case of the first and last word of the gap to determine overlapping words to remove
        before filling in.
    strip : bool, default True
        Whether to ignore spaces before and after the first and last word of the gap to determine overlapping words
        to remove before filling in.
    ignore_punctuations : bool, default '"',.?!'
        Ending punctuations to ignore in the first and last word of the gap to determine overlapping words to
        remove before filling in.
    verbose:
        Whether to print detail of the filled content.

    Returns
    -------
    stable_whisper.result.WhisperResult
        The current instance after the changes.
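
Since ``other_result`` can also be a path to a saved result, one workflow is to transcribe the same audio with a second model, save that result to JSON, and fill in the gaps from the file; a minimal sketch (the model choices are illustrative, and ``save_as_json`` is assumed here for writing the JSON):

import stable_whisper

result0 = stable_whisper.load_model('base').transcribe('audio.mp3')
result1 = stable_whisper.load_model('small').transcribe('audio.mp3')
result1.save_as_json('audio_small.json')
# fill gaps in result0 with content from the saved result
result0.fill_in_gaps('audio_small.json')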

Locating Words

There are two ways to locate words. The first way is by approximating the time at which the words are spoken, then transcribing a few seconds around the approximated time. This is also the faster way to locate words.

matches = model.locate('audio.mp3', 'are', language='en', count=0)
for match in matches:
    print(match.to_display_str())
# verbose=True does the same thing as this for-loop.

Docstring:

locate()
Locate when specific words are spoken in ``audio`` without fully transcribing.

This is useful for quickly finding at what time specific words or phrases are spoken in an audio. Since it
does not need to transcribe the audio to approximate the time, it is significantly faster than transcribing the
audio then locating the words in the transcript.

It can also transcribe a few seconds around the approximated time to find out what was said around those words or
to confirm whether the word was even spoken near that time.

Parameters
----------
model : whisper.model.Whisper
    An instance of Whisper ASR model.
audio : str or numpy.ndarray or torch.Tensor or bytes
    Path/URL to the audio file, the audio waveform, or bytes of audio file.
    If audio is :class:`numpy.ndarray` or :class:`torch.Tensor`, the audio must already be sampled at 16kHz.
text : str or list of int
    Words/phrase or list of tokens to search for in ``audio``.
language : str
    Language of the ``text``.
count : int, default 1, meaning stop search after 1 match
    Number of matches to find. Use 0 to look for all.
duration_window : float or tuple of (float, float), default 3.0, same as (3.0, 3.0)
    Seconds before and after the end timestamp approximations to transcribe after mode 1.
    If tuple pair of values, then the 1st value will be seconds before the end and 2nd value will be seconds after.
mode : int, default 0
    Mode of search.
    2, Approximates the end timestamp of ``text`` in the audio. This mode does not confirm whether ``text`` is
        spoken at the timestamp.
    1, Completes mode 2 then transcribes audio within ``duration_window`` to confirm whether ``text`` is a match at
        the approximated timestamp by checking if ``text`` at that ``duration_window`` is within
        ``probability_threshold`` or by matching the string content of ``text`` with the transcribed text at the
        ``duration_window``.
    0, Completes mode 1 then adds word timestamps to the transcription of each match.
    Modes from fastest to slowest: 2, 1, 0
start : float, optional, meaning it starts from 0s
    Seconds into the audio to start searching for ``text``.
end : float, optional
    Seconds into the audio to stop searching for ``text``.
probability_threshold : float, default 0.5
    Minimum probability of each token in ``text`` for it to be considered a match.
eots : int, default 1
    Number of EOTs to reach before stopping transcription at mode 1. When the transcription reaches an EOT, it usually
    means the end of the segment or audio. Once ``text`` is found in the ``duration_window``, the transcription
    will stop immediately upon reaching an EOT.
max_token_per_seg : int, default 20
    Maximum number of tokens to transcribe in the ``duration_window`` before stopping.
exact_token : bool, default False
    Whether to find a match based on the exact tokens that make up ``text``.
case_sensitive : bool, default False
    Whether to consider the case of ``text`` when matching in string content.
verbose : bool or None, default False
    Whether to display the text being decoded to the console.
    Displays all the details if ``True``. Displays progressbar if ``False``. Display nothing if ``None``.
initial_prompt : str, optional
    Text to provide as a prompt for the first window. This can be used to provide, or
    "prompt-engineer" a context for transcription, e.g. custom vocabularies or proper nouns
    to make it more likely to predict those words correctly.
suppress_tokens : str or list of int, default '-1', meaning suppress special characters except common punctuations
    List of tokens to suppress.
denoiser : str, optional
    String of the denoiser to use for preprocessing ``audio``.
    See ``stable_whisper.audio.SUPPORTED_DENOISERS`` for supported denoisers.
denoiser_options : dict, optional
    Options to use for ``denoiser``.
only_voice_freq : bool, default False
    Whether to only use sound between 200 - 5000 Hz, where the majority of human speech is.

Returns
-------
stable_whisper.result.Segment or list of dict or list of float
    Mode 0, list of instances of :class:`stable_whisper.result.Segment`.
    Mode 1, list of dictionaries with end timestamp approximation of matches and transcribed neighboring words.
    Mode 2, list of timestamps in seconds for each end timestamp approximation.

Notes
-----
For ``text``, the case and spacing matter since 'on', ' on', ' On' are different tokens, therefore choose the one that
best suits the context (e.g. ' On' to look for it at the beginning of a sentence).

Use a sufficiently large first value for ``duration_window``, i.e. a value greater than the time it is expected to take to speak ``text``.

If ``exact_token = False`` and the string content matches, then ``probability_threshold`` is not used.

Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_model('base')
>>> matches = model.locate('audio.mp3', 'are', language='English', verbose=True)

Some words can sound the same but have different spellings. To increase the chance of finding such words, use
``initial_prompt``.

>>> matches = model.locate('audio.mp3', ' Nickie', 'English', verbose=True, initial_prompt='Nickie')
CLI
stable-ts audio.mp3 --locate "are" --language en -to "count=0"

The second way allows you to locate words with regular expressions, but it requires the audio to be fully transcribed first.

result = model.transcribe('audio.mp3')
# Find every sentence that contains "and"
matches = result.find(r'[^.]+and[^.]+\.')
# print all matches if there are any
for match in matches:
  print(f'match: {match.text_match}\n'
        f'text: {match.text}\n'
        f'start: {match.start}\n'
        f'end: {match.end}\n')
  
# Find the word before and after "and" in the matches
matches = matches.find(r'\s\S+\sand\s\S+')
for match in matches:
  print(f'match: {match.text_match}\n'
        f'text: {match.text}\n'
        f'start: {match.start}\n'
        f'end: {match.end}\n')

Docstring:

find()
    Find segments/words and timestamps with regular expression.

    Parameters
    ----------
    pattern : str
        RegEx pattern to search for.
    word_level : bool, default True
        Whether to search at word-level.
    flags : optional
        RegEx flags.

    Returns
    -------
    stable_whisper.result.WhisperResultMatches
        An instance of :class:`stable_whisper.result.WhisperResultMatches` with word/segment that match ``pattern``.

Silence Suppression

While the timestamps predicted by Whisper are generally accurate, it sometimes predicts the start of a word way before the word is spoken or the end of a word long after the word has been spoken. This is where "silence suppression" helps. It is enabled by default (suppress_silence=True). The idea is to adjust the timestamps based on the timestamps of the non-speech portions of the audio.

silence_suppresion0

Note: In 1.X, "silence suppression" referred to the process of suppressing timestamp tokens of the silent portions during inference, but it changed to post-inference timestamp adjustments in 2.X, which allows stable-ts to be used with other ASR models. The timestamp token suppression feature is disabled by default, but can still be enabled with suppress_ts_tokens=True.

By default, stable-ts determines the non-speech timestamps based on how loud a section of the audio is relative to the neighboring sections. This method is most effective for cases, where the speech is significantly louder than the background noise. The other method is to use Silero VAD (enabled with vad=True). To visualize the differences between non-VAD and VAD, see Visualizing Suppression.

Besides the parameters for non-speech detection sensitivity (see Visualizing Suppression), the following parameters are used to combat inaccurate non-speech detection (a sketch of passing them to transcribe() follows this list).

  • min_word_dur is the shortest duration each word is allowed from adjustments.
  • nonspeech_error is the relative error of the non-speech that appears in between a word.
  • use_word_position is whether to use word position in segment to determine whether to keep the end or start timestamps.

Note: nonspeech_error was not available before 2.14.0; use_word_position was not available before 2.14.2; min_word_dur prevented any adjustments that resulted in word durations shorter than min_word_dur.
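
A minimal sketch of passing these options to transcribe() (the values are illustrative; the parameter names are the ones described above):

import stable_whisper

model = stable_whisper.load_model('base')
result = model.transcribe(
    'audio.mp3',
    vad=True,               # use Silero VAD for non-speech detection
    min_word_dur=0.1,
    nonspeech_error=0.3,
    use_word_position=True,
)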

For the following example, min_word_dur=0.5 (default: 0.1) and nonspeech_error=0.3 (default: 0.3).

silence_suppresion1

nonspeech_error=0.3 allows each non-speech section to be treated as 1.3 times its actual duration: either from the start of the corresponding word to the end of the non-speech, or from the start of the non-speech to the end of the corresponding word. In the case that both conditions are met, the shorter one is used. Or if both are equal, then the start of the non-speech to the end of the word is used.
The second non-speech section, from 1.375s to 1.75s, is ignored for 'world.' because it failed both conditions.
The first word, 'Hello', satisfies only the former condition from 0s to 0.625s, thus the new start for 'Hello' would be 0.625s. However, min_word_dur=0.5 requires the resultant duration to be at least 0.5s. As a result, the start of 'Hello' is changed to 0.375s instead of 0.625s. Furthermore, the default setting, use_word_position=True, also ensures the start is adjusted for the first word and the end is adjusted for the last word of the segment as long as one of the conditions is true.

Tips

  • do not disable word timestamps with word_timestamps=False for reliable segment timestamps
  • use vad=True for more accurate non-speech detection
  • use denoiser="demucs" to isolate vocals with Demucs; it is also effective at isolating vocals even if there is no music
  • use denoiser="demucs" and vad=True for music (several of these tips are combined in the sketch after this list)
  • set the same seed for each transcription (e.g. random.seed(0)) for denoiser="demucs" to produce deterministic outputs
  • to enable dynamic quantization for inference on CPU use --dq true for CLI or dq=True for stable_whisper.load_model
  • use encode_video_comparison() to encode multiple transcripts into one video for synced comparison; see Encode Comparison
  • use visualize_suppression() to visualize the differences between non-VAD and VAD options; see Visualizing Suppression
  • refinement can be an effective (but slow) alternative for polishing timestamps if silence suppression isn't effective
  • use --persist/-p for CLI to keep the CLI running without reloading the same model after it finishes executing its commands
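
A minimal sketch combining several of these tips (the model choice and file names are illustrative):

import random
import stable_whisper

random.seed(0)  # same seed so denoiser='demucs' produces deterministic outputs
model = stable_whisper.load_model('base', dq=True)  # dynamic quantization; only helps on CPU
result = model.transcribe('audio.mp3', denoiser='demucs', vad=True)
result.to_srt_vtt('audio.srt')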

Visualizing Suppression

You can visualize which parts of the audio will likely be suppressed (i.e. marked as silent). Requires: Pillow or opencv-python.

Without VAD

import stable_whisper
# regions on the waveform colored red are where it will likely be suppressed and marked as silent
# q_levels=20 and k_size=5 (default)
stable_whisper.visualize_suppression('audio.mp3', 'image.png', q_levels=20, k_size=5)

novad

# vad_threshold=0.35 (default)
stable_whisper.visualize_suppression('audio.mp3', 'image.png', vad=True, vad_threshold=0.35)

vad

Docstring:

visualize_suppression()
Visualize regions on the waveform of ``audio`` detected as silent.

Regions on the waveform colored red are detected as silent.

Parameters
----------
audio : str or numpy.ndarray or torch.Tensor or bytes
    Path/URL to the audio file, the audio waveform, or bytes of audio file.
    If audio is ``numpy.ndarray`` or ``torch.Tensor``, the audio must already be sampled at 16kHz.
output : str, default None, meaning image will be shown directly via Pillow or opencv-python
    Path to save visualization.
q_levels : int, default 20
    Quantization levels for generating timestamp suppression mask; ignored if ``vad = True``.
    Acts as a threshold for marking sound as silent.
    Fewer levels will increase the threshold of volume at which to mark a sound as silent.
k_size : int, default 5
    Kernel size for avg-pooling waveform to generate timestamp suppression mask; ignored if ``vad = True``.
    Recommend 5 or 3; higher sizes will reduce detection of silence.
vad : bool or dict, default False
    Whether to use Silero VAD to generate timestamp suppression mask.
    Instead of ``True``, using a dict of keyword arguments will load the VAD with the arguments.
    Silero VAD requires PyTorch 1.12.0+. Official repo, https://github.com/snakers4/silero-vad.
vad_threshold : float, default 0.35
    Threshold for detecting speech with Silero VAD. Low threshold reduces false positives for silence detection.
max_width : int, default 1500
    Maximum width of visualization to avoid overly large image from long audio.
    Each pixel is equivalent to 1 token. Use -1 to visualize the entire audio track.
height : int, default 200
    Height of visualization.

Encode Comparison

You can encode videos similar to the ones in the doc for comparing transcriptions of the same audio.

stable_whisper.encode_video_comparison(
    'audio.mp3', 
    ['audio_sub1.srt', 'audio_sub2.srt'], 
    output_videopath='audio.mp4', 
    labels=['Example 1', 'Example 2']
)

Docstring:

encode_video_comparison()
Encode multiple subtitle files into one video with the subtitles vertically stacked.

Parameters
----------
audiofile : str
    Path of audio file.
subtitle_files : list of str
    List of paths for subtitle file.
output_videopath : str, optional
    Output video path.
labels : list of str, default None, meaning use ``subtitle_files`` as labels
    List of labels for ``subtitle_files``.
height : int, default 90
    Height for each subtitle section.
width : int, default 720
    Width for each subtitle section.
color : str, default 'black'
    Background color of the video.
fontsize: int, default 70
    Font size for subtitles.
border_color : str, default 'white'
    Border color for separating the sections of subtitle.
label_color : str, default 'white'
    Color of labels.
label_size : int, default 14
    Font size of labels.
fps : int, default 25
    Frame-rate of the video.
video_codec : str, optional
    Video codec of the video.
audio_codec : str, optional
    Audio codec of the video.
overwrite : bool, default False
    Whether to overwrite existing video files with the same path as the output video.
only_cmd : bool, default False
    Whether to skip encoding and only return the full command generated from the specified options.
verbose : bool, default True
    Whether to display ffmpeg processing info.

Returns
-------
str or None
    Encoding command as a string if ``only_cmd = True``.

Multiple Files with CLI

Transcribe multiple audio files then process the results directly into SRT files.

stable-ts audio1.mp3 audio2.mp3 audio3.mp3 -o audio1.srt audio2.srt audio3.srt

Any ASR

You can use most of the features of Stable-ts to improve the results of any ASR model/API. Just follow this notebook.
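
A rough sketch of the general idea (the input format here is an assumption; the notebook is the authoritative reference): reshape the other ASR's output into segments with word timings, wrap it in a WhisperResult, then use the usual post-processing and output methods.

import stable_whisper

# hypothetical output from another ASR system, reshaped into a segments/words layout
other_asr = {'segments': [
    {'start': 0.0, 'end': 1.2, 'text': ' Hello world.',
     'words': [{'start': 0.0, 'end': 0.5, 'word': ' Hello'},
               {'start': 0.5, 'end': 1.2, 'word': ' world.'}]},
]}

result = stable_whisper.WhisperResult(other_asr)  # assumption: a plain dict of segments is accepted
result.regroup()            # assumption: a no-arg call applies the default regrouping
result.to_srt_vtt('audio.srt')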

Quick 1.X → 2.X Guide

What's new in 2.0.0?

  • updated to use Whisper's more reliable word-level timestamps method.
  • the more reliable word timestamps allow regrouping all words into segments with more natural boundaries.
  • can now suppress silence with Silero VAD (requires PyTorch 1.12.0+)
  • non-VAD silence suppression is also more robust

Usage changes

  • results_to_sentence_srt(result, 'audio.srt') → result.to_srt_vtt('audio.srt', word_level=False)
  • results_to_word_srt(result, 'audio.srt') → result.to_srt_vtt('output.srt', segment_level=False)
  • results_to_sentence_word_ass(result, 'audio.srt') → result.to_ass('output.ass')
  • there's no need to stabilize segments after inference because they're already stabilized during inference
  • transcribe() returns a WhisperResult object which can be converted to dict with .to_dict(), e.g. result.to_dict()
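
Putting the new 2.X calls together (a minimal sketch; file names are illustrative):

import stable_whisper

model = stable_whisper.load_model('base')
result = model.transcribe('audio.mp3')                      # segments are already stabilized during inference
result.to_srt_vtt('audio.srt', word_level=False)            # sentence-level SRT (old results_to_sentence_srt)
result.to_srt_vtt('audio_words.srt', segment_level=False)   # word-level SRT (old results_to_word_srt)
result_dict = result.to_dict()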

License

This project is licensed under the MIT License - see the LICENSE file for details

Acknowledgments

Includes slight modification of the original work: Whisper

stable-ts's People

Contributors

charly24, emiliobarradas, emiliskiskis, erdembocugoz, eschmidbauer, george0828zhang, jerome-labonte-udem, jianfch, jorianwoltjer, mcclouds, navalnica, shaishaicookie, sokoloid, trsa993


stable-ts's Issues

Adds support for FasterWhisper

A suggestion would be to add support for Faster Whisper, which is much faster and uses much less VRAM than Whisper. You can use the Whisper Large V2 model and only use 4.6GB VRAM instead of the original 10GB. If you can add this, it will be very useful for the community.

No more live debug output

Hey,

With stable-ts's modified model and at least the results_to_sentence_srt function, I don't see any live messages anymore when using the library with the verbose flag. All the output that would normally be printed to stdout while things are happening, is printed as a single block at the end.

I'm guessing that's due to tighten_timestamps evaluating the whole generator? Can we do something about that?

Thanks

ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate

Whisper setup and running properly in my venv. Running on an intel macbook pro 16" with AMD Radeon Pro 5500M 8 GB.

Example code:

import whisper
from stable_whisper import modify_model

model = whisper.load_model('base', 'cuda')
modify_model(model)
# modified model should run just like the regular model but with additional hyperparameters and extra data in results
results = model.transcribe('audio.wav')
stab_segments = results['segments']
first_segment_word_timestamps = stab_segments[0]['whole_word_timestamps']

# or to get token timestamps that adhere more to the top prediction
from stable_whisper import stabilize_timestamps
stab_segments = stabilize_timestamps(results, top_focus=True)
print(stab_segments)

Error

python test.py                                                                                                                             8:56AM
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 1350, in do_open
    encode_chunked=req.has_header('Transfer-encoding'))
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1277, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1323, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1272, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1032, in _send_output
    self.send(msg)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 972, in send
    self.connect()
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1447, in connect
    server_hostname=server_hostname)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ssl.py", line 423, in wrap_socket
    session=session
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ssl.py", line 870, in _create
    self.do_handshake()
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ssl.py", line 1139, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1091)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test.py", line 4, in <module>
    model = whisper.load_model('base', 'cuda')
  File "/Users/dangoodman/code/whisper-api/venv/lib/python3.7/site-packages/whisper/__init__.py", line 96, in load_model
    checkpoint_file = _download(_MODELS[name], download_root, in_memory)
  File "/Users/dangoodman/code/whisper-api/venv/lib/python3.7/site-packages/whisper/__init__.py", line 46, in _download
    with urllib.request.urlopen(url) as source, open(download_target, "wb") as output:
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 525, in open
    response = self._open(req, data)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 543, in _open
    '_open', req)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 503, in _call_chain
    result = func(*args)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 1393, in https_open
    context=self._context, check_hostname=self._check_hostname)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 1352, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1091)>

Segmentation fault: 11 in conda env

I am running a Python 3.9 conda environment with the copy-pasted code provided.
The segfault occurs on the first line: model = load_model('base')

Get true head & tail timestamps from segment first/last words

I wonder, would it at all be possible to utilise the times of first and last words to get true timestamps when generating subtitles?

Currently, subtitles are gapless (don't start and end respective to dialogue) although I might have seen that it was being worked on over on the whisper repo.

Broken sentences and even words when transcribing

I'm noticing not only sentences being broken apart (which is probably ok), but that words themselves are broken from one timestamp to the next:

00:00:22,280 --> 00:00:23,820
m your

8
00:00:23,820 --> 00:00:26,220
chair for education finance and

9
00:00:26,220 --> 00:00:28,660
will be doing some introductions

10
00:00:28,660 --> 00:00:29,660
and I'

11
00:00:29,660 --> 00:00:32,660
m going to talk about the

12
00:00:32,660 --> 00:00:34,660
microphone. When you are

transcribe not working

Just found this great repo. I did a test run; it ran fine in the first test with long audio. The test code:

import whisper
from stable_whisper import modify_model

audio = "postaudio-11.mp3"
model = whisper.load_model('tiny')
modify_model(model)
result1 = model.transcribe(audio, language='en', suppress_silence=False, ts_num=16)
result2 = model.transcribe(audio, language='en', suppress_silence=True, ts_num=16, lower_quantile=0.05, lower_threshold=0.1)
result2 = model.transcribe(audio, language='en', suppress_silence=True, ts_num=16, lower_quantile=0.05, lower_threshold=0.1)
print(result1)
print(result2)

It worked the first time, but then with short audio it does not run and gives this error:
File "\codes\wisper\test.py", line 11, in
result2 = model.transcribe(audio, language='en', suppress_silence=True, ts_num=16, lower_quantile=0.05, lower_threshold=0.1)
File "
\codes\wisper\stable_whisper.py", line 988, in transcribe_word_level
wf = _load_audio_waveform(audio_for_mask or audio, 100, int(mel.shape[-1] * ts_scale))
File "**\codes\wisper\stable_whisper.py", line 674, in _load_audio_waveform
return np.frombuffer(waveform, dtype=np.uint8).reshape(h, w)
ValueError: cannot reshape array of size 151200 into shape (100,252)`

any help would be appreciated

edit: after further testing, I found that it is not working with short audio

Space at start of each line

Hey, thank you for this.

I'm experimenting with getting proper sentences out of Whisper that don't always start and end on a full second.

I've observed that when using:

results_to_sentence_srt(result, srt_path)

Every text line is prepended with a space. Is that intentional or a bug?

66
00:03:18,000 --> 00:03:19,000
 It's been nowhere.

67
00:03:19,000 --> 00:03:20,000
 Right.

Thanks!

Edit: for a quick fix I've changed

f'{sub["text"]}\n'
to f'{sub["text"].strip()}\n'

Suppress_silence and suppress_middle recommended usage

I read through #48 to get a general understanding and also saw:

   parser.add_argument('--suppress_silence', type=str2bool, default=True,
                       help="whether to suppress timestamp where audio is silent at segment-level")
   parser.add_argument('--suppress_middle', type=str2bool, default=True,
                       help="whether to suppress silence only for beginning and ending of segments")

But I am still a little fuzzy on when each should be used, and their pros/cons. Could someone post what their experience has been with tweaking the parameters of stable-ts, and when one might be appropriate or not?

assertion failed for empty sectioned_segments

I added 2 print statements to debug:

    segments = deepcopy(segments)
    sectioned_segments: List[List] = [[]]
    for i, seg in enumerate(segments, 1):
        sectioned_segments[-1].append(seg)
        if seg["anchor_point"]:
            if i < len(segments):
                sectioned_segments.append([])

    print(sectioned_segments)
    print(set(len(set(s["offset"] for s in segs)) == 1 for segs in sectioned_segments))
    assert all(
        set(len(set(s["offset"] for s in segs)) == 1 for segs in sectioned_segments)
    )

I got:

[[]]
{False}

I am not sure why sectioned_segments is initialized as [[]]. It seems like it should be [].

Suppressing timestamps in silent regions - is the premise correct?

If I understand the implementation correctly, it suppresses any timestamps that fall within a silent region.

But now consider the following scenario where # indicates speech, and | indicates a candidate timestamp for the start of the first word in the segment:

1  2              3     4
|  |              |     |
                   ######## ### ###### #############                  

The most accurate timestamp candidate is number 3, because it is the closest to the boundary between silence and speech, but it happens to fall on the silent side of that boundary by a very small amount, so the best candidate will unfortunately be filtered out.

Now consider timestamps that occur in the middle of the segment, and consider a scenario where the most accurate candidates happen to fall in these places:

                           |   |      | 
                   ######## ### ###### #############                  

If you filter these timestamps out because they just land on the silent side of the boundary, you will actually get less accurate timestamps. And rather than just switch off suppress_middle, I think these silent gaps should be treated as useful signposts as to where word boundaries are likely to be according to the speech signal.

So I am thinking the premise should be flipped on its head. I would think that the boundaries of these silent gaps should act as attractors for good timestamp candidates. And I would go so far as to say that nearby timestamp candidates should be snapped to the boundaries of these silent regions if they are close enough. Let's say, the larger the silent gap and the closer a timestamp is to the boundary, the stronger should be the attraction of a timestamp to that nearby boundary.

Now, there are some words in various languages where you have a glottal stop in the middle of a word, where silence doesn't actually indicate a word boundary, but in general, the larger that gap is, the more likely it is to indicate a word boundary. That's true of even the very large gap at the start of the segment.

A related consideration here is that you don't want to have multiple words snapping to the same signpost from the same side. Even with the current implementation, there may also be a similar issue where just the raw timestamps that you get out of whisper may sometimes cause multiple words to collide in undesirable ways, so that's still an issue in its own right that is worth looking into. Currently I think it ends up merging those words when it really shouldn't. I have encountered examples where "eat. So" was merged into one word, probably because of inaccurate or overlapping timestamps. And a full stop/period is a perfect example of where you might want to use these silent gaps as signposts to figure out the most likely timestamp for the beginning of the next sentence, rather than discarding this information when a timestamp is the closest to the boundary but happens to fall on the silent side of it.

A cheap solution would be to just add some padding to these speech regions, but that padding would also end up losing the signal of some of the smaller silent gaps between some words in the middle of a segment, particularly ones where there is a full stop/period in the middle of the segment.

Encoding issue when outputting results_to_word_srt

I'm new to python but wanted to share this incase anyone else has this issue.
Python Script

import whisper
from stable_ts.stable_whisper import modify_model
from stable_ts.stable_whisper import results_to_word_srt


model = whisper.load_model("medium")
modify_model(model)
results = model.transcribe("test.mp3")
stab_segments = results['segments']
first_segment_word_timestamps = stab_segments[0]['whole_word_timestamps']

results_to_word_srt(results, 'audio.srt', combine_compound=True)

Error:

Traceback (most recent call last):
  File "C:\scripts\whispertest.py", line 12, in <module>
    srt = results_to_word_srt(results, 'audio.srt')
  File "C:\scripts\stable_ts\stable_whisper.py", line 223, in results_to_word_srt
    to_srt(group_word_timestamps(res, combine_compound=combine_compound), srt_path)
  File "C:\scripts\stable_ts\stable_whisper.py", line 109, in to_srt
    f.write(srt_str)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.1776.0_x64__qbz5n2kfra8p0\lib\encodings\cp1252.py", line 19, in encode
    return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u30b5' in position 10570: character maps to <undefined>

Fix stable_whisper.py lines 107-110:

    if save_path:
        with open(save_path, 'w', encoding="utf-8") as f:
            f.write(srt_str)
        print(f'Saved: {os.path.abspath(save_path)}')

Does this support multiple languages?

I'd like to use this for a project where multi-language support is important. I can't use whisper-timestamped because it's Affero GPL. But it points out some risks that concern me with WhisperX:

  • The need to find one wav2vec model per language to support.
  • The need to normalize characters in whisper transcription to match the character set of wav2vec model. This involves awkward language-dependent conversions, like converting numbers to words ("2" -> "two"), symbols to words ("%" -> "percent", "€" -> "euro(s)")...
  • The lack of robustness around speech disfluencies (fillers, hesitations, repeated words...) that are usually removed by Whisper.

So I'm inclined to try your project. Thank you for the MIT license choice. Should I expect your project does not suffer from the limitations listed above for WhisperX?

Also, if you can, are there any pros / cons to using your project vs word_level_ts?

.ass with the same content as an .srt file

Hello,

Thank you very much for implementing this in stable-ts. Maybe I don't know how to use it correctly.

I am using this script to generate a .ass file.

However, I am getting an .ass file with the same content as an .srt file.

Script:

import whisper
from stable_whisper import modify_model

model = whisper.load_model('medium')
modify_model(model)
# modified model should run just like the regular model but with additional hyperparameters and extra data in results
results = model.transcribe('test.wav', language="pt")
stab_segments = results['segments']
first_segment_word_timestamps = stab_segments[0]['whole_word_timestamps']

# or to get token timestamps that adhere more to the top prediction
from stable_whisper import stabilize_timestamps
stab_segments = stabilize_timestamps(results, top_focus=True)

# word-level 
from stable_whisper import results_to_word_srt
# after you get results from modified model
# this treats a word timestamp as end time of the word
# and combines words if their timestamps overlap
results_to_word_srt(results, 'audio.ass')  #combine_compound=True if compound words are separate

How should I do to get a correct .ass file?

Thank you very much in advance. I await your response!

NameError: name 'stab_segments' is not defined

Hello, I tried to use your script and I have this error; whisper works well.

import whisper
from stable_whisper import modify_model

model = whisper.load_model('base')
modify_model(model)
results = model.transcribe('G:\ml3470-720p(1)_1_A_extracted.aac')
Detected language: english
esults['segments']
Traceback (most recent call last):
File "", line 1, in
NameError: name 'esults' is not defined
first_segment_word_timestamps = stab_segments[0]['whole_word_timestamps']
Traceback (most recent call last):
File "", line 1, in
NameError: name 'stab_segments' is not defined

from stable_whisper import stabilize_timestamps
stab_segments = stabilize_timestamps(results, top_focus=True)

Best-practice subtitle typesetting

Hey,

I love the recent improvements to the codebase. One thing that's been nagging me is how the resulting transcriptions are somewhat unpredictable. They're more predictable with stable-ts in use, but the underlying transcribe model might still mash conversations into a "speech-style" transcription.

I'd like to propose a new flag/mode/method for stable-ts: best_practice_subtitles

That'd mean:

  • "concatenate" words into sentences with <200ms timing difference
  • split timestamp differences >400ms into separate SRT entries
  • break lines longer than 43 characters (without end-of-sentence punctuation), adding a line-break (<br />)
  • a maximum of two lines per SRT entry
  • move overflow-words to next SRT entry
  • break lines after punctuation

This would be the basis for a simple "industry-standard"-like subtitle typesetting, which would be absolutely amazing to have.

If I can find the time I'll try and implement this myself, but you're much more experienced with whisper and the token setup than me.

This looks neat but I'm not groking how to use it

So your install instructions work. However, I'm not understanding how I can use it. For Whisper it's an easy 'whisper file_name.mp4 --model medium' or what have you.

Do I need to make a script with your inputs and then pass the file name as a parameter when I call your script from the terminal?

add progressbar

add progress bar or add (optional) beep sound notification when process finished

Install via Pip

Hello,

What are your opinions on making this able to be installed as a Pip package either via PyPi or just from the Git repository? It would make tracking the version and using it inside of other programs a lot easier.

I could PR the setup.py file to make it installable via Pip if you'd like, but you would have to (optionally) set it up with Pypi.

How can I enable the multithreading to speed up?

Hello, thanks for sharing,
I am running the script like the info says:

Is there any way to enable more cores/threads to make it faster?

Also, is that code fine or could it be improved?

import whisper
from stable_whisper import modify_model

model = whisper.load_model('medium')
modify_model(model)
# modified model should run just like the regular model but with additional hyperparameters and extra data in results
results = model.transcribe('myAudio.wav')
stab_segments = results['segments']
first_segment_word_timestamps = stab_segments[0]['whole_word_timestamps']

# or to get token timestamps that adhere more to the top prediction
from stable_whisper import stabilize_timestamps
stab_segments = stabilize_timestamps(results, top_focus=True)

# word-level 
from stable_whisper import results_to_word_srt
# after you get results from modified model
# this treats a word timestamp as end time of the word
# and combines words if their timestamps overlap
results_to_word_srt(results, 'myAudio.srt')  #combine_compound=True if compound words are separate

Subtitles video

Hey!

Just wondering, how did you turn the .ass file into the mp4 where each word gets highlighted as it is spoken? like in the readme examples?

Thought on using pywhisper

Hi, someone packaged whisper on pypi. The code is exactly the same except he uses moviepy instead of ffmpeg-python which removes the dependency on ffmpeg. I think using it would benefit the project by making it much easier to install (e.g. in poetry)

Uninformative ffmpeg error for ultra-short waveforms

Hi, I encountered this error when transcribing an audio file:

Failed to load audio in waveform: ffmpeg version 4.2.7-0ubuntu0.1 Copyright (c) 2000-2022 the FFmpeg developers
  built with gcc 9 (Ubuntu 9.4.0-1ubuntu1~20.04.1)
  configuration: --prefix=/usr --extra-version=0ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Guessed Channel Layout for Input Stream #0.0 : mono
Input #0, s16le, from 'pipe:':
  Duration: N/A, bitrate: 705 kb/s
    Stream #0:0: Audio: pcm_s16le, 44100 Hz, mono, s16, 705 kb/s
[showwavespic @ 0x5577cb81ee00] Unable to parse option value "0x100" as image size
    Last message repeated 1 times
[showwavespic @ 0x5577cb81ee00] Error setting option s to value 0x100.
[Parsed_showwavespic_3 @ 0x5577cb81ed00] Error applying options to the filter.
[AVFilterGraph @ 0x5577cb81a5c0] Error initializing filter 'showwavespic' with args 's=0x100'
Error initializing complex filters.
Invalid argument

It seems like ffmpeg is trying to load a waveform with width 0. I think this could be due to rounding in transcribe_word_level:

wfh, wfw = 100, int(mel.shape[-1] * ts_scale)
wf = load_audio_waveform_img(audio_for_mask or audio, wfh, wfw, ignore_shift=ignore_shift)

Here, wfw is rounded to 0. Perhaps it would be nice to add a check for this and throw a more informative error?

Punctuation marks have non-zero duration

I have encountered punctuation symbols in Japanese that sometimes get assigned durations of multiple seconds, squashing the timestamps of other words in the segment. I'm not sure if it's just borrowing time from the immediately preceding word or whether all the other words are being proportionately squashed. Note that this observation is based on the large model since punctuation doesn't show up on the smaller models in Japanese.

TypeError: topk(): argument 'k' (position 2) must be int, not NoneType

With every audio file I try to transcribe this is what I get:

Traceback (most recent call last):
  File "w.py", line 9, in <module>
    print(whisper.transcribe(model = model, audio = "test.mp4"))
  File "/usr/local/lib/python3.8/dist-packages/whisper/transcribe.py", line 181, in transcribe
    result: DecodingResult = decode_with_fallback(segment)
  File "/usr/local/lib/python3.8/dist-packages/whisper/transcribe.py", line 117, in decode_with_fallback
    decode_result = model.decode(segment, options)
  File "/home/lsowa/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/lsowa/stable_whisper.py", line 1471, in decode_word_level
    result, ts = DecodingTaskWordLevel(model, options,
  File "/home/lsowa/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/lsowa/stable_whisper.py", line 1383, in run
    tokens, sum_logprobs, no_speech_probs = self._main_loop(audio_features, tokens)
  File "/home/lsowa/stable_whisper.py", line 1343, in _main_loop
    ts = _ts_topk(ts_logits, k=self.ts_num, prev_ts=self.decoder.ts)
  File "/home/lsowa/stable_whisper.py", line 1135, in _ts_topk
    temp_ts = torch.stack(torch.topk(ts_logits, k, dim=-1), 0).unsqueeze(-2)
TypeError: topk(): argument 'k' (position 2) must be int, not NoneType

Any help would be highly appreciated!

whole_word_timestamps not working as expected when using mono-lingual model (model_name='*.en')

When using a mono-lingual whisper model, some subword tokens are not combined into a single word. For example, instead of getting the word I'm, we get two separate words: I and 'm.

code to demonstrate the issue:

from pprint import pformat

import whisper
import stable_whisper

def print_whole_word_timestamps(audio_fp: str, whisper_model_name: str, language: str = None):
  whisper_model = whisper.load_model(name=whisper_model_name)
  stable_whisper.modify_model(whisper_model)
  transcription = whisper_model.transcribe(audio_fp, language=language, verbose=False)
  segments = transcription['segments']

  print(f'whole_word_timestamps:')
  for ix, s in enumerate(segments):
    print(f'ix={ix}'.center(50, '-'))
    print(f'{pformat(s["whole_word_timestamps"])}')

When using a multi-lingual model, all works fine (e.g. the words don't and we're are present in the results):

print_whole_word_timestamps(audio_fp=fp, whisper_model_name='base', language='en')

output:

whole_word_timestamps:
-----------------------ix=0-----------------------
[{'timestamp': 0.8399999737739563, 'word': ' I'},
 {'timestamp': 0.929999977350235, 'word': " don't"},
 {'timestamp': 0.9399999976158142, 'word': ' find'},
 {'timestamp': 1.0, 'word': ' how'},
 {'timestamp': 1.059999942779541, 'word': ' are'},
 {'timestamp': 1.959999918937683, 'word': ' you.'}]
-----------------------ix=1-----------------------
[{'timestamp': 2.5, 'word': ' And'},
 {'timestamp': 2.75, 'word': ' when'},
 {'timestamp': 2.950000047683716, 'word': " we're"},
 {'timestamp': 3.0, 'word': ' doing'},
 {'timestamp': 3.0, 'word': ' this,'},
 {'timestamp': 3.0, 'word': ' so'},
 {'timestamp': 3.75, 'word': ' we...'}]

but when we use mono-lingual model (model_name = '*.en') subword tokens are not combined into single words (e.g. tokens I, 'm and we, 're are not combined into single words):

print_whole_word_timestamps(audio_fp=fp, whisper_model_name='base.en', language='en')

output:

whole_word_timestamps:
-----------------------ix=0-----------------------
[{'timestamp': 0.42000000178813934, 'word': ' I'},
 {'timestamp': 0.9099999964237213, 'word': "'m"},
 {'timestamp': 1.2699999511241913, 'word': ' fine'},
 {'timestamp': 1.2699999511241913, 'word': ','},
 {'timestamp': 1.35999995470047, 'word': ' how'},
 {'timestamp': 1.509999930858612, 'word': ' are'},
 {'timestamp': 1.5399999618530273, 'word': ' you'},
 {'timestamp': 1.6699999570846558, 'word': '?'}]
-----------------------ix=1-----------------------
[{'timestamp': 3.0, 'word': ' And'},
 {'timestamp': 4.0, 'word': ' when'},
 {'timestamp': 4.0, 'word': ' we'},
 {'timestamp': 4.0, 'word': "'re"},
 {'timestamp': 4.0, 'word': ' doing'},
 {'timestamp': 4.0, 'word': ' this'},
 {'timestamp': 4.0, 'word': ','},
 {'timestamp': 4.4, 'word': ' so'},
 {'timestamp': 4.4, 'word': ' we'}]

How are segments created?

Hi! How are the time segments created in stab_segments = results['segments']? Is it done just by measuring the silence threshold (eg here)?

A method to force language

Is there a way to force language? For some reason, it keeps thinking some of my videos are Indonesian, even though they are English, so they all get auto-translated. I tried using the .en model but it throws an exception relating to not being able to perform lang ID. Is there any way to force language instead of relying on the auto-detection?

doubt and improvement suggestion in stable-ts

Hello, I am using stable-ts, and it is helping me a lot!

However, I would like to know if you have any idea if there is any tool or script that can do the same as whisper.cpp does? It can create "karaoke-style subtitles", but I have not yet been able to find out what the developer used to make it work. Could you shed any light on this? Or do you intend to create something that does this?

199337465-dbee4b5e-9aeb-48a3-b1c6-323ac4db5b2c.mp4

Import stable_whisper in setup.py

Hi I am really excited about stable-ts and would like to use it in one of my projects. However, when I install it in a new virtual environment via pip install stable-ts, it throws an error that it does not find certain modules such as numpy. Here is the error message:

Collecting stable-ts==1.0.1
  Downloading stable-ts-1.0.1.tar.gz (22 kB)
  Preparing metadata (setup.py): started
  Preparing metadata (setup.py): finished with status 'error'
  error: subprocess-exited-with-error
  
  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [10 lines of output]
      Traceback (most recent call last):
        File "<string>", line 36, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "/tmp/pip-install-8xk2cxxf/stable-ts_c0021f08c60[54](https://github.com/mexca/mexca/actions/runs/3592887840/jobs/6049146494#step:6:55)23eade0d346669fca0a/setup.py", line 2, in <module>
          import stable_whisper
        File "/tmp/pip-install-8xk2cxxf/stable-ts_c0021f08c[60](https://github.com/mexca/mexca/actions/runs/3592887840/jobs/6049146494#step:6:61)5423eade0d34[66](https://github.com/mexca/mexca/actions/runs/3592887840/jobs/6049146494#step:6:67)[69](https://github.com/mexca/mexca/actions/runs/3592887840/jobs/6049146494#step:6:70)fca0a/stable_whisper/__init__.py", line 1, in <module>
          from .stabilization import *
        File "/tmp/pip-install-8xk2cxxf/stable-ts_c0021f08c605423eade0d346669fca0a/stable_whisper/stabilization.py", line 5, in <module>
          import numpy as np
      ModuleNotFoundError: No module named 'numpy'
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

I looked into setup.py and saw that it already imports stable_whisper before running setup(), which I think leads to the issue. Is it necessary to import the package in this file already?

Guidance running script

Hello, very interested in the project, I would like to run the script. Supposedly there are some parameters, but I don't know how to add them.
Running the script without parameters gives blank output.
I also tried running the blocks of code as files, the first downloads model files and then errors at line 7: list indices must be integers or slices, not str
How should one go about running this? I'm using windows

condition_on_previous_text option missing

Hey,

The signature of the modified models' transcribe method seems to change with stable-ts, as it won't allow passing through the Whisper option condition_on_previous_text.

Thanks

Out of Memory Errors with ~13GB of ram free.

A 19 hour file around 1GB in size results in the process being killed with an OOM error. I'm running with 13GB available.

It happens when I run with this command. It works fine for a smaller input mp3 & whisper and whisperX both manage to run this without OOM errors.
stable-ts "$FOLDER/audio.mp3" --language Japanese --output_dir "$FOLDER/" --model large-v2 -o "$FOLDER/captions.ass"

Are there any fixes or workarounds available? I'm guessing I could use a less accurate model (though I was hoping not to).

Update: I also tried it with 20GB available & --model medium set. It resulted in the same thing

Python 3.8 syntax incompatible with Python 3.7

Hi, it seems like the package is using syntax that is only available in Python 3.8 while it specifies in the setup that it can be used with Python 3.7. This leads to this error:

/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/stable_whisper/__init__.py:3: in <module>
    from .whisper_word_level import *
E     File "/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/stable_whisper/whisper_word_level.py", line 827
E       if (increment := args.pop("temperature_increment_on_fallback")) is not None:
E                     ^
E   SyntaxError: invalid syntax

Words being split into multiple words

First of all, thank you @jianfch for this helpful library. Upon trying it however I discovered that some words get split into multiple words when generating word-level timestamps. Here is an example of the output that I get:

[{'id': 0, 'seek': 0, 'offset': 0.0, 'start': 0.0, 'end': 1.2, 'text': " Domino's, we never had pizza,", 'tokens': [50364, 16674, 2982, 311, 11, 321, 1128, 632, 8298, 11, 50424, 50424, 570, 452, 1823, 42544, 848, 43620, 3212, 380, 957, 2418, 561, 13, 50556, 50600, 6962, 322, 13, 31942, 1177, 380, 754, 652, 702, 1065, 8298, 13, 50748, 50748, 634, 445, 9470, 309, 490, 16674, 2982, 311, 293, 19458, 264, 9002, 30, 50900, 50944, 634, 534, 307, 39518, 13, 51016, 51016, 467, 311, 406, 39518, 13, 467, 311, 6631, 13, 51152, 51180, 13893, 278, 9002, 5497, 257, 688, 295, 1460, 13, 51284, 51312, 1033, 13, 1012, 360, 291, 458, 30, 51416, 51416, 286, 643, 281, 7081, 316, 495, 3116, 13, 51504, 51532, 407, 286, 528, 281, 5374, 257, 3116, 1772, 490, 3533, 13, 51712, 51712], 'temperature': 0.0, 'avg_logprob': -0.23187549297626203, 'compression_ratio': 1.5327868852459017, 'no_speech_prob': 0.15484802424907684, 'alt_start_timestamps': [0.0, 0.07999999821186066, 0.03999999910593033, 0.1599999964237213, 0.11999999731779099], 'start_ts_logits': [7.303882598876953, 2.3289690017700195, 2.2197766304016113, 1.9840058088302612, 1.9211819171905518], 'alt_end_timestamps': [1.1999999284744263, 1.2400000095367432, 1.2799999713897705, 1.159999966621399, 1.2599999904632568], 'end_ts_logits': [5.19040584564209, 5.144963264465332, 4.990237236022949, 4.856586456298828, 4.638669013977051], 'unstable_word_timestamps': [{'word': ' Dom', 'token': 16674, 'timestamps': [0.5999999642372131, 2.119999885559082, 27.760000228881836, 3.0, 2.4800000190734863], 'timestamp_logits': [-3.490553379058838, -3.8302605152130127, -3.882472276687622, -3.9555439949035645, -4.009255409240723]}, {'word': 'ino', 'token': 2982, 'timestamps': [5.159999847412109, 6.37999963760376, 13.15999984741211, 2.379999876022339, 8.720000267028809], 'timestamp_logits': [11.187053680419922, 11.046835899353027, 11.006744384765625, 10.84432601928711, 10.70365047454834]}, {'word': "'s", 'token': 311, 'timestamps': [2.740000009536743, 26.779998779296875, 2.7799999713897705, 1.8799999952316284, 2.679999828338623], 'timestamp_logits': [21.730148315429688, 21.360942840576172, 21.2874755859375, 21.12977409362793, 21.0812931060791]}, {'word': ',', 'token': 11, 'timestamps': [0.5, 2.0, 16.0, 15.839999198913574, 1.0], 'timestamp_logits': [15.577890396118164, 15.340520858764648, 15.049356460571289, 15.040714263916016, 14.978059768676758]}, {'word': ' we', 'token': 321, 'timestamps': [1.0, 0.8399999737739563, 0.9599999785423279, 1.0399999618530273, 1.1999999284744263], 'timestamp_logits': [26.83656883239746, 26.177303314208984, 25.96172523498535, 25.372634887695312, 25.279804229736328]}, {'word': ' never', 'token': 1128, 'timestamps': [18.34000015258789, 18.0, 18.279998779296875, 17.03999900817871, 17.959999084472656], 'timestamp_logits': [31.28858757019043, 31.120506286621094, 30.958106994628906, 30.93805694580078, 30.930950164794922]}, {'word': ' had', 'token': 632, 'timestamps': [3.919999837875366, 4.119999885559082, 18.15999984741211, 4.0, 18.0], 'timestamp_logits': [25.406295776367188, 24.781343460083008, 24.699111938476562, 24.675228118896484, 24.64055633544922]}, {'word': ' pizza', 'token': 8298, 'timestamps': [4.0, 1.0, 3.43999981880188, 3.5, 3.9800000190734863], 'timestamp_logits': [19.01510238647461, 18.888790130615234, 18.33662223815918, 18.25796127319336, 18.0939884185791]}, {'word': ',', 'token': 11, 'timestamps': [1.2799999713897705, 1.2400000095367432, 1.1999999284744263, 1.2599999904632568, 1.159999966621399], 'timestamp_logits': [7.029425144195557, 
7.0024824142456055, 6.978431701660156, 6.465051651000977, 6.4595489501953125]}], 'anchor_point': False, 'word_timestamps': [{'word': ' Dom', 'token': 16674, 'timestamp': 0.5999999642372131}, {'word': 'ino', 'token': 2982, 'timestamp': 0.5999999642372131}, {'word': "'s", 'token': 311, 'timestamp': 0.5999999642372131}, {'word': ',', 'token': 11, 'timestamp': 1.0}, {'word': ' we', 'token': 321, 'timestamp': 1.0399999618530273}, {'word': ' never', 'token': 1128, 'timestamp': 1.2}, {'word': ' had', 'token': 632, 'timestamp': 1.2}, {'word': ' pizza', 'token': 8298, 'timestamp': 1.2}, {'word': ',', 'token': 11, 'timestamp': 1.2}], 'whole_word_timestamps': [{'word': " Domino's,", 'timestamp': 0.75}, {'word': ' we', 'timestamp': 0.9999999701976776}, {'word': ' never', 'timestamp': 1.0199999809265137}, {'word': ' had', 'timestamp': 1.0199999809265137}, {'word': ' pizza,', 'timestamp': 1.1799999475479126}]},

As you can see the word "Domino's" gets split into "Dom", "ino" and "'s" which I assume is not the expected outcome? Any help would be greatly appreciated. Thanks

Sort words into character range

Ability to generate subtitles and fit a certain range of characters in a timestamp. Word level is too short, while sentence level is too long. So, for example, define the max characters for one timestamp and try to fit words into that range.

Like this. Maybe there's already a way to cut sentences into shorter phrases?
