bnsantoso / sub-to-audio
88 stars · 5 watchers · 9 forks · 102 KB

Subtitle to audio: generate audio from any subtitle file using Coqui-ai TTS, synchronizing the audio timing to the subtitle timestamps.

Home Page: https://pypi.org/project/subtoaudio/

License: Mozilla Public License 2.0

Languages: Python 70.66%, Jupyter Notebook 29.34%
Topics: subtitle-conversion, subtitle-to-speech, subtitle-to-voice, text-to-audio, text-to-speech, python, tts, subtitle-to-audio, audio-processing

sub-to-audio's People

Contributors: bnsantoso

sub-to-audio's Issues

support for piper-tts?

Hello, thanks for the latest Bark and Tortoise support!
I've tried Piper TTS and found it pretty easy to install and use. Are there any plans to support it?

Can't get output file

Hello!

I use this code:

from subtoaudio import SubToAudio

sub = SubToAudio(model_name="tts_models/multilingual/multi-dataset/xtts_v2")
subtitle = sub.subtitle("texts/1-1.srt")
sub.convert_to_audio(sub_data=subtitle, output_path="subtitle3.wav",  language="ru")

and get this error:

(virtual) F:\whisper>python tts.py

tts_models/multilingual/multi-dataset/xtts_v2 is already downloaded.
Using model: xtts
ffmpeg version 2023-11-28-git-47e214245b-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
configuration: --enable-gpl --enable-version3 --enable-static --pkg-config=pkgconf --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-dxva2 --enable-d3d11va --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
libavutil 58. 32.100 / 58. 32.100
libavcodec 60. 35.100 / 60. 35.100
libavformat 60. 18.100 / 60. 18.100
libavdevice 60. 4.100 / 60. 4.100
libavfilter 9. 14.100 / 9. 14.100
libswscale 7. 6.100 / 7. 6.100
libswresample 4. 13.100 / 4. 13.100
libpostproc 57. 4.100 / 57. 4.100
Input #0, srt, from 'texts/1-1.srt':
Duration: N/A, bitrate: N/A
Stream #0:0: Subtitle: subrip
Output #0, srt, to 'C:\Users\idres\AppData\Local\Temp\tmpnyfz59xi.srt':
Metadata:
encoder : Lavf60.18.100
Stream #0:0: Subtitle: subrip
Metadata:
encoder : Lavc60.35.100 srt
Stream mapping:
Stream #0:0 -> #0:0 (subrip (srt) -> subrip (srt))
Press [q] to stop, [?] for help
[out#0/srt @ 000001c14f583480] video:0kB audio:0kB subtitle:1kB other streams:0kB global headers:0kB muxing overhead: 28.727885%
size= 1kB time=00:00:36.96 bitrate= 0.3kbits/s speed=5.14e+04x
Temporary folder: C:\Users\idres\AppData\Local\Temp\tmppqcheu9m
Text splitted to sentences.
['Привет всем, сегодня мы рассмотрим Warhammer 40000 Rogue Traider']
Traceback (most recent call last):
  File "F:\whisper\tts.py", line 59, in <module>
    sub.convert_to_audio(sub_data=subtitle, output_path="subtitle3.wav", language="ru")
  File "F:\whisper\virtual\lib\site-packages\subtoaudio\subtoaudio.py", line 120, in convert_to_audio
    tts_method(f"{entry_data['text']}",file_path=audio_path,**convert_param,**kwargs)
  File "F:\whisper\virtual\lib\site-packages\TTS\api.py", line 432, in tts_to_file
    wav = self.tts(
  File "F:\whisper\virtual\lib\site-packages\TTS\api.py", line 364, in tts
    wav = self.synthesizer.tts(
  File "F:\whisper\virtual\lib\site-packages\TTS\utils\synthesizer.py", line 383, in tts
    outputs = self.tts_model.synthesize(
  File "F:\whisper\virtual\lib\site-packages\TTS\tts\models\xtts.py", line 397, in synthesize
    return self.inference_with_config(text, config, ref_audio_path=speaker_wav, language=language, **kwargs)
  File "F:\whisper\virtual\lib\site-packages\TTS\tts\models\xtts.py", line 419, in inference_with_config
    return self.full_inference(text, ref_audio_path, language, **settings)
  File "F:\whisper\virtual\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "F:\whisper\virtual\lib\site-packages\TTS\tts\models\xtts.py", line 480, in full_inference
    (gpt_cond_latent, speaker_embedding) = self.get_conditioning_latents(
  File "F:\whisper\virtual\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "F:\whisper\virtual\lib\site-packages\TTS\tts\models\xtts.py", line 356, in get_conditioning_latents
    audio = load_audio(file_path, load_sr)
  File "F:\whisper\virtual\lib\site-packages\TTS\tts\models\xtts.py", line 72, in load_audio
    audio, lsr = torchaudio.load(audiopath)
  File "F:\whisper\virtual\lib\site-packages\torchaudio\_backend\utils.py", line 204, in load
    return backend.load(uri, frame_offset, num_frames, normalize, channels_first, format, buffer_size)
  File "F:\whisper\virtual\lib\site-packages\torchaudio\_backend\soundfile.py", line 27, in load
    return soundfile_backend.load(uri, frame_offset, num_frames, normalize, channels_first, format)
  File "F:\whisper\virtual\lib\site-packages\torchaudio\_backend\soundfile_backend.py", line 221, in load
    with soundfile.SoundFile(filepath, "r") as file_:
  File "F:\whisper\virtual\lib\site-packages\soundfile.py", line 658, in __init__
    self._file = self._open(file, mode_int, closefd)
  File "F:\whisper\virtual\lib\site-packages\soundfile.py", line 1212, in _open
    raise TypeError("Invalid file: {0!r}".format(self.name))
TypeError: Invalid file: None
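
Editor's note: the traceback shows get_conditioning_latents receiving file_path=None. XTTS v2 clones a voice from a reference clip, so it needs a speaker reference that the call above never supplies; convert_to_audio does expose a speaker_wav parameter (its signature appears in the voice-conversion issue below). A minimal sketch of the likely fix, where "speaker.wav" is a hypothetical path to any short reference recording:

from subtoaudio import SubToAudio

sub = SubToAudio(model_name="tts_models/multilingual/multi-dataset/xtts_v2")
subtitle = sub.subtitle("texts/1-1.srt")
# XTTS v2 needs a reference voice; without one, ref_audio_path stays None
# and torchaudio.load() fails exactly as in the traceback above.
sub.convert_to_audio(sub_data=subtitle,
                     output_path="subtitle3.wav",
                     language="ru",
                     speaker_wav="speaker.wav")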

I can't make it run

It seems like it should be obvious, but I can't get it to run, and I've been at it for days. I want to transform .srt subtitles into audio, synchronized with the subtitle timing.
Could you help me make it run in Brazilian Portuguese?
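
Editor's note: not an official answer, but a minimal sketch for Brazilian Portuguese, assuming the MMS/fairseq Portuguese checkpoint (ISO 639-3 code "por") is acceptable for the target accent; "legenda.srt" is a hypothetical input file:

from subtoaudio import SubToAudio

# Fairseq/MMS models are selected by ISO 639-3 code; "por" is Portuguese.
sub = SubToAudio(fairseq_language="por")
subtitle = sub.subtitle("legenda.srt")
sub.convert_to_audio(sub_data=subtitle, output_path="legenda.wav")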

voice conversion error ['NoneType' object has no attribute 'float'] on colab

Audio generation fails when using voice conversion through the subtoaudio Coqui wrapper.

logs :

/usr/local/lib/python3.10/dist-packages/subtoaudio/subtoaudio.py in convert_to_audio(self, sub_data, speaker, language, voice_conversion, speaker_wav, voice_dir, output_path, tempo_mode, tempo_speed, tempo_limit, shift_mode, shift_limit, save_temp, speed, emotion, **kwargs)
    120       for entry_data in data:
    121         audio_path = f"{temp_folder}/{entry_data['audio_name']}"
--> 122         self.apitts.tts_with_vc_to_file(f"{entry_data['text']}",file_path=audio_path,**convert_param)
    123 
    124 

/usr/local/lib/python3.10/dist-packages/TTS/api.py in tts_with_vc_to_file(self, text, language, speaker_wav, file_path)
    473                 Output file path. Defaults to "output.wav".
    474         """
--> 475         wav = self.tts_with_vc(text=text, language=language, speaker_wav=speaker_wav)
    476         save_wav(wav=wav, path=file_path, sample_rate=self.voice_converter.vc_config.audio.output_sample_rate)

/usr/local/lib/python3.10/dist-packages/TTS/api.py in tts_with_vc(self, text, language, speaker_wav)
    451         if self.voice_converter is None:
    452             self.load_vc_model_by_name("voice_conversion_models/multilingual/vctk/freevc24")
--> 453         wav = self.voice_converter.voice_conversion(source_wav=fp.name, target_wav=speaker_wav)
    454         return wav
    455 

/usr/local/lib/python3.10/dist-packages/TTS/utils/synthesizer.py in voice_conversion(self, source_wav, target_wav)
    251 
    252     def voice_conversion(self, source_wav: str, target_wav: str) -> List[int]:
--> 253         output_wav = self.vc_model.voice_conversion(source_wav, target_wav)
    254         return output_wav
    255 

/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py in decorate_context(*args, **kwargs)
    113     def decorate_context(*args, **kwargs):
    114         with ctx_factory():
--> 115             return func(*args, **kwargs)
    116 
    117     return decorate_context

/usr/local/lib/python3.10/dist-packages/TTS/vc/models/freevc.py in voice_conversion(self, src, tgt)
    645         """
    646 
--> 647         wav_tgt = self.load_audio(tgt).cpu().numpy()
    648         wav_tgt, _ = librosa.effects.trim(wav_tgt, top_db=20)
    649 

/usr/local/lib/python3.10/dist-packages/TTS/vc/models/freevc.py in load_audio(self, wav)
    630         if isinstance(wav, list):
    631             wav = torch.from_numpy(np.array(wav)).to(self.device)
--> 632         return wav.float()
    633 
    634     @torch.inference_mode()

AttributeError: 'NoneType' object has no attribute 'float'
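
Editor's note: the failing frame is load_audio(tgt), where tgt is the target_wav forwarded from speaker_wav, so the reference clip for voice conversion was apparently never passed. A hedged sketch of a likely fix ("target_voice.wav" is a hypothetical reference clip, and treating voice_conversion as a simple flag is an assumption about the wrapper's API):

# speaker_wav becomes freevc's target_wav; leaving it out yields the
# 'NoneType' error above.
sub.convert_to_audio(sub_data=subtitle,
                     voice_conversion=True,   # assumption: truthy flag enables VC
                     speaker_wav="target_voice.wav",
                     output_path="output.wav")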

Fairseq doesn't work

Code:

from subtoaudio import SubToAudio

sub = SubToAudio(fairseq_language="ru")
subtitle = sub.subtitle("texts/1-1.srt")
sub.convert_to_audio(sub_data=subtitle) 

(virtual) F:\whisper>python tts.py
Traceback (most recent call last):
  File "F:\whisper\virtual\lib\site-packages\subtoaudio\subtoaudio.py", line 29, in __init__
    self.apitts = TTS(model_name=model_name, progress_bar=progress_bar, **kwargs).to(device)
  File "F:\whisper\virtual\lib\site-packages\TTS\api.py", line 81, in __init__
    self.load_tts_model_by_name(model_name, gpu)
  File "F:\whisper\virtual\lib\site-packages\TTS\api.py", line 195, in load_tts_model_by_name
    model_path, config_path, vocoder_path, vocoder_config_path, model_dir = self.download_model_by_name(
  File "F:\whisper\virtual\lib\site-packages\TTS\api.py", line 149, in download_model_by_name
    model_path, config_path, model_item = self.manager.download_model(model_name)
  File "F:\whisper\virtual\lib\site-packages\TTS\utils\manage.py", line 407, in download_model
    model_item, model_full_name, model, md5sum = self._set_model_item(model_name)
  File "F:\whisper\virtual\lib\site-packages\TTS\utils\manage.py", line 326, in _set_model_item
    model_full_name = f"{model_type}--{lang}--{dataset}--{model}"
UnboundLocalError: local variable 'dataset' referenced before assignment

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "F:\whisper\tts.py", line 55, in <module>
    sub = SubToAudio(fairseq_language="ru")
  File "F:\whisper\virtual\lib\site-packages\subtoaudio\subtoaudio.py", line 31, in __init__
    self.apitts = TTS(model_name, progress_bar=progress_bar, **kwargs).to(device)
  File "F:\whisper\virtual\lib\site-packages\TTS\api.py", line 81, in __init__
    self.load_tts_model_by_name(model_name, gpu)
  File "F:\whisper\virtual\lib\site-packages\TTS\api.py", line 195, in load_tts_model_by_name
    model_path, config_path, vocoder_path, vocoder_config_path, model_dir = self.download_model_by_name(
  File "F:\whisper\virtual\lib\site-packages\TTS\api.py", line 149, in download_model_by_name
    model_path, config_path, model_item = self.manager.download_model(model_name)
  File "F:\whisper\virtual\lib\site-packages\TTS\utils\manage.py", line 407, in download_model
    model_item, model_full_name, model, md5sum = self._set_model_item(model_name)
  File "F:\whisper\virtual\lib\site-packages\TTS\utils\manage.py", line 326, in _set_model_item
    model_full_name = f"{model_type}--{lang}--{dataset}--{model}"
UnboundLocalError: local variable 'dataset' referenced before assignment
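
Editor's note: one likely cause is the language code. The MMS/fairseq checkpoints are indexed by ISO 639-3 codes, so Russian is "rus" rather than the two-letter "ru"; the malformed model name then breaks Coqui's name parsing in _set_model_item. A hedged sketch:

from subtoaudio import SubToAudio

# ISO 639-3 "rus" instead of ISO 639-1 "ru"
sub = SubToAudio(fairseq_language="rus")
subtitle = sub.subtitle("texts/1-1.srt")
sub.convert_to_audio(sub_data=subtitle)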

Include any example

It would be great if you could include an example, such as a YouTube video showing the subtitles and the generated TTS audio working together.

option to speed up the voice?

Hey there, thank you for sharing this fantastic script! I was wondering if you could help me find a way to speed up the voices a bit, maybe to around 1.4 times their current reading speed?

The problem is that my subtitle file spans two hours, but when I read it with the default model and speed, the final WAV comes out three hours long, so it can't be used as a voice-over for the video.

An option to speed up the individual voice clips and then join them (rather than the other way around) would be really helpful. Otherwise, any advice on fine-tuning the process would be highly appreciated!
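
Editor's note: convert_to_audio already exposes tempo controls; its signature appears in the voice-conversion traceback above, and another issue below uses tempo_mode="all" with tempo_speed=1.3. A minimal sketch of a 1.4x speed-up, assuming those parameters behave as their names suggest:

from subtoaudio import SubToAudio

sub = SubToAudio()
subtitle = sub.subtitle("input.srt")
# tempo_mode="all" retimes every clip; tempo_speed sets the factor.
sub.convert_to_audio(sub_data=subtitle,
                     tempo_mode="all",
                     tempo_speed=1.4,
                     output_path="output.wav")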

Error when using fairseq models

Hello, when trying to use the fairseq model for Vietnamese, I encountered this error. I have no problem running the default English model.

Traceback (most recent call last):
  File "subtoaudio_vie.py", line 2, in <module>
    sub = SubToAudio(model_path="G_100000.pth" , config_path="config.json")
  File "C:\Users\707spacestation\.conda\envs\vitstts\lib\site-packages\subtoaudio\subtoaudio.py", line 33, in __init__
    self.apitts = TTS(model_path=model_path,
  File "C:\Users\707spacestation\AppData\Roaming\Python\Python38\site-packages\TTS\api.py", line 294, in __init__
    self.load_tts_model_by_path(
  File "C:\Users\707spacestation\AppData\Roaming\Python\Python38\site-packages\TTS\api.py", line 417, in load_tts_model_by_path
    self.synthesizer = Synthesizer(
  File "C:\Users\707spacestation\AppData\Roaming\Python\Python38\site-packages\TTS\utils\synthesizer.py", line 91, in __init__
    self._load_tts(tts_checkpoint, tts_config_path, use_cuda)
  File "C:\Users\707spacestation\AppData\Roaming\Python\Python38\site-packages\TTS\utils\synthesizer.py", line 181, in _load_tts
    self.tts_config = load_config(tts_config_path)
  File "C:\Users\707spacestation\AppData\Roaming\Python\Python38\site-packages\TTS\config\__init__.py", line 93, in load_config
    model_name = _process_model_name(config_dict)
  File "C:\Users\707spacestation\AppData\Roaming\Python\Python38\site-packages\TTS\config\__init__.py", line 61, in _process_model_name
    model_name = model_name.replace("_generator", "").replace("_discriminator", "")
AttributeError: 'dict' object has no attribute 'replace'

This is my command:

from subtoaudio import SubToAudio
sub = SubToAudio(model_path="G_100000.pth" , config_path="config.json")
subtitle = sub.subtitle("01.vi.srt")
sub.convert_to_audio(data=subtitle, tempo_mode="all", tempo_speed=1.3, lang="vie", output_path="01.vi.wav", save_temp=True,)

Thanks for your help!
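
Editor's note: the crash happens while Coqui parses config.json, since a raw fairseq config is not in Coqui's expected format. A hedged alternative that sidesteps the local files entirely, letting Coqui fetch the MMS Vietnamese checkpoint by its ISO 639-3 code:

from subtoaudio import SubToAudio

# Download the fairseq/MMS Vietnamese model instead of loading
# G_100000.pth + config.json directly.
sub = SubToAudio(fairseq_language="vie")
subtitle = sub.subtitle("01.vi.srt")
sub.convert_to_audio(sub_data=subtitle, tempo_mode="all",
                     tempo_speed=1.3, output_path="01.vi.wav")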

UnicodeDecodeError

Edit: never mind, I found the solution. I had copied config.json into the root directory of the virtual environment; deleting that stray config.json solves it.

Hello, when I tried to use a Vietnamese ("vie") subtitle, this error came up:

UnicodeDecodeError: 'cp932' codec can't decode byte 0x86 in position 253: illegal multibyte sequence


Here is my command

from subtoaudio import SubToAudio
sub = SubToAudio(model_path="G_100000.pth" , config_path="config.json")
subtitle = sub.subtitle("01.vi.srt")
sub.convert_to_audio(data=subtitle, tempo_mode="all", tempo_speed=1.3, lang="vie", output_path="01.vi.wav", save_temp=True,)
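
Editor's note: cp932 in the error is the Windows locale codec, which suggests a file was opened without an explicit encoding (the library's _extract_data_srt uses a bare open(file_path, 'r'), as a traceback further down this page shows). A minimal sketch of the underlying fix, assuming the subtitle file is UTF-8:

# Forcing UTF-8 instead of the locale default avoids cp932 decode errors:
with open(file_path, "r", encoding="utf-8") as file:
    content = file.read()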




permission error?

Hello, I was using your script, but this error keeps appearing:

using fairseq model as default
English is default language

tts_models/eng/fairseq/vits is already downloaded.
Setting up Audio Processor...
| > sample_rate:22050
| > resample:False
| > num_mels:80
| > log_func:np.log10
| > min_level_db:0
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:None
| > fft_size:1024
| > power:None
| > preemphasis:0.0
| > griffin_lim_iters:None
| > signal_norm:None
| > symmetric_norm:None
| > mel_fmin:0
| > mel_fmax:None
| > pitch_fmin:None
| > pitch_fmax:None
| > spec_gain:20.0
| > stft_pad_mode:reflect
| > max_norm:1.0
| > clip_norm:True
| > do_trim_silence:False
| > trim_db:60
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:None
| > base:10
| > hop_length:256
| > win_length:1024
Traceback (most recent call last):
  File "C:\Users\WBstore\Desktop\22.py", line 19, in <module>
    subtitle = sub.subtitle(temp_subtitle_path)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\WBstore\AppData\Local\Programs\Python\Python311\Lib\site-packages\subtoaudio\subtoaudio.py", line 40, in subtitle
    dictionary = self._extract_data_srt(temp_filename)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\WBstore\AppData\Local\Programs\Python\Python311\Lib\site-packages\subtoaudio\subtoaudio.py", line 100, in _extract_data_srt
    with open(file_path, 'r') as file:
         ^^^^^^^^^^^^^^^^^^^^
PermissionError: [Errno 13] Permission denied: 'C:\Users\WBstore\AppData\Local\Temp\tmppumq995a.srt'
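
Editor's note: the denied path is a temp file that subtoaudio appears to create and then reopen. On Windows, tempfile.NamedTemporaryFile keeps the file open while delete=True, and a second open() on the same name is refused. A hedged sketch of the usual workaround pattern (not necessarily how the library is structured internally):

import os
import tempfile

# delete=False lets the handle be closed so the file can be reopened by name.
tmp = tempfile.NamedTemporaryFile(suffix=".srt", delete=False)
tmp.close()
# ... write the extracted subtitle to tmp.name, reopen and read it ...
os.remove(tmp.name)  # manual cleanup once done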

Doesn't work on Windows

File "F:\whisper\virtual\lib\site-packages\subtoaudio\subtoaudio.py", line 45, in subtitle
input_stream = ffmpeg.input(file_path)
AttributeError: module 'ffmpeg' has no attribute 'input'
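
Editor's note: this AttributeError usually means the unrelated "ffmpeg" package from PyPI is installed and shadowing ffmpeg-python, which is the library that actually provides ffmpeg.input. A quick check, with the usual fix shown as comments:

# Typical fix, run in the shell:
#   pip uninstall ffmpeg python-ffmpeg
#   pip install ffmpeg-python
import ffmpeg

print(hasattr(ffmpeg, "input"))  # True once ffmpeg-python is the installed package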

Use CUDA GPU

What code would I need to add to use my GPU instead of the CPU? I'm struggling to figure it out.
Thanks!
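
Editor's note: the library's constructor moves its Coqui TTS object with .to(device), as visible in the Fairseq traceback above, so it likely accepts a device argument; passing "cuda" is an assumption based on that traceback rather than documented API:

import torch
from subtoaudio import SubToAudio

# Hypothetical: pick the GPU when available, fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
sub = SubToAudio(model_name="tts_models/en/ljspeech/tacotron2-DDC",
                 device=device)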

coqui.ai models

Hello, can I use the "tts_models/multilingual/multi-dataset/bark" model? Is it part of coqui.ai?
Could you show me an example of how to use it with your script? I couldn't get the Bark model to work.
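
Editor's note: Bark does ship in the Coqui model zoo under exactly that name. A minimal sketch, assuming the wrapper treats it like any other model name ("input.srt" is a hypothetical file, and Bark is very slow on CPU):

from subtoaudio import SubToAudio

sub = SubToAudio(model_name="tts_models/multilingual/multi-dataset/bark")
subtitle = sub.subtitle("input.srt")
sub.convert_to_audio(sub_data=subtitle, output_path="bark_output.wav")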

exe version?

Could you please provide a Windows .exe version of sub-to-audio in the releases section?

Speed issues

Hello! First off, thank you for the fantastic update.

I've been experimenting with controlling the tempo using both the overflow and all modes. However, I've noticed that when the subtitle is lengthy and the time duration is quite brief, the audio and subtitle don't seem to sync up properly even when overflow mode is used. Do you have any suggestions or solutions to address this issue?
