Comments (29)
I am currently working on a robust way to do this. There are a few ways to do it; you can use pyannote-audio for diarization:
https://github.com/pyannote/pyannote-audio
Either you merge the outputs, assigning words to speakers by looking up the diarization timestamps, or you feed each diarization segment to whisper independently.
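A minimal sketch of the second option: slice the audio per diarization segment, then transcribe each slice on its own. Everything here is illustrative; `transcribe` is a hypothetical stand-in for a real whisper call, and 16 kHz mono audio is an assumption.

```python
# Sketch: feed each diarization segment to whisper independently.
# `transcribe` is a hypothetical placeholder for a real whisper call;
# 16 kHz mono audio is assumed.

SAMPLE_RATE = 16000

def transcribe(samples):
    # Placeholder: a real implementation would call whisper here.
    return "<text>"

def transcribe_per_segment(audio, diarization):
    """diarization: list of (start_sec, end_sec, speaker) tuples."""
    results = []
    for start, end, speaker in diarization:
        chunk = audio[int(start * SAMPLE_RATE):int(end * SAMPLE_RATE)]
        results.append({"speaker": speaker, "start": start, "end": end,
                        "text": transcribe(chunk)})
    return results

audio = [0.0] * SAMPLE_RATE * 10  # 10 s of dummy audio
segments = [(0.5, 2.0, "SPEAKER_00"), (2.5, 4.0, "SPEAKER_01")]
print(transcribe_per_segment(audio, segments))
```

The upside of this variant is that each transcript chunk already carries its speaker label; the downside is that whisper loses cross-segment context.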
from whisperx.
@holynuts first attempt at including diarization in the recent commit d395c21
from whisperx.
@m-bain I made a minimal example of my idea using my fork of this repo. It saves the words + timestamps using pickle and modifies them to fit the NeMo format.
It runs well locally; if you face problems with tcmalloc, use Colab:
https://colab.research.google.com/drive/1fhjGIr_S_vERE9_F1UL033g5shVD-q5K
from whisperx.
You could run this https://huggingface.co/spaces/dwarkesh/whisper-speaker-recognition (see app.py under Files and versions) and merge both outputs to get the timestamps.
from whisperx.
I'm very much looking forward to this addition. Thanks for all your work.
from whisperx.
@holynuts first attempt at including diarization in the recent commit d395c21
The README lists --diarization but the code uses --diarize
from whisperx.
from whisperx.
Hey folks!
First of all, thank you for the great work.
About the diarization, is there any example of the python implementation? I only see the command-line example, and I would like to test it with python.
Thanks!
from whisperx.
Nvidia NeMo has a tutorial that does speaker diarization and ASR: it produces tokens with timestamps, which are then grouped according to the RTTM timestamps. NeMo's diarization timestamps are more accurate than pyannote's, but its ASR is awful compared to whisper. I'm currently experimenting with how to merge aligned word-level transcriptions from whisperX with diarization from NeMo
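The grouping step described above can be sketched with the standard library alone. The field layout parsed below (type, file, channel, onset, duration, ..., speaker) follows the standard RTTM format that NeMo's diarizer writes; the word dicts are a hypothetical stand-in for whisperX aligned output.

```python
# Sketch: parse RTTM diarization output and group word-level tokens by
# the diarization segments. Pure stdlib; the word dicts mimic whisperX
# aligned output ({"word", "start", "end"}) and are dummy data.

def parse_rttm(lines):
    """Return (start, end, speaker) tuples from standard RTTM lines."""
    segments = []
    for line in lines:
        parts = line.split()
        if not parts or parts[0] != "SPEAKER":
            continue
        onset, duration, speaker = float(parts[3]), float(parts[4]), parts[7]
        segments.append((onset, onset + duration, speaker))
    return segments

def group_words(words, segments):
    """Attach a speaker label to each word whose midpoint falls in a segment."""
    out = []
    for w in words:
        mid = (w["start"] + w["end"]) / 2
        speaker = next((s for a, b, s in segments if a <= mid <= b), "UNKNOWN")
        out.append({**w, "speaker": speaker})
    return out

rttm = ["SPEAKER file 1 0.00 2.50 <NA> <NA> SPEAKER_00 <NA> <NA>",
        "SPEAKER file 1 2.50 1.50 <NA> <NA> SPEAKER_01 <NA> <NA>"]
words = [{"word": "hello", "start": 0.3, "end": 0.6},
         {"word": "there", "start": 2.8, "end": 3.1}]
print(group_words(words, parse_rttm(rttm)))
```

Using the word midpoint rather than its start makes the lookup a little more tolerant of small alignment drift at segment boundaries.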
from whisperx.
@MahmoudAshraf97 ah yes, I saw this tutorial -- I didn't know their diarization is better! I will test it on my data -- I thought pyannote was the current best. Thank you for letting me know.
from whisperx.
What do you think about using speech/source separation models to produce an audio track for each speaker. Then use whisperx to transcribe the result with precise timestamps. Finally put all the transcriptions together and sort them according to timestamp.
I understand that this approach would be an alternative to using pyannote or any other package to identify the speakers.
I have tried to implement this idea with some models from the espnet2 library without any satisfactory result. Do you think it is feasible?
As pointed out here, whisper generally transcribes only one speaker when there is overlap. I think this approach could be a solution to that problem.
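The final merge-and-sort step of that pipeline is straightforward; here is a sketch with dummy per-speaker transcripts (in the real pipeline these would come from a separation model plus whisperx):

```python
# Sketch of the proposed pipeline's last step: given one transcript per
# separated speaker track, interleave them into a single transcript
# ordered by start time. The track contents are dummy data.

def merge_tracks(*tracks):
    merged = [seg for track in tracks for seg in track]
    return sorted(merged, key=lambda s: s["start"])

speaker_a = [{"start": 0.0, "end": 1.2, "speaker": "A", "text": "hi"},
             {"start": 3.0, "end": 4.0, "speaker": "A", "text": "sure"}]
speaker_b = [{"start": 1.0, "end": 2.5, "speaker": "B", "text": "hello"}]
print(merge_tracks(speaker_a, speaker_b))
```

Because each track was transcribed in isolation, overlapping speech simply shows up as two segments with overlapping time spans, which is exactly what diarization-based assignment cannot produce.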
from whisperx.
@Fcabla I am not sure if speech separation is needed unless you have a lot of overlapping speakers. I have good results so far using:
Run whisperX and diarization separately. For each word, check whether its timestamp lies within a diarization segment; if so, assign that segment's speaker label to the word.
However, this assumes the word timestamps are 100% accurate, which is not always the case, due to whisperX's current assumption that whisper timestamps are correct within +/- 2 seconds
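A sketch of that lookup, with hypothetical data shapes and a word's start time as the membership test. Words whose timestamps drift outside every diarization segment stay unlabeled, which illustrates the caveat about timestamp accuracy:

```python
# Sketch: run whisperX and diarization separately, then label each word
# by the diarization segment that contains its start timestamp.
# Data shapes are illustrative, not the actual whisperx structures.

def assign_speakers(words, segments):
    """words: [{"word", "start", "end"}]; segments: [(start, end, speaker)]."""
    for w in words:
        w["speaker"] = None  # stays None if no segment contains the word
        for start, end, speaker in segments:
            if start <= w["start"] <= end:
                w["speaker"] = speaker
                break
    return words

segments = [(0.0, 2.0, "SPEAKER_00"), (2.0, 5.0, "SPEAKER_01")]
words = [{"word": "good", "start": 0.4, "end": 0.7},
         {"word": "morning", "start": 2.4, "end": 2.9},
         {"word": "late", "start": 6.0, "end": 6.3}]  # drifted timestamp
print(assign_speakers(words, segments))
```

A production version would need a fallback for the `None` case, e.g. snapping the word to the nearest segment.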
from whisperx.
@m-bain I agree, for most applications it is not necessary. I am currently using a practically identical pipeline with quite good results. However, when there is some overlapping I have encountered the following two problems.
Whisper fails to transcribe several dialogues occurring simultaneously: for a given timestamp there is only one token, whereas diarization models can identify multiple speakers at a given timestamp.
For example, imagine that the diarization model says that SPEAKER_00 speaks from second 5:00 to second 13:46 and SPEAKER_01 speaks from second 7:35 to second 8:02 -- there is overlap. To whom is the token assigned that whisper generates starting at second 7:50?
It would be amazing if whisper were able to transcribe several tokens occurring at the same time, but currently it is not. Hence the idea of using speech separation as an alternative to diarization.
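One pragmatic answer to the assignment question, sketched below: give the token to the speaker whose diarization segment overlaps the token's own time span the most. Maximum-overlap assignment is my own suggestion here, not something whisper or pyannote provides, and the numbers mirror the example above read as seconds.

```python
# Sketch: resolve overlapping diarization segments by assigning a word
# to the speaker with the largest temporal overlap with the word's span.

def best_speaker(word_start, word_end, segments):
    """segments: list of (start, end, speaker) tuples."""
    def overlap(a, b):
        # Length of the intersection of [word_start, word_end] and [a, b].
        return max(0.0, min(word_end, b) - max(word_start, a))
    scored = [(overlap(a, b), spk) for a, b, spk in segments]
    score, speaker = max(scored)
    return speaker if score > 0 else None

segments = [(5.00, 13.46, "SPEAKER_00"), (7.35, 8.02, "SPEAKER_01")]
# A token spanning 7.50-8.50 overlaps SPEAKER_00 for 1.00 s but
# SPEAKER_01 for only 0.52 s, so it goes to SPEAKER_00.
print(best_speaker(7.50, 8.50, segments))
```

This only picks a single winner; it does not recover the second speaker's words, which is why source separation remains attractive for heavy overlap.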
from whisperx.
@Fcabla I see yes overlapping speech is a difficult problem, probably worth using speech separation only for overlapping segments
from whisperx.
Thank you, I am excited about it. I did a quick run and got the error "ModuleNotFoundError: No module named 'whisperx.diarize'"; I will do a few more tests later this week. Thank you again.
I ran another one and faced the errors below. I did not use an English file, which might be the reason; I will try again.
from whisperx.
Failed to align segment: no characters in this segment found in model dictionary, resorting to original...
Performing diarization...
Traceback (most recent call last):
  File "/usr/local/bin/whisperx", line 8, in <module>
    sys.exit(cli())
  File "/usr/local/lib/python3.8/dist-packages/whisperx/transcribe.py", line 469, in cli
    diarize_segments = diarize_pipeline(audio_path, min_speakers=min_speakers, max_speakers=max_speakers)
TypeError: 'NoneType' object is not callable
from whisperx.
I've tested with Russian audio and everything worked extremely well, apart from diarization. Even when the exact number of speakers is provided, the text is often attributed to the wrong speaker. Let me know if I can support work on diarization.
from whisperx.
Also, how does one add the Hugging Face token in the python implementation?
Cheers
from whisperx.
Hey folks! First of all, thank you for the great work. about the diarization, is there any example of the python implementation? I just see the cmd line example, and I would like to test it with python.
Thanks!
Hey! You can refer to the transcribe.py file in the whisperx git repo if you want to create your own python script for diarization.
from whisperx.
Also, how does one add the Hugging Face token in the python implementation?
Cheers
you can add it in this way:

    from pyannote.audio import Pipeline

    diarize_pipeline = Pipeline.from_pretrained(
        "pyannote/speaker-diarization@2.1",
        use_auth_token=<your_hf_token>)

An hf token can be obtained from: https://huggingface.co/settings/tokens
from whisperx.
I tried the diarization along with the VAD filter (which gives better results). Thanks @m-bain for adding this.
Attaching the generated .ass here.
One issue I can see is that although there were 2 speakers in the audio file, the model identified 4 speakers. The audio was downloaded from the https://huggingface.co/spaces/ml6team/Speaker-Diarization page
hf copy.wav.ass.txt
from whisperx.
@m-bain I made a minimal example of my idea using my fork of this repo, it saves the words+timestamps using pickle and modifies it to fit NeMo format. it runs well locally if you face problems with tcmalloc using colab
https://colab.research.google.com/drive/1lb-mh2TP4iPb5AHSSYZXySxVcyJw4LZA
Thank you for your script!
I've had trouble running NeMo locally, and really want to investigate NeMo as an alternative to Pyannote. So I wanted to try your Colab, but I also couldn't run it on Colab Pro. NeMo installs successfully but chokes when it tries to import the method; it can't find it. Have you seen any similar issues? Is this notebook still functioning? Thanks!!
from whisperx.
I have exactly this problem, the error when I try to import. Could someone fix it? Thanks
from whisperx.
@ubanning Try this updated notebook, it's working as expected
https://colab.research.google.com/drive/1fhjGIr_S_vERE9_F1UL033g5shVD-q5K
from whisperx.
@MahmoudAshraf97 Hello, thanks.
I was the one who opened a discussion in your repository about limiting the size of subtitle lines and the problem when there are more than 2 speakers: MahmoudAshraf97/whisper-diarization#12
🙂
from whisperx.
Fails at #Reading timestamps <> Speaker Labels mapping
NameError                                 Traceback (most recent call last)
<ipython-input> in <module>
     12         speaker_ts.append([s, e, int(line_list[11].split(" ")[-1])])
     13
---> 14 wsm = get_words_speaker_mapping(result_aligned["word_segments"], speaker_ts, "start")

NameError: name 'result_aligned' is not defined
from whisperx.
result_aligned is defined in the whisperx cell; make sure that it ran successfully.
from whisperx.
Thanks, it was loaded, so I am not sure what was going on, but the glitch is now resolved. However, no matter what I throw at it, even files very easily handled by Pyannote, it always shows one speaker. I can't believe NeMo is this bad; something else has to be going on.
from whisperx.
You might find this better than pyannote on your data:
https://github.com/JaesungHuh/SimpleDiarization
But it depends, and it ought to be constrained to whisperx sentences; see Appendix Sec. A (page 13) of https://www.robots.ox.ac.uk/~vgg/publications/2023/Han23/han23.pdf
from whisperx.