Comments (8)
This looks more or less correct! The benchmarks we ran were from a bunch of YouTube videos (I can give you the URLs), and transcription time is somewhat dependent on the audio file. This slower transcription time could be because Whisper is getting caught in a hallucination in one of the batches, causing it to generate until it hits the max length (448 tokens).
You could check whether the text has repetitions, or try instantiating the pipeline with a lower max length (we set it to 128 and got complete transcriptions):
from whisper_jax import FlaxWhisperPipline
import jax.numpy as jnp

# instantiate the pipeline in float16 with a lower max generation length
pipeline = FlaxWhisperPipline("openai/whisper-large-v2", dtype=jnp.float16, batch_size=32, max_length=128)
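A quick way to check for the repetitions mentioned above is to scan the transcript for repeated n-grams. A minimal sketch (the n-gram size and threshold are arbitrary choices, not anything from whisper-jax):

```python
from collections import Counter

def has_repetition(text, n=5, threshold=3):
    """Return True if any n-gram of words occurs at least `threshold` times."""
    words = text.split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    counts = Counter(ngrams)
    return any(c >= threshold for c in counts.values())

# a hallucinating model often loops on the same phrase
print(has_repetition("thank you for watching " * 10))                 # True
print(has_repetition("a normal varied sentence with no loops"))       # False
```

If this flags a transcript, lowering max_length as above usually cuts the hallucinated tail rather than real speech.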
from whisper-jax.
Correct!
Reproduced the hallucination with this audio file on Hugging Face.
It's impressively fast, but 16 GB of memory doesn't seem to be enough for jax = FlaxWhisperPipline("openai/whisper-large-v2", dtype=jnp.bfloat16, batch_size=16). How much memory does it require to instantiate the pipeline? Based on a very rough observation, the GPU (Tesla T4, 14 GB) memory was filled instantly, then grew slowly until it hit 16 GB, at which point the process was OOM-killed.
Just followed the discussions in #7 and the Transformers issue; it seems we haven't found the cause yet.
Also worth making sure your audio is already at 16 kHz so that we don't resample inside the Flax Whisper pipeline (resampling can be lengthy for long audio files).
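Loading with librosa.load(..., sr=16000) handles this up front. To see what the conversion itself involves, here's a toy sketch using scipy's polyphase resampler on a synthetic 44.1 kHz signal (illustrative only, not the pipeline's internal resampling code):

```python
import numpy as np
from scipy.signal import resample_poly

SRC_RATE, TARGET_RATE = 44100, 16000
duration_s = 2.0
t = np.arange(int(SRC_RATE * duration_s)) / SRC_RATE
audio_44k = np.sin(2 * np.pi * 440.0 * t)  # 2 s of a 440 Hz tone

# polyphase resampling: up by 160, down by 441 (16000/44100 reduced)
audio_16k = resample_poly(audio_44k, 160, 441)
print(len(audio_16k))  # 32000 samples (2 s at 16 kHz)
```

This filtering work scales with signal length, which is why skipping it matters for hour-long audio.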
The absolute transcription time is somewhat dependent on the audio sample: since it's proportional to the number of tokens generated, it depends on speaking rate, propensity to hallucinate, speech-to-silence ratio, etc. Since what we really care about is the relative time between systems (rather than the absolute times), it would be good to benchmark the same audio file with OpenAI's Whisper and Transformers' Whisper on GPU to see what we're aiming for.
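A minimal timing harness for that kind of relative comparison (transcribe_fns is a hypothetical mapping from system name to a transcription callable; the warmup pass matters for JAX, where the first call pays JIT compilation):

```python
import time

def benchmark(transcribe_fns, audio_path, warmup=1, runs=3):
    """Time each transcription callable on the same audio file.

    Runs `warmup` untimed passes first (so JIT compilation or cache
    warming isn't counted), then reports the mean of `runs` timed passes.
    """
    results = {}
    for name, fn in transcribe_fns.items():
        for _ in range(warmup):
            fn(audio_path)
        times = []
        for _ in range(runs):
            start = time.perf_counter()
            fn(audio_path)
            times.append(time.perf_counter() - start)
        results[name] = sum(times) / len(times)
    return results
```

Feeding all systems the same file and dividing the means gives the relative speedup directly.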
One more question to make the comparisons across libraries fair: if I'm reading the codebase correctly, this is doing a greedy search (i.e. beam_size=1). Is that correct?
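For readers comparing decoding settings: greedy search keeps only the single best token at each step, which is exactly beam search with beam_size=1. A toy, library-agnostic sketch over a hand-made log-probability table (nothing here is Whisper-specific):

```python
import math

# log-probabilities for 3 decoding steps over a 3-token vocabulary
steps = [
    [math.log(0.5), math.log(0.4), math.log(0.1)],
    [math.log(0.1), math.log(0.6), math.log(0.3)],
    [math.log(0.3), math.log(0.3), math.log(0.4)],
]

def greedy(step_scores):
    # pick the argmax token at every step
    return [max(range(len(s)), key=lambda i: s[i]) for s in step_scores]

def beam_search(step_scores, beam_size):
    beams = [([], 0.0)]  # (token sequence, cumulative log-prob)
    for scores in step_scores:
        candidates = [
            (seq + [tok], lp + scores[tok])
            for seq, lp in beams
            for tok in range(len(scores))
        ]
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_size]
    return beams[0][0]

print(greedy(steps))                      # [0, 1, 2]
print(beam_search(steps, beam_size=1))    # [0, 1, 2] — identical to greedy
```

Wider beams can find higher-probability sequences at extra compute cost, so beam width should match across libraries in any benchmark.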
Thanks for all your help.
Finally, it would be good to have the audio you used for the benchmark. @sanchit-gandhi, can you direct me to the YouTube video?
Hi,
On a CPU-only system (no TPU/GPU), the following deteriorates overall performance: for an audio file under 10 minutes, it consumes almost 25% more time.
import librosa

SAMPLING_RATE = 16000
audio, sr = librosa.load('test_audio.mp3', sr=SAMPLING_RATE)
I guess there are quite a few parameters to tune to achieve good performance, and improper tuning can make things worse 🤔
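One cheap check before tuning anything: if the file is already at 16 kHz, sr=16000 costs nothing; otherwise the load includes a resample, which may explain the 25% overhead. For WAV input the stored rate can be read with the stdlib wave module (MP3s need librosa.get_samplerate or similar); a small self-contained sketch:

```python
import wave

def native_rate(wav_path):
    """Return the sample rate stored in a WAV file's header."""
    with wave.open(wav_path, "rb") as f:
        return f.getframerate()

# write a tiny silent 16 kHz WAV, then read its rate back
with wave.open("tiny.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(16000)
    f.writeframes(b"\x00\x00" * 160)  # 10 ms of silence

print(native_rate("tiny.wav"))  # 16000
```

If the native rate already matches, passing sr=None to librosa.load skips the resample entirely.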