
Comments (16)

Purfview commented on May 16, 2024
  1. I dunno, you tell me if GPU processing works on Windows 7.
  2. You can download the cuBLAS and cuDNN libs from here: https://github.com/Purfview/whisper-standalone-win/releases/tag/libs

Place the libs in the same folder as the Faster-Whisper executable.
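
A minimal sketch (in Python, since the executable is a bundled Python script) for checking that the libraries really ended up next to the executable; the install path and the DLL name patterns below are assumptions and vary by release:

import ctypes
from pathlib import Path

exe_dir = Path(r"D:\whisper-fast")  # hypothetical install folder, adjust to your setup

# List any cuBLAS/cuDNN DLLs sitting next to the executable.
dlls = sorted(p.name for pattern in ("cublas*.dll", "cudnn*.dll") for p in exe_dir.glob(pattern))
print("Found:", ", ".join(dlls) if dlls else "no cuBLAS/cuDNN DLLs in this folder")

# Optionally try to load each one to surface missing dependencies early.
for name in dlls:
    try:
        ctypes.WinDLL(str(exe_dir / name))
        print(name, "loads OK")
    except OSError as err:
        print(name, "failed to load:", err)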

MMasutin commented on May 16, 2024

Sorry for reopening this. Not sure if I need a new issue. Can anyone help solve this?

Faster-Whisper r125 running on: CPU
"D:\whisper-fast\__main__.py", line 445, in <module>
"D:\whisper-fast\__main__.py", line 355, in cli
"faster_whisper\transcribe.py", line 123, in __init__
RuntimeError: mkl_malloc: failed to allocate memory
[1164] Failed to execute script '__main__' due to unhandled exception!

The files from cuBLAS.and.cuDNN.7z are in the same folder as Whisper. In the NVIDIA Control Panel, only 3D Settings are available; there, 'High-performance NVIDIA processor' is applied to all programs, and among the settings is CUDA GPUs - All. From NVIDIA System Information:
CUDA Cores: 48
Core clock: 475 MHz
Shader clock: 950 MHz
Memory data rate: 1334 MHz
Memory interface: 64-bit
Memory bandwidth: 10.67 GB/s
Total available graphics memory: 2533 MB
Dedicated video memory: 1024 MB DDR3
System video memory: 0 MB
Shared system memory: 1509 MB
NVCUDA.DLL v7.5.15
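
A quick, hedged way to double-check what the driver actually reports for this GPU is to query the CUDA driver API in nvcuda.dll directly; this sketch needs only the NVIDIA driver, not the CUDA toolkit, and is not part of Faster-Whisper:

import ctypes

cuda = ctypes.WinDLL("nvcuda.dll")  # driver API shipped with the NVIDIA driver
if cuda.cuInit(0) != 0:
    raise SystemExit("cuInit failed: driver too old or no usable CUDA device")

count = ctypes.c_int()
cuda.cuDeviceGetCount(ctypes.byref(count))

major, minor = ctypes.c_int(), ctypes.c_int()
name = ctypes.create_string_buffer(100)
for dev in range(count.value):
    cuda.cuDeviceGetName(name, 100, dev)
    cuda.cuDeviceComputeCapability(ctypes.byref(major), ctypes.byref(minor), dev)
    print(f"Device {dev}: {name.value.decode()} (compute capability {major.value}.{minor.value})")

Fermi-era parts like this one report compute capability 2.x, which the CUDA builds used by recent Faster-Whisper releases no longer support, so a low number here usually explains why only CPU mode works.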

Purfview commented on May 16, 2024

Dedicated video memory: 1024 MB DDR3

This looks very low.
What is your GPU, CPU and RAM?

Try model=tiny.

MMasutin commented on May 16, 2024

1) CPU processing worked with the Tiny model on 4 GB RAM / 1 GB VRAM, but GPU processing still won't work even on 8 GB / 2 GB in Windows 7. I wonder if the DLLs in cuBLAS.and.cuDNN.7z are not for Windows 7.
2) A bit off this topic: I need time-coding more than transcription. In the SRT file, every start time immediately follows the previous end time (the end times themselves are correct), i.e. End time 1 = Start time 2, resulting in impossibly long display times with no gaps. Is this due to the model used?
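
On point 2, a small sketch for confirming what the SRT actually contains, i.e. whether every cue starts exactly where the previous one ends (plain regex parsing, no extra libraries; the file name is a placeholder):

import re

# Match "HH:MM:SS,mmm --> HH:MM:SS,mmm" timing lines in an SRT file.
TIME_RE = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3}) --> (\d{2}):(\d{2}):(\d{2}),(\d{3})")

def to_ms(h, m, s, ms):
    return ((int(h) * 60 + int(m)) * 60 + int(s)) * 1000 + int(ms)

cues = []
with open("audio.srt", encoding="utf-8") as f:  # hypothetical output file
    for line in f:
        m = TIME_RE.search(line)
        if m:
            g = m.groups()
            cues.append((to_ms(*g[:4]), to_ms(*g[4:])))

gapless = sum(1 for (_, end), (start, _) in zip(cues, cues[1:]) if start == end)
print(f"{gapless} of {max(len(cues) - 1, 1)} cue boundaries have no gap at all")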

Purfview commented on May 16, 2024
  1. What model is your GPU?
  2. Which version of Standalone Faster-Whisper are you using?
  3. Post the command-line parameters you are using.

MMasutin commented on May 16, 2024

My GPU is a GF610M with 1 GB. I can get details from a GPU reporting utility. I'm studying [compatibility issues](https://docs.nvidia.com/deploy/pdf/CUDA_Compatibility.pdf). My omission: a search for 'Whisper CUDA' showed that you need to add '--device cuda' to enable it. Now I get "Faster-Whisper r125 running on: CUDA", but errors too, the same kind as the 'RuntimeError: mkl_malloc: failed to allocate memory' I got when using the medium model with CPU processing:

"D:\whisper-fast\__main__.py", line 445, in <module>
"D:\whisper-fast\__main__.py", line 355, in cli
"faster_whisper\transcribe.py", line 123, in __init__
RuntimeError: CUDA failed with error initialization error
[3032] Failed to execute script '__main__' due to unhandled exception!
Errors when using --help:
File "D:\whisper-fast\__main__.py", line 445, in <module>
File "D:\whisper-fast\__main__.py", line 277, in cli
File "argparse.py", line 1768, in parse_args
File "argparse.py", line 1800, in parse_known_args
File "argparse.py", line 2006, in _parse_known_args
File "argparse.py", line 1946, in consume_optional
File "argparse.py", line 1874, in take_action
File "argparse.py", line 1044, in __call__
File "argparse.py", line 2494, in print_help
File "argparse.py", line 2500, in _print_message
File "encodings\cp1251.py", line 19, in encode
UnicodeEncodeError: 'charmap' codec can't encode character '\xbf' in position 8821: character maps to <undefined>
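
For what it's worth, the same device selection can be reproduced against the faster-whisper Python package directly, which sometimes surfaces a clearer error than the bundled executable; a minimal sketch with a CPU fallback (the model size, compute types, and audio path are placeholders):

from faster_whisper import WhisperModel

try:
    # Same idea as passing --device cuda to the standalone executable.
    model = WhisperModel("tiny", device="cuda", compute_type="float16")
except Exception as err:  # e.g. "CUDA failed with error initialization error"
    print("CUDA init failed, falling back to CPU:", err)
    model = WhisperModel("tiny", device="cpu", compute_type="int8")

segments, info = model.transcribe("audio.mp3", language="en")  # hypothetical input file
for seg in segments:
    print(f"[{seg.start:.2f} --> {seg.end:.2f}] {seg.text}")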

MMasutin commented on May 16, 2024

Correction: Compatibility issues

Purfview commented on May 16, 2024

You need a newer GPU.
For better timestamps, get the latest r128 version.

MMasutin commented on May 16, 2024

I get similar errors with Faster-Whisper r134+++ when trying to run on CUDA. Is a GTX 960M with 2 GB VRAM still not enough?

Purfview commented on May 16, 2024

Run what? What errors?

Maybe the problem is with the mobile GPU or its drivers, or whatever.

MMasutin commented on May 16, 2024

Errors (as above) occur with -h (the help isn't really necessary, I'm just testing) and with this:

Whisper-Faster_r134 %Audio% --language en --output_format srt --task transcribe --model tiny --device cuda

Is a GTX 960M with 2 GB of VRAM not enough for CUDA?

Purfview commented on May 16, 2024

To solve the problem with --help you may need to change the language to English in Control Panel > Region and Language.

2 GB is enough for the tiny model.

On a laptop you may need to use the second GPU device; try: --device cuda:1.
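
As a side note on the --help crash: it is the console code page, not the tool, that rejects the character. A tiny sketch reproducing it, plus one possible workaround (forcing UTF-8 output is an assumption about how the bundled script prints its help text):

# '\xbf' is the inverted question mark; the Cyrillic cp1251 code page has no slot for it.
try:
    "\xbf".encode("cp1251")
except UnicodeEncodeError as err:
    print("Same failure as in the traceback:", err)

# Possible workaround before launching the executable, instead of changing the system locale:
#   set PYTHONIOENCODING=utf-8
#   chcp 65001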

MMasutin commented on May 16, 2024

'cuda:1' didn't help. Thank you! I'll try to find the cause and share it if there is one.
I suspected the reason for errors with -h was the locale because of 'cp1251.py … UnicodeEncodeError'.

MMasutin commented on May 16, 2024

The title of this issue should be 'How to make it run on CUDA'.
I have so far failed to do it on three architectures of Nvidia GPUs (oldest to newest): Fermi, Maxwell, and probably Hopper (I didn't check the model, but it was built in late 2022; it gives 'RuntimeError: CUDA failed with error CUDA driver version is insufficient for CUDA runtime version'). The subject matter is complex to the uninitiated. I refer anyone to this table in hopes of finding suitable drivers or anything else sooner.
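
The "driver version is insufficient" message can be checked directly: the installed driver advertises the highest CUDA version it supports, and the build of CTranslate2 inside Faster-Whisper needs at least the CUDA version it was compiled against. A hedged sketch, again using only nvcuda.dll from the NVIDIA driver:

import ctypes

cuda = ctypes.WinDLL("nvcuda.dll")
version = ctypes.c_int()
cuda.cuDriverGetVersion(ctypes.byref(version))
# The value is encoded as major*1000 + minor*10, e.g. 12020 means CUDA 12.2.
print(f"Driver supports CUDA up to {version.value // 1000}.{(version.value % 1000) // 10}")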

Purfview commented on May 16, 2024

The title of this issue should be 'How to make it run on CUDA'.

It should stay as the original issue. You shouldn't post different issues here.

MMasutin commented on May 16, 2024

I meant a new title for this issue.
