
Python API & command-line tool to easily transcribe speech-based video files into clean text

License: Apache License 2.0

Topics: nlp, audio, audio-processing, transcription, speech-to-text, speech-recognition, speech, spelling-correction, keyword-extraction, keyword


vid2cleantxt

(figure: simplified vid2cleantxt pipeline diagram)

Jump to Quickstart

vid2cleantxt: a transformers-based pipeline for turning heavily speech-based video files into clean, readable text from the audio. Robust speech transcription is now possible like never before with OpenAI's whisper model.

TL;DR: check out this Colab notebook for a transcription and keyword extraction of a speech by John F. Kennedy; simply run all cells.




Motivation

Video, specifically audio, is inefficient in conveying dense or technical information. The viewer has to sit through the whole thing, while only part of the video may be relevant to them. If you don't understand a statement or concept, you must search through the video or re-watch it. This project attempts to help solve that problem by converting long video files into text that can be easily searched and summarized.

Overview

Example Output

Example output text of a video transcription of JFK's speech on going to the moon:

President.Kennedy.s.1962.Speech.on.the.US.Space.Program.C-SPAN.Classroom.mp4

vid2cleantxt output:

Now look into space to the moon and to the planets beyond and we have vowed that we shall not see it governed by a hostile flag of conquest but by a banner of freedom and peace we have vowed that we shall not see space filled with weapons of mass destruction but with instruments of knowledge and understanding yet the vow. In short our leadership in science and industry our hopes for peace and security our obligations to ourselves as well as others all require a. To solve these mysteries to solve them for the good of all men and to become the worlds leading space faring nation we set sail on this new sea because there is new knowledge to be gained and new rights to be won and they must be won and used for the progress of all people for space science like nuclear science and all technology. Has no conscience of its own whether it will become a force for good or ill depends on man and only if the united states occupies a position of preeminence can we help decide whether this new ocean will be a sea of peace

model = openai/whisper-medium.en

See the demo notebook for the full-text output.

Pipeline Intro

(figure: detailed vid2cleantxt pipeline diagram)

  1. The transcribe.py script uses audio2text_functions.py to convert each video file into .wav audio chunks of duration X* seconds
  2. Transcribes all X audio chunks with a pretrained transformer model (a rough sketch of this chunk-and-transcribe loop follows this list)
  3. Writes the results to a text file, stores various runtime metrics in a separate text file, and deletes the .wav audio chunk directory once finished.
  4. (Optional) Creates two new text files: one with all transcriptions appended and one with all metadata appended.
  5. For each transcription text file:
    • Passes the 'base' transcription text through a spell checker (Neuspell) to auto-correct spelling. Saves the result as a new text file.
    • Uses pySBD to infer sentence boundaries on the spell-corrected text and adds periods to delineate sentences. Saves the result as a new file.
    • Runs keyword extraction (via YAKE) on the spell-corrected file. Keywords from all files are stored in one data frame for comparison and exported to .xlsx format.

* (where X is some duration that does not overload your computer/runtime)
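
For orientation, here is a rough, hypothetical sketch of the chunk-and-transcribe loop in steps 1-2, using pydub and the transformers ASR pipeline. The paths, helper name, and chunk handling are illustrative assumptions, not the repo's actual code:

# sketch only: split a file's audio into chunks and transcribe each chunk with whisper
from pathlib import Path

from pydub import AudioSegment
from transformers import pipeline


def transcribe_in_chunks(video_path: str, chunk_s: int = 30) -> str:
    asr = pipeline("automatic-speech-recognition", model="openai/whisper-base.en")
    audio = AudioSegment.from_file(video_path)  # FFmpeg extracts the audio track
    texts = []
    for start_ms in range(0, len(audio), chunk_s * 1000):
        chunk = audio[start_ms : start_ms + chunk_s * 1000]  # one X-second slice
        chunk.export("chunk.wav", format="wav")  # temporary .wav chunk
        texts.append(asr("chunk.wav")["text"])  # transcribe the chunk
        Path("chunk.wav").unlink()  # delete the chunk after use
    return " ".join(texts)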

Given INPUT_DIRECTORY:

  • final .txt transcriptions will be in INPUT_DIRECTORY/v2clntxt_transcriptions/results_SC_pipeline/
  • metadata about the transcription process will be in INPUT_DIRECTORY/v2clntxt_transc_metadata

Quickstart

Install, then you can use vid2cleantxt in two ways:

  1. CLI: via the transcribe.py script from the command line (python vid2cleantxt/transcribe.py --input-dir "path/to/video/files" --output-dir "path/to/output/dir")
  2. As a Python package: import vid2cleantxt and use the transcribe module to transcribe videos (vid2cleantxt.transcribe.transcribe_dir())

Don't want to use it locally or don't have a GPU? You may be interested in the demo notebook on Google Colab.

Installation

As a Python package

  • (recommended) Create a new virtual environment with python3 -m venv venv
    • Activate the virtual environment with source venv/bin/activate
  • Install the repo with pip:
pip install git+https://github.com/pszemraj/vid2cleantxt.git

The library is now installed and ready to use in your Python scripts.

import vid2cleantxt

text_output_dir, metadata_output_dir = vid2cleantxt.transcribe.transcribe_dir(
    input_dir="path/to/video/files",
    model_id="openai/whisper-base.en",
    chunk_length=30,
)

# do things with text files in text_output_dir

See below for more details on the transcribe_dir function.

Install from source

  1. git clone https://github.com/pszemraj/vid2cleantxt.git
    • use the --depth=1 switch to clone only the latest master (faster)
  2. cd vid2cleantxt/
  3. pip install -e .

As a shell block:

git clone https://github.com/pszemraj/vid2cleantxt.git --depth=1
cd vid2cleantxt/
pip install -e .

install details & gotchas

  • This should happen automatically upon installation/import, but a spaCy model may need to be downloaded for post-processing transcribed audio. This can be done with python -m spacy download en_core_web_sm
  • FFmpeg is required as a base system dependency to do anything with video/audio. It may already be installed on your system; otherwise, see the FFmpeg site.
  • We've added an implementation for whisper to the repo. Until further tests are completed, it's recommended to stick with the default 30s chunk length for these models (plus, they are fairly compute-efficient for the resulting quality).

example usage

CLI example: transcribe a directory of example videos in ./examples/ with the whisper-small model (the multilingual variant, not the English-only one) and print the transcriptions with the cat command:

python examples/TEST_folder_edition/dl_src_videos.py
python vid2cleantxt/transcribe.py -i ./examples/TEST_folder_edition/ -m openai/whisper-small
find ./examples/TEST_folder_edition/v2clntxt_transcriptions/results_SC_pipeline -name "*.txt" -exec cat {} +

Run python vid2cleantxt/transcribe.py --help for more details on the CLI.

Python API example: transcribe an input directory of user-specified videos using whisper-tiny.en, a smaller but faster model than the default.

import vid2cleantxt

_my_input_dir = "path/to/video/files"
text_output_dir, metadata_output_dir = vid2cleantxt.transcribe.transcribe_dir(
    input_dir=_my_input_dir,
    model_id="openai/whisper-tiny.en",
    chunk_length=30,
)

Transcribed files can then be used for whatever purpose you need (see Visualization and Analysis below for ideas).

from pathlib import Path

v2ct_output_dir = Path(text_output_dir)
transcriptions = [f for f in v2ct_output_dir.iterdir() if f.suffix == ".txt"]

# read in the first transcription
with open(transcriptions[0], "r") as f:
    first_transcription = f.read()
print(
    f"The first 1000 characters of the first transcription are:\n{first_transcription[:1000]}"
)

See the docstring of transcribe_dir() for more details on the arguments. One way to view it is with inspect:

import inspect
import vid2cleantxt

print(inspect.getdoc(vid2cleantxt.transcribe.transcribe_dir))

Notebooks on Colab

Notebook versions are available on Google Colab, as Colab offers accessible GPUs, which make vid2cleantxt much faster.

As vid2cleantxt is now available as a package with a Python API, there is no longer a need for long, complicated notebooks. See this notebook for a relatively simple example - copy it to your drive and adjust as needed.

⚠️ The notebooks in ./colab_notebooks are now deprecated and not recommended to be used. ⚠️ TODO: remove in a future PR.

Resources for those new to Colab

If you like the benefits Colab/cloud notebooks offer but haven't used them before, it's recommended to read the Colab Quickstart and some of the resources below, as things like file I/O work differently than on your local PC.


Details & Application

How long does this take to run?

On Google Colab with a 16 GB GPU (available to free Colab accounts): approximately 8 minutes to transcribe ~90 minutes of audio. CUDA is supported - if you have an NVIDIA graphics card, you may see runtimes closer to that estimate on your local machine.

On my machine (CPU only due to Windows + AMD GPU), it takes approximately 30-70% of the total duration of input video files. You can also look at the "console printout" text files in example_JFK_speech/TEST_singlefile.

  • with model = facebook/wav2vec2-base-960h: approx. 30% of the original video runtime
  • with model = facebook/hubert-xlarge-ls960-ft (perhaps the best pre-whisper model, anecdotally): approx. 70-80% of the original video runtime
  • timing the whisper models is a TODO, but the current estimate for openai/whisper-base.en on CPU falls between the above two models.

Specs:

Processor: Intel(R) Core(TM) i7-8665U CPU @ 1.90GHz
Speed: 4.8 GHz
Number of Cores: 8
RAM: 32 GB
Video Card #1: Intel(R) UHD Graphics 620 (128 MB dedicated, 16 GB total memory)
Video Card #2: AMD Radeon Pro WX3200 Graphics (4.0 GB dedicated, 20 GB total memory)
Operating System: Windows 10 64-bit

NOTE: the default model is openai/whisper-base.en. See the model card for details.

Now I have a bunch of long text files. How are these useful?

short answer: noam_chomsky.jpeg

More comprehensive answer:

With natural language processing and machine learning algorithms, text data can be visualized, summarized, or reduced in many ways. For example, you can use TextHero or ScatterText to compare audio transcriptions with written documents, or use topic models or statistical models to extract key topics from each file. Comparing text data can help you understand how similar documents are or identify key differences.

Visualization and Analysis

  1. TextHero - cleans text, allows for visualization / clustering (k-means) / dimensionality reduction (PCA, TSNE)
    • Use case here: I want to see how this speaker's speeches differ from each other. Which are "the most related"?
  2. Scattertext - allows for comparisons of one corpus of text to another via various methods and visualizes them.
    • Use case here: I want to see how the speeches by this speaker compare to speeches by speaker B in terms of topics, word frequency, and so on.

Some examples from my usage are illustrated below from both packages.

Text Extraction / Manipulation

  1. Textract
  2. Textacy
  3. YAKE
    • A brief YAKE analysis is run by this pipeline after transcribing the audio (a short sketch follows this list).
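
As a hedged illustration (the parameters here are arbitrary, not the pipeline's actual settings), extracting keywords from a transcript with YAKE looks roughly like this, reusing first_transcription from the Python API example above:

# sketch: keyword extraction with YAKE on a transcript string
import yake

kw_extractor = yake.KeywordExtractor(lan="en", n=3, top=10)  # up to 3-gram phrases, keep the top 10
keywords = kw_extractor.extract_keywords(first_transcription)  # list of (phrase, score); lower score = more relevant
for phrase, score in keywords:
    print(f"{score:.4f}\t{phrase}")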

Text Summarization

Several options are available on the HuggingFace website. To create a better, more general model for summarization, I have fine-tuned this model on a book summary dataset, which I find provides the best results for "lecture-esque" video conversion. I wrote a little about this and compared it to other models (WARNING: satire/sarcasm inside).

I use several similar methods in combination with the transcription script, but they aren't in a place to be officially posted yet; they will be posted to a public repo on this account when ready. In the meantime, you can check out this Colab notebook, which uses the same example text that is output when the JFK speeches are transcribed.
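
As a rough sketch of that workflow (the model id below is a placeholder, not the author's checkpoint), summarizing a transcript with the transformers summarization pipeline could look like:

# sketch: summarize a transcript with a Hub summarization checkpoint
from transformers import pipeline

summarizer = pipeline("summarization", model="some-booksum-finetuned-checkpoint")  # placeholder model id
summary = summarizer(first_transcription[:4000], max_length=256, min_length=64, truncation=True)
print(summary[0]["summary_text"])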

TextHero example use case

Clustering vectorized text files into k-means groups:

(figure: plotting with TSNE + USE, colored on directory name)

(figure: plotting with TSNE + USE, colored on k-means cluster)
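
A minimal sketch of this kind of clustering, written with scikit-learn directly rather than TextHero's wrappers (the file glob and cluster count are assumptions):

# sketch: TF-IDF + k-means clustering of transcript files
from pathlib import Path

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

transcript_paths = sorted(Path(text_output_dir).glob("*.txt"))  # transcripts from transcribe_dir()
docs = [p.read_text() for p in transcript_paths]
X = TfidfVectorizer(stop_words="english").fit_transform(docs)  # vectorize each transcript
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)  # group similar transcripts
for path, label in zip(transcript_paths, labels):
    print(label, path.name)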

ScatterText example use case

Comparing the frequency of terms in one body of text vs. another

(figure: Scattertext term-frequency comparison - IML 2021 docs vs. IML prior exams)


Design Choices & Troubleshooting

What python package dependencies does this repo have?

Upon cloning the repo, run the command pip install -e . (or pip install -r requirements.txt, which works too) in a terminal opened in the project directory. Requirements (updated Oct 10, 2022) are:

clean-text
GPUtil
humanize
joblib
librosa
moviepy~=1.0.3
natsort>=7.1.1
neuspell>=1.0.0
numpy
packaging
pandas>=1.3.0
psutil>=5.9.2
pydub>=0.24.1
pysbd>=0.3.4
requests
setuptools>=58.1.0
spacy>=3.0.0,<4.0.0
symspellpy~=6.7.0
torch>=1.8.2
tqdm
transformers>=4.23.0
wordninja==2.0.0
wrapt
yake>=0.4.8

If you encounter warnings/errors that mention FFmpeg, please download the latest version of FFMPEG from their website here and ensure it is added to PATH.

My computer crashes once it starts running the wav2vec2 model

First, try a smaller model: pass -m openai/whisper-tiny.en in CLI or model_id="openai/whisper-tiny.en" in python.

If that doesn't help, reducing the chunk_length duration lowers the computational load at some cost to accuracy; use --chunk-len <INT> when calling vid2cleantxt/transcribe.py or chunk_length=INT in Python.

The transcription is not perfect, and therefore I am mad

Perfect transcripts are not always possible, especially when the audio is not clean. For example, audio recorded with a microphone that is not well tuned to the speaker can cause the model to have issues. Additionally, the default models are not trained on specific speakers, so they will not recognize a particular speaker or their accent.

Despite a small number of errors, the model can still capture the vast majority of the text, which should save you a lot of time and effort.

How can I improve the performance of the model from a word-error-rate perspective?

As of Oct 2022: there really shouldn't be much to complain about given what we had before whisper. That said, there may be some bugs or issues with the new model. Please report them in the issues section :)

The neural ASR model that transcribes the audio is typically the most crucial element to choose/tune. You can use any whisper, wav2vec2, or wavLM model from the Hugging Face hub; pass the model ID string with --model in the CLI or model_id="my-cool-model" in Python.

Note: it's recommended to experiment with the different variants of whisper first, as they are the most performant for the vast majority of "long speech" transcription use cases; a brief example of swapping checkpoints is shown below.
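
For example, a hedged sketch reusing the documented transcribe_dir() arguments (the checkpoint here is just one possible choice):

import vid2cleantxt

# swap in any whisper / wav2vec2 / wavLM checkpoint id from the hub via model_id
text_output_dir, metadata_output_dir = vid2cleantxt.transcribe.transcribe_dir(
    input_dir="path/to/video/files",
    model_id="openai/whisper-medium.en",
)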

You can also train your own model, but that requires you to have a transcription of that person's speech. As you may find, manual transcription is a bit of a pain; therefore, transcripts are rarely provided - hence this repo. If interested, see this notebook.

Why use transformer models instead of SpeechRecognition or other transcription methods?

Google's SpeechRecognition (with the free API) requires optimization of three unknown parameters*, which, in my experience, can vary widely among English-as-a-second-language speakers. With wav2vec2, the base model is pretrained, so a "decent transcription" can be made without spending a lot of time testing and optimizing parameters.

Also, because it's an API, you can't train it even if you wanted to; you effectively have to be online for most of the script runtime; and, of course, you have privacy concerns about sending data off your machine.

* these statements reflect the assessment completed around project inception in early 2021.

Errors

  • _pickle.UnpicklingError: invalid load key, '<' --> the Neuspell model was not downloaded correctly. Try re-downloading it:
  • manually open /Users/yourusername/.local/share/virtualenvs/vid2cleantxt-vMRD7uCV/lib/python3.8/site-packages/neuspell/../data
  • download the model from https://github.com/neuspell/neuspell#Download-Checkpoints
  • or re-download it programmatically: import neuspell, then call neuspell.seq_modeling.downloads.download_pretrained_model("scrnnelmo-probwordnoise") (see the sketch below)
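
As a plain-Python version of the last bullet (this simply wraps the call listed above):

import neuspell

# re-download the default checkpoint used for spell correction
neuspell.seq_modeling.downloads.download_pretrained_model("scrnnelmo-probwordnoise")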

Examples

  • Two examples are available in the examples/ directory. One is a single video (another speech), and the other is multiple videos (MIT OpenCourseWare). Citations are in the respective folders.
  • Note that the videos first need to be downloaded via the respective script in each folder, i.e., run: python examples/TEST_singlefile/dl_src_video.py

Future Work, Collaboration, & Citations

Project Updates

A rough timeline of what has been going on in the repo:

  • Oct 2022 Part 2 - Initial integration of whisper model!
  • Oct 2022 - Redesign as Python package instead of an assortment of python scripts/notebooks that share a repository and do similar things.
  • Feb 2022 - Add backup functions for spell correction in case of NeuSpell failure (which is a known issue at the time of writing).
  • Jan 2022 - add huBERT support, abstract the boilerplate out of Colab Notebooks. Starting work on the PDF generation w/ results.
  • Dec 2021 - greatly improved script runtime, and added more features (command line, docstring, etc.)
  • Sept-Oct 2021: Fixing bugs, and formatting code.
  • July 12, 2021 - sync work from Colab notebooks: add CUDA support for PyTorch in the .py versions, added Neuspell as a spell checker. General organization and formatting improvements.
  • July 8, 2021 - python scripts cleaned and updated.
  • April - June: Work done mostly on Colab, improving saving, grammar correction, etc.
  • March 2021: public repository added

Future Work

Note: these are largely not in order of priority.

  1. add OpenAI's whisper through integration with the transformers lib.

  2. Unfortunately, using the Neuspell package out of the box is still not possible, as the upstream package has still not been fixed. I will add a permanent workaround to load/use it with vid2cleantxt.

  3. syncing improvements currently in the existing Google Colab notebooks (linked above), such as NeuSpell

    • this will include support for CUDA automatically when running the code (currently just on Colab)
  4. clean up the code, add more features, and make it more robust.

  5. add a script to convert .txt files to a clean PDF report, example here

  6. add summarization script/module

  7. further expand the functionality of the vid2cleantxt module

  8. Add support for transcribing the other languages in the whisper model (e.g., French, German, Spanish, etc.). This will require synchronized API changes to ensure that English spell correction is only applied to English transcripts, etc.

I've found a repo / script / concept that I think you should incorporate, or an author you should collaborate with

Please send me a message or start a discussion; I'm always looking to improve. Creating an issue works too.

Citations

whisper (OpenAI)

@report{radford2022whisper,
   abstract = {We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zero-shot transfer setting without the need for any fine-tuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.},
   author = {Alec Radford and Jong Wook Kim and Tao Xu and Greg Brockman and Christine McLeavey and Ilya Sutskever},
   title = {Robust Speech Recognition via Large-Scale Weak Supervision},
   url = {https://github.com/openai/},
   year = {2022},
}

wav2vec2 (fairseq)

Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations, 2019.

HuBERT (fairseq)

@article{Hsu2021,
   author = {Wei Ning Hsu and Benjamin Bolte and Yao Hung Hubert Tsai and Kushal Lakhotia and Ruslan Salakhutdinov and Abdelrahman Mohamed},
   doi = {10.1109/TASLP.2021.3122291},
   issn = {23299304},
   journal = {IEEE/ACM Transactions on Audio Speech and Language Processing},
   keywords = {BERT,Self-supervised learning},
   month = {6},
   pages = {3451-3460},
   publisher = {Institute of Electrical and Electronics Engineers Inc.},
   title = {HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units},
   volume = {29},
   url = {https://arxiv.org/abs/2106.07447v1},
   year = {2021},
}

MoviePy

  • link to repo as no citation info given

symspellpy / symspell

YAKE (yet another keyword extractor)

  • repo link
  • relevant citations:

    In-depth journal paper at Information Sciences Journal

    Campos, R., Mangaravite, V., Pasquali, A., Jatowt, A., Jorge, A., Nunes, C. and Jatowt, A. (2020). YAKE! Keyword Extraction from Single Documents using Multiple Local Features. In Information Sciences Journal. Elsevier, Vol 509, pp 257-289. pdf

    ECIR'18 Best Short Paper

    Campos R., Mangaravite V., Pasquali A., Jorge A.M., Nunes C., and Jatowt A. (2018). A Text Feature Based Automatic Keyword Extraction Method for Single Documents. In: Pasi G., Piwowarski B., Azzopardi L., Hanbury A. (eds). Advances in Information Retrieval. ECIR 2018 (Grenoble, France. March 26 – 29). Lecture Notes in Computer Science, vol 10772, pp. 684 - 691. pdf

    Campos R., Mangaravite V., Pasquali A., Jorge A.M., Nunes C., and Jatowt A. (2018). YAKE! Collection-independent Automatic Keyword Extractor. In: Pasi G., Piwowarski B., Azzopardi L., Hanbury A. (eds). Advances in Information Retrieval. ECIR 2018 (Grenoble, France. March 26 – 29). Lecture Notes in Computer Science, vol 10772, pp. 806 - 810. pdf

Video Citations

  • President Kennedy’s 1962 Speech on the US Space Program | C-SPAN Classroom. (n.d.). Retrieved January 28, 2022, from https://www.c-span.org/classroom/document/?7986
  • Note: example videos are cited in respective Examples/ directories

Contributors

jonathanlehner, pszemraj


vid2cleantxt's Issues

Spacy

I didn't spot an installation reference, but to get the en_core_web_sm file, you need to run python -m spacy download en_core_web_sm

Demo colab gives an error

I get this error when running the demo notebook:

data folder is set to `/usr/local/lib/python3.7/dist-packages/neuspell/../data` script
timeout Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
425 # Otherwise it looks like a bug in the code.
--> 426 six.raise_from(e, None)
427 except (SocketTimeout, BaseSSLError, SocketError) as e:

32 frames
timeout: The read operation timed out

During handling of the above exception, another exception occurred:

ReadTimeoutError Traceback (most recent call last)
ReadTimeoutError: HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=10.0)

During handling of the above exception, another exception occurred:

ReadTimeout Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
527 raise SSLError(e, request=request)
528 elif isinstance(e, ReadTimeoutError):
--> 529 raise ReadTimeout(e, request=request)
530 else:
531 raise

ReadTimeout: HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=10.0)

No such file or directory: 'ffprobe'

Hello, and thanks for this project, which sounds really helpful. However, I've had to fix several errors to make it work. The latest error was:

warn("Couldn't find ffprobe or avprobe - defaulting to ffprobe, but may not work", RuntimeWarning)
transcribing...:   0%|                                                                                                                  | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/Users/manu/sw-projects/transcribe.py", line 3, in <module>
    text_output_dir, metadata_output_dir = vid2cleantxt.transcribe.transcribe_dir(
  File "/Users/manu/sw-projects/venv/lib/python3.10/site-packages/vid2cleantxt/transcribe.py", line 663, in transcribe_dir
    transcribe_video_whisper(
  File "/Users/manu/sw-projects/venv/lib/python3.10/site-packages/vid2cleantxt/transcribe.py", line 263, in transcribe_video_whisper
    chunk_directory = prep_transc_pydub(
  File "/Users/manu/sw-projects/venv/lib/python3.10/site-packages/vid2cleantxt/audio2text_functions.py", line 116, in prep_transc_pydub
    vid_audio = AudioSegment.from_file(load_path)
  File "/Users/manu/sw-projects/venv/lib/python3.10/site-packages/pydub/audio_segment.py", line 728, in from_file
    info = mediainfo_json(orig_file, read_ahead_limit=read_ahead_limit)
  File "/Users/manu/sw-projects/venv/lib/python3.10/site-packages/pydub/utils.py", line 274, in mediainfo_json
    res = Popen(command, stdin=stdin_parameter, stdout=PIPE, stderr=PIPE)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 971, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 1847, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'ffprobe'

Does someone know how to fix that?

Error when transcribing

Here it is:
UnboundLocalError: local variable 'PL_out' referenced before assignment

I was running the Colab notebook only changing the video URL to a google drive link.

neuspell initialization not working

  • neuspell unpickling of default models no longer works
  • need to add backup spell-checking function
data folder is set to `/usr/local/lib/python3.7/dist-packages/neuspell/../data` script
Downloading: 100% 208k/208k [00:00<00:00, 632kB/s]
Downloading: 100% 29.0/29.0 [00:00<00:00, 25.8kB/s]
Downloading: 100% 426k/426k [00:00<00:00, 1.03MB/s]
Downloading: 100% 570/570 [00:00<00:00, 504kB/s]

Loading models @ Feb-23-2022_-12-35-31 - may take some time...
if RT seems excessive, try --verbose flag or checking logfile
Downloading: 100% 212/212 [00:00<00:00, 181kB/s]
Downloading: 100% 138/138 [00:00<00:00, 121kB/s]
Downloading: 100% 1.34k/1.34k [00:00<00:00, 1.20MB/s]
Downloading: 100% 291/291 [00:00<00:00, 252kB/s]
Downloading: 100% 85.0/85.0 [00:00<00:00, 70.1kB/s]
Loading hubert model - facebook/hubert-large-ls960-ft
Downloading: 100% 1.18G/1.18G [00:20<00:00, 60.3MB/s]
Downloading: 100% 416M/416M [00:06<00:00, 62.5MB/s]
Traceback (most recent call last):
  File "vid2cleantxt/transcribe.py", line 482, in <module>
    checker = init_neuspell()
  File "/content/vid2cleantxt/vid2cleantxt/audio2text_functions.py", line 396, in init_neuspell
    checker.from_pretrained()
  File "/usr/local/lib/python3.7/dist-packages/neuspell/corrector_sclstmbert.py", line 45, in from_pretrained
    self.model = load_pretrained(self.model, self.weights_path, device=self.device)
  File "/usr/local/lib/python3.7/dist-packages/neuspell/seq_modeling/sclstmbert.py", line 23, in load_pretrained
    checkpoint_data = torch.load(os.path.join(checkpoint_path, "model.pth.tar"), map_location=map_location)
  File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 608, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 777, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, '<'.

clean() got an unexpected keyword argument 'fix_unicode'

So excited to get this working! I get the following error when attempting to run the example CLI from the readme (fresh M2-Ultra Mac Studio out of box):

python examples/TEST_folder_edition/dl_src_videos.py
python vid2cleantxt/transcribe.py -i ./examples/TEST_folder_edition/ -m openai/whisper-small

Results in the following error:

% python vid2cleantxt/transcribe.py -i ./examples/TEST_folder_edition/ -m openai/whisper-small
data folder is set to `/Users/guyewhite/anaconda3/lib/python3.11/site-packages/neuspell/../data` script

Loading models @ Jul-29-2023_-16-21-24 - may take some time...
if RT seems excessive, try --verbose flag or checking logfile

Found 10 audio or video files in /Users/guyewhite/Desktop/vid2cleantxt-master/examples/TEST_folder_edition
transcribing...:   0%|                                                   | 0/10 [00:00<?, ?it/s]
Traceback (most recent call last):                                       | 0/41 [00:00<?, ?it/s]
  File "/Users/guyewhite/Desktop/vid2cleantxt-master/vid2cleantxt/transcribe.py", line 807, in <module>
    output_text, output_metadata = transcribe_dir(
                                   ^^^^^^^^^^^^^^^
  File "/Users/guyewhite/Desktop/vid2cleantxt-master/vid2cleantxt/transcribe.py", line 663, in transcribe_dir
    transcribe_video_whisper(
  File "/Users/guyewhite/Desktop/vid2cleantxt-master/vid2cleantxt/transcribe.py", line 263, in transcribe_video_whisper
    chunk_directory = prep_transc_pydub(
                      ^^^^^^^^^^^^^^^^^^
  File "/Users/guyewhite/Desktop/vid2cleantxt-master/vid2cleantxt/audio2text_functions.py", line 123, in prep_transc_pydub
    preamble = trim_fname(_vid2beconv)
               ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/guyewhite/Desktop/vid2cleantxt-master/vid2cleantxt/v2ct_utils.py", line 273, in trim_fname
    clean_name = cleantxt_wrap(current_name)  # helper fn to clean up text
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/guyewhite/Desktop/vid2cleantxt-master/vid2cleantxt/v2ct_utils.py", line 229, in cleantxt_wrap
    cleaned_text = clean(
                   ^^^^^^
TypeError: clean() got an unexpected keyword argument 'fix_unicode'
Creating .wav audio clips:   0%|                                         | 0/41 [00:00<?, ?it/s]

Do not translate when using a different language model

Hey, thanks a lot for this work, this is great. However, I wanted to use a different model (rjac/whisper-tiny-spanish) to transcribe a video in Spanish, and it did! But it translated the whole thing to English. Could this be skipped?

v2ct_utils.py

For this to work, the v2ct_utils.py should be in the vid2cleantxt/ subfolder
