Comments (18)

LL-AI-dev commented on August 24, 2024

@P15V make sure that there are no line breaks between the curly brackets. I believe this will solve your error.

NeMo manifests, despite being saved as .json, are a bit fiddly at times and cannot be loaded directly via the json package. Each line within the manifest should be a complete JSON string.

I think the error is occurring because the loader tries to read the 2nd line as a JSON string, but because you have line breaks within the curly brackets, the 2nd line is not valid JSON on its own.
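
For example, a quick way to guarantee that (just a sketch with placeholder field names, using only the standard json module) is to write each entry with json.dumps and read the file back line by line:

import json

# Write: one complete JSON object per line, no internal line breaks.
entries = [{"audio_filepath": "clip1.wav", "duration": 30.0}]  # placeholder fields
with open("manifest.json", "w") as f:
    for entry in entries:
        f.write(json.dumps(entry) + "\n")

# Read: parse line by line rather than loading the whole file at once.
with open("manifest.json") as f:
    records = [json.loads(line) for line in f if line.strip()]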

titu1994 commented on August 24, 2024

This is the right answer; we read the file as JSONL, though we (wrongly) call it a .json file. We'll attempt to make the parsing logic a bit more robust in the future and log an appropriate warning instead of crashing. FYI @stevehuang52

krishnacpuvvada commented on August 24, 2024

And directly fixing the errors in the JSON file leads to this error:
"-packages/nemo/collections/common/data/lhotse/nemo_adapters.py", line 84, in __iter__
text=data[self.text_field],
KeyError: 'answer'

To fix this, please modify the lines of the input manifest to add an 'answer' field:

{"audio_filepath": "PathRemovedDueToPersonalName", "duration": 30.0, "taskname": "asr", "source_lang": "en", "target_lang": "en", "pnc": "yes", "answer": "na"}

Also, we recently updated the .transcribe() signature, so if you are using the main branch,

transcript = canary_model.transcribe(paths2audio_files="/home/pjstimac/NvidiaCanaryTest/transcribe_manifest.json", batch_size=16)

should be updated to
transcript = canary_model.transcribe(audio="/home/pjstimac/NvidiaCanaryTest/transcribe_manifest.json", batch_size=16)

titu1994 commented on August 24, 2024

The notebook above is not the way to do inference for Canary; it's for beam search with CTC models, and it's a deprecated notebook in general.

titu1994 commented on August 24, 2024

Glad it worked. We'll iron out these issues in the pre-release. It shouldn't be so difficult to do inference.

Suma-Rajashankar commented on August 24, 2024

import nemo.collections.asr as nemo_asr
nemoasr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("nvidia/canary-1b")
nemoasr_model.transcribe(['AudioClipDirectly.wav'])

I tried running this script, but when I transcribe the audio, the first two lines of the transcription are fine; after that, the same sentence keeps repeating again and again. The entire audio is not transcribed either. Any help on this would be appreciated. Thank you.

P15V commented on August 24, 2024

@Suma-Rajashankar How long are the audio clips you are inputting, and at what sample rate? I was working with Whisper previously, so I already had all my audio cut into 30-second chunks at 16 kHz. I, too, have experienced repetition with specific clips (and with Whisper, too), but something seems off if it happens every single time. I used to get that with Whisper when feeding it audio clips longer than 30 seconds; it would constantly repeat after the first or second sentence. Newer updates to Whisper seem to have improved that, but I have not tried the same with Canary myself.

stevehuang52 commented on August 24, 2024

Hi @Suma-Rajashankar @P15V, we have a script to automatically chunk long audio files and perform inference on each chunk. Please feel free to try it and let us know if there's any issue.
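
The basic idea, as a rough sketch (this is not the actual NeMo script; it assumes soundfile is installed and that the model is already loaded):

import soundfile as sf

def transcribe_in_chunks(model, path, chunk_seconds=30.0):
    # Hypothetical helper: split a long recording into ~30-second pieces,
    # transcribe each piece, and join the text.
    audio, sr = sf.read(path)
    chunk_len = int(chunk_seconds * sr)
    texts = []
    for start in range(0, len(audio), chunk_len):
        sf.write("chunk_tmp.wav", audio[start:start + chunk_len], sr)
        texts.extend(model.transcribe(["chunk_tmp.wav"], batch_size=1))
    return " ".join(str(t) for t in texts)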

stevehuang52 commented on August 24, 2024

@Suma-Rajashankar Sorry, I forgot to mention that you'll need to install the current main branch of NeMo, not 1.23.0.

P15V commented on August 24, 2024

Hello all,

Thanks for your time & replies; I can't express how much I genuinely appreciate it!! :)

So after I made this post, I went home and tried all night on my personal time; still no luck, unfortunately.

I found that "answer": "na" key via the Hugging Face documentation and included it.

@krishnacpuvvada, @LL-AI-dev Running that JSON format with the updated transcript variable still prints out the JSON error:
"json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 2 column 1 (char 2)"

I went off the error and corrected the JSON, and still get this error (which I also ran into last night):
assert isinstance(cut, MonoCut), "Expected MonoCut."
AssertionError: Expected MonoCut.

I tried both a JSON and a JSONL file, same result:
assert isinstance(cut, MonoCut), "Expected MonoCut."
AssertionError: Expected MonoCut.

My updated code with that updated variable:

# Load Canary model
from nemo.collections.asr.models import EncDecMultiTaskModel
canary_model = EncDecMultiTaskModel.from_pretrained('nvidia/canary-1b')

transcript = canary_model.transcribe(audio="/home/NameRemoved/NvidiaCanaryTest/transcribed_manifest.jsonl", batch_size=16)

Last night I thought I'd try the tutorial Google Colab notebooks right from the NVIDIA website for any NeMo model...
Not even those could run all the way through on Google Colab.

"https://colab.research.google.com/github/NVIDIA/NeMo/blob/stable/tutorials/asr/Offline_ASR.ipynb"

It errored out at this variable: "paths2audio_files=files"

Thanks for everyone's time; much appreciated!!!! :D

P15V commented on August 24, 2024

@titu1994 Well, that would explain it! After much trial and error, I finally got it running in a notebook & Python shell with 3 lines of code, skipping the JSON/JSONL manifest entirely:

import nemo.collections.asr as nemo_asr
nemoasr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("nvidia/canary-1b")
nemoasr_model.transcribe(['AudioClipDirectly.wav'])

Once that was working, I just wrote a Python loop to go through an audio directory and output the transcription results to a JSON file for my viewing/model-comparison setup app.
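
Something along these lines (the directory and output names here are just placeholders, not my actual paths):

import json
from pathlib import Path

audio_dir = Path("audio_clips")  # placeholder directory of .wav files
wav_paths = sorted(str(p) for p in audio_dir.glob("*.wav"))

# Transcribe the batch, then map each file to its transcription text.
transcripts = nemoasr_model.transcribe(wav_paths)
results = {path: str(text) for path, text in zip(wav_paths, transcripts)}

with open("transcriptions.json", "w") as f:
    json.dump(results, f, indent=2)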

Thanks for all the attempts at help though, @titu1994 & @krishnacpuvvada. I genuinely appreciate it!! I wish the documentation were better so I would not have had to bother you guys, but oh well.

be well!!! :D

P15V commented on August 24, 2024

@titu1994 That would be so great to see!! I've been playing around with Whisper for the past few months, and this was unexpectedly annoying to get going in comparison.
The code is simple enough looking at it; the documentation, though, is a different story. But it's working now on my end, yay!! :) Thanks again for the input and help, @krishnacpuvvada & @titu1994, much appreciated!!

github-actions commented on August 24, 2024

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.

github-actions commented on August 24, 2024

This issue was closed because it has been inactive for 7 days since being marked as stale.

Suma-Rajashankar commented on August 24, 2024

@P15V, thank you for your reply. My audio clips are between 30 and 60 minutes long and are all sampled at 16 kHz. I had no issues with Whisper, but like you said, I will chunk my audio into 30-second pieces and work on this using the Canary model. Thanks once again.

Suma-Rajashankar commented on August 24, 2024

Thanks @stevehuang52 for your reply. Looking into this now. Will keep you posted. Appreciate your help.

Suma-Rajashankar commented on August 24, 2024

Hi @stevehuang52, I am unable to import 'FrameBatchMultiTaskAED' from 'nemo.collections.asr.parts.utils.streaming_utils'. I have installed nemo_toolkit==1.23.0. Is there some issue with this version?

Suma-Rajashankar commented on August 24, 2024

@stevehuang52, thanks very much. Will work on this and keep you posted. Appreciate your help.
