
Comments (14)

ali1234 avatar ali1234 commented on July 17, 2024

You probably did not record long enough to get every possible pattern. You
need a couple of hours.

On 14 Aug 2016 22:16, "grim-fandango" [email protected] wrote:

Hi Alistair,

I piped the output from training -g to raspi-teletext, recorded it on
Betamax, then captured it using vbicat. I then ran 'training -t' on the
VBI file, and on its output ran --parity, --hamming and --full
(not sure whether 'full' corresponds to debruijn, but that's a separate matter).

The hamming file isn't a multiple of 1024 so PatternCUDA won't accept it.
Am I doing something wrong or is there an issue with the training script?

Thanks

Jason



from vhs-teletext.

grim-fandango avatar grim-fandango commented on July 17, 2024

I gave it two hours' worth, but I get the same error. The file is smaller too, at 762,489 bytes as opposed to 819,214.


ali1234 avatar ali1234 commented on July 17, 2024

Do you get rejects when running training -t?


ali1234 avatar ali1234 commented on July 17, 2024

The final training data files have a 14 byte header and then 25 bytes per recognized pattern. The hamming data should have 32768 patterns and the parity data should have 4096. Both multiples of 1024.

14 + (25 * 32768) = 819214
14 + (25 * 30499) = 762489

So you don't have a full set of patterns for some reason. That can only happen if some patterns weren't found in the raw training samples.
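The arithmetic above can be checked with a short sketch (the 14-byte header and 25-byte pattern size come from the comment; the helper name is hypothetical):

```python
# Sanity-check the training file sizes quoted above.
# A training data file is a 14-byte header followed by
# 25 bytes per recognised pattern.
HEADER = 14
PATTERN = 25

def pattern_count(file_size):
    """Number of patterns stored in a training data file."""
    assert (file_size - HEADER) % PATTERN == 0, "not a valid training file size"
    return (file_size - HEADER) // PATTERN

print(pattern_count(819214))  # 32768 - a complete hamming set
print(pattern_count(762489))  # 30499 - 2269 patterns missing
```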


grim-fandango avatar grim-fandango commented on July 17, 2024

It started out at 1% rejects then goes to 0%. When complete, it says: "1:30:11 : 5815928 lines, 1075/s total, 1073/s teletext, 0% rejected. "

If there are 1075 lines/s and 1073 lines/s are teletext, then it would seem that the rejection rate is actually 0.19%.

That was for a 2.5 hour sample.


moonhouse avatar moonhouse commented on July 17, 2024

See

sys.stderr.write('%d:%02d:%02d : %d lines, %.0f/s total, %.0f/s teletext, %.0f%% rejected. \r' % (h, m, s, self.total, total_lines_sec, teletext_lines_sec, rejects_percent))

Format string is '%.0f%%'. 0.19% will be displayed as 0%.
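The rounding can be reproduced directly (line rates taken from the status message quoted above):

```python
# With 1075 total lines/s and 1073 teletext lines/s, about 0.19% of
# lines are being rejected, but the '%.0f%%' format rounds that to 0%.
total, teletext = 1075, 1073
rejects_percent = (total - teletext) / total * 100
print(rejects_percent)             # ~0.186
print('%.0f%%' % rejects_percent)  # '0%'
```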


ali1234 avatar ali1234 commented on July 17, 2024

That is an acceptable error rate so I am not sure what is happening. The checksumming should prevent a pattern from being read incorrectly (e.g. with an offset).

What do your intermediates look like?


grim-fandango avatar grim-fandango commented on July 17, 2024

Well, I've just run it again and it's generated a file of the right length. Not sure what was different this time. But thanks for looking at it and sorry for the wild goose chase. :-/


ali1234 avatar ali1234 commented on July 17, 2024

That is extremely strange. To be honest I have forgotten how you even use the trainer bits. Any chance you could write some docs for it?


grim-fandango avatar grim-fandango commented on July 17, 2024

I can only think that I ran it previously on Windows, although why that would matter I don't know.

My current problem is generating the parity training tables: after a while the training script grinds the machine to a halt and I have to hard reset. I guess it's swallowed up all available RAM.

Sure, I'll write some docs, but I don't know much! Seems weird to write docs for it if I've not got it working yet.


ali1234 avatar ali1234 commented on July 17, 2024

That is quite possible. It bucket sorts the packets in memory and then takes the average for each pattern at the end, so it will basically load the entire training file into RAM, which would be 10GB if you have 5 million lines. I had 16GB of RAM when I wrote this but I've upgraded since then!


grim-fandango avatar grim-fandango commented on July 17, 2024

Yeah, just had a look at the code; there doesn't seem to be an easy way to split it up, as it seems to require that it's all in RAM, as you say. My training tables file is 29GB from a 14,894,235,648 byte VBI file. If each frame is 64K and contains 32 lines, then there will be 7.2M lines! There's 16GB RAM in this machine.
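A quick back-of-the-envelope check of those figures (line size derived from the 64K-frame / 32-line numbers in the comment; this is a rough estimate, not the trainer's actual accounting):

```python
# Rough size estimate for the bucket sort: if a 64 KiB frame holds
# 32 VBI lines, each raw line is 2048 bytes.
file_size = 14_894_235_648
line_size = 64 * 1024 // 32            # 2048 bytes per line
lines = file_size // line_size
print(lines)                           # 7272576 - roughly 7.3M lines
print(file_size / 2**30)               # ~13.9 GiB of raw samples alone
```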

I have to try to chop down the original VBI file and run it again - I did about 2.5 hours of recording to get a file that size; maybe I over-egged it a bit!


ali1234 avatar ali1234 commented on July 17, 2024

I might be wrong about the specifics actually. But for sure it needs a lot of RAM and will take a long time.


ali1234 avatar ali1234 commented on July 17, 2024

Here is a braindump about how training works:

https://github.com/ali1234/vhs-teletext/blob/master/TRAINING.md

(I wrote it mostly from memory so it might be wrong.)

