
Comments (10)

Blightbuster commented on August 27, 2024

Such a JSON file would be a nice addition. Perhaps it could even include fields for different ISO notations like ISO 639-2/B and ISO 639-3.


michaelkubina commented on August 27, 2024

Sorry to bother again...I have now looked into this document (https://github.com/tesseract-ocr/tesseract/blob/main/doc/tesseract.1.asc#LANGUAGES) as well, but here it states that frk actually stands for Frankish, which would also be an ISO 639-3 code, just like kmr (Kurmanji). This would leave osd as the only non-language, and the rest as proper ISO 639-2/T language codes.

This, on the other hand, would mean that the documentation at https://tesseract-ocr.github.io/tessdoc/Data-Files-in-different-versions.html wrongly describes frk as "German Fraktur", while Fraktur in general has been moved to the scripts subfolder.

So it's not clear whether frk stands for Frankish or German Fraktur...


stweil commented on August 27, 2024

Related issue: tesseract-ocr/tessdata_fast#28


stweil commented on August 27, 2024

frk was for a long time described as being "Frankish", but the included dictionary, the character set and the fonts used for training indicate that it is de facto a model for German Fraktur.

script/Fraktur is similar, but covers all Western European languages (so it also includes characters with French accents, for example).


stweil commented on August 27, 2024

The current developers (including myself) are not affiliated with Google, where the current OCR models were trained, so the details of the training process and the reasons for the existing names are unknown to us.

Meanwhile, there are also models which were trained outside of Google, for example the better models for Fraktur and other historic scripts which I trained at UB Mannheim.

Do you have suggestions for how the current situation could be improved? Personally, I'd like to have a JSON file which describes all OCR models: their names, their meaning (maybe with translations), comments, download links and other metadata.
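
For illustration, here is a minimal sketch of what one entry in such a JSON file might look like, rendered from Python; every field name and value is an assumption for the sake of example, not a settled schema:

```python
import json

# Hypothetical entry for the proposed model-metadata file. All field names
# and values below are illustrative, not an agreed-upon schema.
model = {
    "name": "frk",
    "description": {"en": "German Fraktur", "de": "Deutsche Fraktur"},
    "iso639_2t": "deu",
    "iso639_3": "deu",
    "script": "Latf",         # ISO 15924 code for the Fraktur variant of Latin
    "training": "synthetic",  # synthetic line images vs. real scans
    "download": "https://github.com/tesseract-ocr/tessdata_best/raw/main/frk.traineddata",
    "comment": "Long documented as 'Frankish'; de facto German Fraktur.",
}

print(json.dumps(model, indent=2, ensure_ascii=False))
```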


M3ssman commented on August 27, 2024

Another guess regarding the data included in the standard frk model: the notably large word list (over 450,000 entries, which is really hard to obtain from real historical print data) contains very unlikely terms (DSL-HARDWARE, AZ-Web.de or IT-Fortbildung, to mention a few), and together with the list of fonts used this suggests that it was trained like the standard en model, from synthetic image-line pairs and text tokens from the web. (The default punctuation and number files fit in with their poor quality.)


stweil commented on August 27, 2024

That's right, all standard models were trained with synthetic line images. And as you noted, the text looks like a collection of various texts from the web, maybe selected using language indicators from the HTML code, which also results in dictionaries that are not necessarily representative of the time and language. And sometimes even important characters were missing from those texts, so they were never trained.


michaelkubina commented on August 27, 2024

> Meanwhile, there are also models which were trained outside of Google, for example the better models for Fraktur and other historic scripts which I trained at UB Mannheim.

I already use GT4HistOCR and I really appreciate your hard work. It works very well...even outside of the whole OCR-D space.

> Do you have suggestions for how the current situation could be improved? Personally, I'd like to have a JSON file which describes all OCR models: their names, their meaning (maybe with translations), comments, download links and other metadata.

I agree with you that we need some kind of description for the different models, and a JSON file would suit this very well, since it can easily be read by humans and machines. You have already mentioned an important set of information, and with your expertise you will most certainly come up with a plethora of other important metadata. Apart from the name, the "trainer" and their contact, and the download links or links to the training data that you have mentioned, @M3ssman pointed out the important question of whether a model comes from real image data or from a synthetic image set - so a clear description of how the model was actually trained, and in what depth, needs to find its way in there as well.

...and I agree with @Blightbuster that there need to be fields for the model's target language(s). I would prefer not only to state which ISO 639-2/B/T and ISO 639-3 codes it refers to, but also to provide a fallback to ISO 639-1. That would take away some of the struggle of doing a mapping for projects where the database holds the language code in ISO 639-1.

IMO we would additionally need information about which writing system a model was trained on/for. Here we could use the ISO 15924 standard (https://en.wikipedia.org/wiki/ISO_15924), which is already partially used in naming the script models. IETF language tags (https://en.wikipedia.org/wiki/IETF_language_tag) are an interesting concept, but I am not quite happy that the language code is preferably given in ISO 639-1.

But this could be an inspiration for a more descriptive filename convention for the models...especially when it comes to languages that use different writing systems, e.g. Azerbaijani, written in Latin script since independence from the Soviet Union (previously Cyrillic) and in Perso-Arabic script in the southern region ("always"). Similarly for German with Fraktur or Latin script, Serbian (Cyrillic & Latin), etc. So a mixture of both ISO 639-2/T and ISO 15924 could work in our favor here, naming at least the standard models and giving some insight into what is currently hidden or just implied:

  • aze_Latn, aze_Cyrl, aze_Arab
  • deu_Latf, deu_Latn or deu_Latf_Latn
  • srp_Cyrl, srp_Latn or srp_Cyrl_Latn

So, I believe that a clear naming convention for the models and a descriptive JSON file for the metadata would help a lot (a toy parser for such names is sketched below). It could also be used to automatically aggregate such information for the documentation pages.
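
To make the proposed convention concrete, here is a small Python sketch that splits such hypothetical <ISO 639-2/T>_<ISO 15924> names into their parts; the naming scheme itself is only a suggestion from this thread, not anything Tesseract currently implements:

```python
import re

# Parse hypothetical model names of the form <lang>_<Script>[_<Script>...],
# e.g. "deu_Latf" or "srp_Cyrl_Latn". Purely illustrative.
NAME_RE = re.compile(
    r"^(?P<lang>[a-z]{3})"             # ISO 639-2/T language code
    r"(?:_(?P<scripts>[A-Z][a-z]{3}"   # first ISO 15924 script code
    r"(?:_[A-Z][a-z]{3})*))?$"         # optional further scripts
)

def parse_model_name(name: str) -> dict:
    """Split a model name into language and script components."""
    m = NAME_RE.match(name)
    if m is None:
        raise ValueError(f"not a <lang>_<Script> style name: {name!r}")
    scripts = m.group("scripts")
    return {
        "language": m.group("lang"),
        "scripts": scripts.split("_") if scripts else [],
    }

for example in ("aze_Latn", "deu_Latf_Latn", "srp_Cyrl"):
    print(example, "->", parse_model_name(example))
```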

As for my initial question, I will keep in mind that frk actually stands for Fraktur.


unhammer commented on August 27, 2024

> the rest as proper ISO 639-2/T language codes

Just to clarify, does that mean that nor is trained on both nno and nob text?


tfmorris commented on August 27, 2024

> IETF language tags (https://en.wikipedia.org/wiki/IETF_language_tag) are an interesting concept, but I am not quite happy that the language code is preferably given in ISO 639-1.

Although the IETF BCP 47 rule of using the shortest available code, making for variable length codes, is a little awkward, BCP 47 codes are common on the web and are also used by things like the fastText language identification module (even though they're mistakenly called ISO codes on the model page), so I think they're important to include.
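
A quick sketch of the shortest-code rule in Python; the little lookup table is just an illustrative excerpt, not a complete mapping:

```python
# BCP 47 prefers the shortest available code: a three-letter ISO 639-2/T
# code is replaced by its two-letter ISO 639-1 equivalent when one exists.
# This table is a tiny illustrative excerpt, not a complete mapping.
ISO639_2T_TO_1 = {"deu": "de", "srp": "sr", "aze": "az", "nor": "no"}

def to_bcp47(lang: str, script: str | None = None) -> str:
    """Build a BCP 47 tag from an ISO 639-2/T code and an optional ISO 15924 script."""
    tag = ISO639_2T_TO_1.get(lang, lang)  # fall back to the 3-letter code
    if script:
        tag += "-" + script
    return tag

print(to_bcp47("deu", "Latf"))  # de-Latf
print(to_bcp47("kmr"))          # kmr (no 2-letter equivalent exists)
```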

Wikidata is a good way to be able to easily crosswalk the different codes, as well as access other useful information, like autonym, translated versions of the name, etc., and might be a good link to include in the metadata to allow easy access to additional information.
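
As a sketch of that crosswalk idea, the query below asks Wikidata's public SPARQL endpoint for the ISO 639-1/-2/-3 codes of German; the item (Q188) and property IDs (P218, P219, P220) are believed correct, but verify them before building on this:

```python
import requests

# Look up ISO 639-1 (P218), ISO 639-2 (P219) and ISO 639-3 (P220) codes
# for German (Q188) via the Wikidata SPARQL endpoint.
query = """
SELECT ?iso1 ?iso2 ?iso3 WHERE {
  wd:Q188 wdt:P218 ?iso1 ;
          wdt:P219 ?iso2 ;
          wdt:P220 ?iso3 .
}
"""
r = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "tessdata-metadata-example/0.1"},
    timeout=30,
)
r.raise_for_status()
for row in r.json()["results"]["bindings"]:
    print(row["iso1"]["value"], row["iso2"]["value"], row["iso3"]["value"])
```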

