
Comments (4)

JWittmeyer commented on June 14, 2024

Thanks for the explanation, that helped clear up the confusion on my end, and I now know how to proceed for my use case.


In case anyone ever stumbles upon this, here is the code I went with for byte splitting (though it probably still has a lot of optimization potential):

def __utf8len(text: str) -> int:
    # Length of the text in UTF-8 bytes.
    return len(text.encode("utf-8"))

# Splits not after exactly x bytes, but ensures that at most x bytes are used
# per chunk without cutting a multi-byte character in half.
# Assumes max_chunk_size >= 4 (the maximum byte length of one UTF-8 character).
def __chunk_text_on_bytes(text: str, max_chunk_size: int = 1_000_000):
    if not text:
        return []
    # Ratio of characters to bytes, used for a cheap initial size guess per chunk.
    factor = len(text) / __utf8len(text)
    increase_by = int(max(min(max_chunk_size * .1, 10), 1))
    initial_size_guess = int(max(max_chunk_size * factor - 10, 1))
    final_list = []
    remaining = text
    while remaining:
        part = remaining[:initial_size_guess]
        if __utf8len(part) > max_chunk_size:
            # Guess was too large: shrink it and retry (int() keeps the guess
            # usable as a slice index).
            initial_size_guess = int(max(initial_size_guess - min(max_chunk_size * .001, 10), 1))
            continue
        cut_after = initial_size_guess
        # Grow the chunk until the byte limit is reached or the text is exhausted.
        while __utf8len(part) < max_chunk_size and part != remaining:
            cut_after = min(len(remaining), cut_after + increase_by)
            part = remaining[:cut_after]

        if __utf8len(part) > max_chunk_size:
            # The last growth step overshot the limit; fall back one step.
            cut_after -= increase_by
        final_list.append(remaining[:cut_after])
        remaining = remaining[cut_after:]

    return final_list
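
A quick sanity check of the helper (a sketch; the mixed ASCII/multi-byte test string is just illustrative):

text = "abc" + "あ" * 10  # 1-byte and 3-byte characters mixed
chunks = __chunk_text_on_bytes(text, max_chunk_size=16)
assert "".join(chunks) == text  # nothing is lost or reordered
assert all(len(c.encode("utf-8")) <= 16 for c in chunks)  # byte bound holds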


adrianeboyd commented on June 14, 2024

nlp.max_length is not a hard internal constraint, but rather a kind of clunky way to protect users from confusing OOM errors. It was set with the "core" pipelines and a not-especially-new consumer laptop in mind. If you're not actually running out of memory on your system, you can increase it with no worries, especially for simpler tasks like tokenization only.
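
For example (a minimal sketch, assuming the en_core_web_sm pipeline is installed; the generated placeholder text is just illustrative):

import spacy

nlp = spacy.load("en_core_web_sm")
nlp.max_length = 5_000_000  # raise the guard if your machine has the RAM

very_long_text = " ".join(["token"] * 500_000)  # placeholder long input
# For tokenization only, make_doc runs just the tokenizer and skips the
# statistical components, so memory use stays modest even at this length.
doc = nlp.make_doc(very_long_text)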

On the other hand, none of the components in a core pipeline benefit from very long contexts (typically a section, a page, or even a paragraph is sufficient), so splitting up texts is often the best way to go anyway. Very long texts can use a lot of RAM, especially for the parser or NER.
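
A sketch of that approach (the input file name is hypothetical, and splitting on blank lines is just one reasonable heuristic):

import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this core pipeline is installed

with open("big_file.txt", encoding="utf-8") as f:  # hypothetical input file
    long_text = f.read()

# Paragraph-sized pieces are plenty of context for the core components.
paragraphs = [p for p in long_text.split("\n\n") if p.strip()]

# nlp.pipe streams the pieces through the pipeline, keeping peak RAM low.
for doc in nlp.pipe(paragraphs):
    for ent in doc.ents:
        print(ent.text, ent.label_)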

This limit for Japanese is completely separate from nlp.max_length and is coming directly from sudachipy. (I actually hadn't encountered it before.)

Their error message seems fine (much better than an OOM message with a confusing traceback from the middle of the parser), so I don't know if it makes sense for us to add another check in the spacy Japanese tokenizer, which might then get out of sync with the upstream sudachipy constraints in the future.

But you're right that nlp.max_length isn't going to help directly with limiting the length in bytes, unless you set it much lower. But again, a lower limit would probably be fine in practice.
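
One way to turn the character-based nlp.max_length into a hard byte guarantee is to divide the byte limit by four, since a single character encodes to at most 4 bytes in UTF-8 (a sketch; BYTE_LIMIT is a placeholder, not the actual sudachipy value):

import spacy

BYTE_LIMIT = 49_152  # placeholder; substitute the limit your tokenizer reports

# A UTF-8 character is at most 4 bytes, so a character limit of
# BYTE_LIMIT // 4 can never exceed BYTE_LIMIT bytes of encoded input.
nlp = spacy.blank("ja")  # assumes sudachipy is installed for Japanese
nlp.max_length = BYTE_LIMIT // 4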

We'll look at adding this to the documentation!


starlabman commented on June 14, 2024

Existing documentation

"""
...
max_length (int): The maximum allowed length of text for processing.
...
"""

Updated documentation

"""
...
max_length (int): The maximum allowed length of text for processing. The behavior of max_length may vary for different languages. Please refer to the language-specific documentation for more details.
...
"""


adrianeboyd commented on June 14, 2024

Thanks for the suggestion! I think that this description is slightly confusing for users, since nlp.max_length itself will behave the same way for all languages. What we need to highlight is that some individual tokenizers or components, especially those that wrap third-party libraries, may have their own internal length restrictions.

