Comments (4)
Thanks for the explanation, that helped clear up the confusion on my end, and I now know how to proceed for my use case.
In case anyone ever stumbles upon this, here is the code I went with for byte-based splitting (though it probably still has a lot of optimization potential):
# Splits not after x bytes, but ensures that at most x bytes are used
# without destroying the final character.

def __utf8len(s: str) -> int:
    # Assumed helper: length of the string in UTF-8 bytes.
    return len(s.encode("utf-8"))

def __chunk_text_on_bytes(text: str, max_chunk_size: int = 1_000_000):
    if not text:
        return []
    factor = len(text) / __utf8len(text)
    increase_by = int(max(min(max_chunk_size * 0.1, 10), 1))
    initial_size_guess = int(max(max_chunk_size * factor - 10, 1))
    final_list = []
    remaining = text
    while len(remaining):
        part = remaining[:initial_size_guess]
        if __utf8len(part) > max_chunk_size:
            # The initial guess was too large in bytes: shrink it and retry.
            initial_size_guess = int(max(initial_size_guess - min(max_chunk_size * 0.001, 10), 1))
            continue
        cut_after = initial_size_guess
        # Grow the chunk until it reaches the byte limit or covers the rest of the text.
        while __utf8len(part) < max_chunk_size and part != remaining:
            cut_after = min(len(remaining), cut_after + increase_by)
            part = remaining[:cut_after]
        if __utf8len(part) > max_chunk_size:
            # The last growth step overshot the limit, so step back to the previous size.
            cut_after -= increase_by
        final_list.append(remaining[:cut_after])
        remaining = remaining[cut_after:]
    return final_list
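A quick sanity check for the function above (the sample string is arbitrary; any multi-byte text works): every chunk has to stay within the byte budget, and joining the chunks has to give back the original text.

```python
text = "すべての人間は、生まれながらにして自由であり、" * 2_000
chunks = __chunk_text_on_bytes(text, max_chunk_size=50_000)
assert "".join(chunks) == text
assert all(len(c.encode("utf-8")) <= 50_000 for c in chunks)
```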
`nlp.max_length` is not a hard internal constraint, but rather a kind of clunky way to protect users from confusing OOM errors. It was set with the "core" pipelines and a not-especially-new consumer laptop in mind. If you're not actually running out of memory on your system, you can increase it with no worries, especially for simpler tasks like tokenization only.
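For illustration, a minimal sketch of raising it for a tokenization-only job (the blank English pipeline and the numbers here are arbitrary):

```python
import spacy

nlp = spacy.blank("en")       # tokenizer only, no trained components
nlp.max_length = 5_000_000    # fine to raise as long as the machine has the RAM

long_text = "One sentence after another. " * 150_000  # ~4.2M characters
doc = nlp(long_text)
print(len(doc), "tokens")
```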
On the other hand, none of the components in a core pipeline benefit from very long contexts (typically a section or a page or even a paragraph is sufficient), so splitting up texts is often the best way to go anyway. Very long texts can use a lot of RAM, especially for `parser` or `ner`.
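A rough sketch of that route, assuming paragraph-sized pieces split on blank lines (use whatever unit fits your data):

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # any trained pipeline

def process_long_text(text: str):
    # Process paragraph-sized pieces instead of one huge Doc.
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    yield from nlp.pipe(paragraphs, batch_size=50)
```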
This limit for Japanese is completely separate from `nlp.max_length` and comes directly from sudachipy. (I actually hadn't encountered it before.)
Their error message seems fine (much better than an OOM message with a confusing traceback from the middle of the parser), so I don't know whether it makes sense for us to add another check in the spacy Japanese tokenizer, which might then get out of sync with the upstream sudachipy constraints in the future.
But you're right that `nlp.max_length` isn't going to help directly for limiting the length in bytes, well, unless you set it much lower. But again, a lower limit would probably be fine in practice.
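As an illustration, a rough sketch that combines the byte-based chunking from the first comment with a Japanese pipeline; the 49,000-byte budget is only a placeholder, use whatever limit the sudachipy error actually reports for your version:

```python
import spacy

MAX_BYTES = 49_000  # placeholder value; check the limit sudachipy reports

nlp = spacy.blank("ja")  # needs sudachipy and sudachidict_core installed

def tokenize_long_japanese(text: str):
    # Feed the tokenizer byte-bounded chunks so sudachipy never sees too much at once.
    tokens = []
    for chunk in __chunk_text_on_bytes(text, max_chunk_size=MAX_BYTES):
        tokens.extend(tok.text for tok in nlp(chunk))
    return tokens
```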
We'll look at adding this to the documentation!
Existing documentation
"""
...
max_length (int): The maximum allowed length of text for processing.
...
"""
Updated documentation
"""
...
max_length (int): The maximum allowed length of text for processing. The behavior of max_length may vary for different languages. Please refer to the language-specific documentation for more details.
...
Thanks for the suggestion! I think that this description is slightly confusing for users, since `nlp.max_length` itself will behave the same way for all languages. What we need to highlight is that some individual tokenizers or components, especially those that wrap third-party libraries, may have their own internal length restrictions.