Comments (5)
You cannot load a tokenizer.model directly; you need to write a converter.
This is because the file does not come from the tokenizers
library but from either tiktoken
or sentencepiece,
and there is no secret recipe: we have to adapt to the content of the file, which is not super straightforward.
https://github.com/huggingface/transformers/blob/main/src/transformers/convert_slow_tokenizer.py#L544 is the simplest way to understand the process!
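To make the "adapt to the content of the file" point concrete: for a tiktoken-style tokenizer.model (one base64-encoded token plus its rank per line, as used by Llama 3), the BPE merges are not stored in the file at all and have to be reconstructed from the ranks. Below is a rough, stdlib-only sketch of that reconstruction; it is a simplification of what the transformers converter does, and the function names are mine, not a real API:

```python
import base64

def load_tiktoken_ranks(text: str) -> dict[bytes, int]:
    """Parse a tiktoken-style file: one 'base64token rank' pair per line."""
    ranks = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        token_b64, rank = line.split()
        ranks[base64.b64decode(token_b64)] = int(rank)
    return ranks

def recover_merges(ranks: dict[bytes, int]) -> list[tuple[bytes, bytes]]:
    """Reconstruct BPE merges from ranks alone.

    Heuristic: each multi-byte token was produced by merging the split
    whose two halves have the lowest ranks; merge order then follows the
    token's own rank.
    """
    merges = []
    for token, rank in ranks.items():
        if len(token) == 1:
            continue
        # All ways to split this token into two known tokens.
        splits = [
            (token[:i], token[i:])
            for i in range(1, len(token))
            if token[:i] in ranks and token[i:] in ranks
        ]
        if splits:
            left, right = min(splits, key=lambda p: (ranks[p[0]], ranks[p[1]]))
            merges.append((rank, left, right))
    return [(left, right) for _, left, right in sorted(merges)]
```

The recovered vocab and merges are what you would then feed into a `tokenizers` BPE model; the sentencepiece case is different again, which is why there is no single recipe.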
Ok, I understand. Do you know of a way or a library to do this in Rust without reaching for the Python transformers converter?
A library, no, but we should be able to come up with some small Rust code to do this.
@ArthurZucker are there any specifications or example loaders which I can look at to implement this?
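For the sentencepiece case, the closest thing to a specification is `sentencepiece_model.proto` in the sentencepiece repository: tokenizer.model is a serialized `ModelProto`, whose repeated `pieces` field (field 1) holds `SentencePiece` messages with `piece` (field 1, string) and `score` (field 2, float). That (piece, score) list is exactly what a `tokenizers` Unigram model needs. As a rough, dependency-free sketch of reading it (hand-rolled protobuf decoding; function names are mine), assuming the field numbers above:

```python
import struct

def _read_varint(buf: bytes, i: int) -> tuple[int, int]:
    """Decode a protobuf varint at offset i; return (value, next offset)."""
    value, shift = 0, 0
    while True:
        b = buf[i]
        value |= (b & 0x7F) << shift
        i += 1
        if not b & 0x80:
            return value, i
        shift += 7

def _skip(buf: bytes, i: int, wire: int) -> int:
    """Skip one field body of the given protobuf wire type."""
    if wire == 0:  # varint
        return _read_varint(buf, i)[1]
    if wire == 1:  # fixed64
        return i + 8
    if wire == 2:  # length-delimited
        length, i = _read_varint(buf, i)
        return i + length
    if wire == 5:  # fixed32
        return i + 4
    raise ValueError(f"unsupported wire type {wire}")

def read_sentencepiece_vocab(model_bytes: bytes) -> list[tuple[str, float]]:
    """Extract (piece, score) pairs from a serialized ModelProto,
    skipping every field other than `pieces`."""
    vocab = []
    i = 0
    while i < len(model_bytes):
        tag, i = _read_varint(model_bytes, i)
        field, wire = tag >> 3, tag & 7
        if field == 1 and wire == 2:  # one SentencePiece submessage
            length, i = _read_varint(model_bytes, i)
            sub, j = model_bytes[i : i + length], 0
            piece, score = "", 0.0
            while j < len(sub):
                stag, j = _read_varint(sub, j)
                sfield, swire = stag >> 3, stag & 7
                if sfield == 1 and swire == 2:  # piece: string
                    slen, j = _read_varint(sub, j)
                    piece = sub[j : j + slen].decode("utf-8")
                    j += slen
                elif sfield == 2 and swire == 5:  # score: float
                    score = struct.unpack("<f", sub[j : j + 4])[0]
                    j += 4
                else:
                    j = _skip(sub, j, swire)
            vocab.append((piece, score))
            i += length
        else:  # trainer_spec, normalizer_spec, etc.
            i = _skip(model_bytes, i, wire)
    return vocab
```

In practice you would use the generated protobuf bindings rather than hand decoding, but the sketch shows that the file format itself is small; the hard part of a converter is mapping the trainer/normalizer settings onto an equivalent `tokenizers` pipeline.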
I also have the same question, for LLaVA reasons.
Related Issues (20)
- Llama3 tokenizer with Incorrect offset_mapping HOT 2
- Why the tokenizer is slower than tiktoken? HOT 3
- Why are 'unknown' tokens randomly added to my tokenized input? HOT 2
- Error: Cannot find module 'tokenizers/bindings/tokenizer' HOT 1
- Get stats (e.g. counts) about the merged pairs HOT 3
- Convert huggingface tokenizer into sentencepiece format HOT 2
- How to write custom Wordpiece class? HOT 2
- Link to download the training text in `docs/source/quicktour.rst` is broken HOT 2
- Special token handling breaks idempotency of sentencepiece due to extra spaces HOT 4
- Strange warnings with tokenizer for some models HOT 5
- Bug with `CodeQwen1.5`: `data did not match any variant of untagged enum PyPreTokenizerTypeWrapper` HOT 1
- Converting `tokenizers` tokenizers into `tiktoken` tokenizers HOT 4
- How to Batch-Encode Paired Input Sentences with Tokenizers: Seeking Clarification HOT 1
- How to allow the merging of consecutive newline tokens \n when training a byte-level bpe tokenizer? HOT 3
- [BUG]Might be a bug in Unigram Trainer
- Training HuggingFace tokenizer - ignore_merges HOT 1
- "from_pretrained" read wrong config file. not "tokenizer_config.json", but "config.json"
- Memory leak for large strings HOT 1
- Deserializing BPE tokenizer failure HOT 4