Comments (15)

abheesht17 commented on August 22, 2024

Any updates here?

#303

abheesht17 commented on August 22, 2024

Nice! Should I draw up a rough implementation and share a Colab notebook?

mattdangerw commented on August 22, 2024

This is a pretty important feature, as it is widely used and will unlock some important models.

However, there are some technical roadblocks here currently. We would like to keep our tokenizer running inside the TensorFlow graph using TensorFlow ops, and currently the tokenization ops are all provided by tf-text.

tf-text does not offer a BPE tokenizer, but in theory SentencePiece should be configurable in a compatible way. See tensorflow/text#763

The first thing to do would be to see if that is possible: try configuring a SentencePiece tokenizer for tf-text and check whether it can actually match the tokenizers for GPT-2 and RoBERTa (testing against Hugging Face tokenizers would probably be the simplest way to do this). A Colab showing compatibility would "unblock" this work, and if it's not currently possible we may have to apply some fixes to tf-text and SentencePiece.

From there we could produce a design that would essentially hide the complexity of SentencePiece under the hood. We would need to think about the vocab format we provide (a vocab and merges file?).
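For anyone poking at this, here is a minimal sketch (not a proposed keras-nlp API) of what "running inside the TF graph" looks like with tf-text's SentencepieceTokenizer; "m.model" is a placeholder for any serialized SentencePiece model proto:

```python
import tensorflow_text as tf_text

# Load a serialized SentencePiece model proto ("m.model" is a placeholder
# for any model trained with the sentencepiece library).
with open("m.model", "rb") as f:
    proto = f.read()

# The tokenizer runs as regular TensorFlow ops, so it can live in a tf.data
# pipeline or inside a SavedModel.
tokenizer = tf_text.SentencepieceTokenizer(model=proto)
token_ids = tokenizer.tokenize(["The quick brown fox."])
print(token_ids)  # A ragged tensor of int32 token ids.
```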

mattdangerw commented on August 22, 2024

@abheesht17 you are definitely welcome to help with this! It will require some diving into other libraries to understand the support we have today.

abheesht17 commented on August 22, 2024

Great, will do 👍🏼

abheesht17 commented on August 22, 2024

Hey, @mattdangerw. I went through this issue. So, essentially, this is what you want me to do:

  1. Use the SentencePiece library and configure it to train a byte-level BPE tokeniser on a small text corpus.
  2. Take the .model file obtained after training and pass it to TensorFlow Text's SentencePiece tokeniser class.
  3. Train Hugging Face's GPT-2 tokeniser on the same corpus, check whether the vocabulary obtained is similar, and compare the output on a few input samples.

Is this correct?
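A rough sketch of that experiment, assuming a local corpus.txt; the file names and vocab size here are placeholders, not a proposed API:

```python
import sentencepiece as spm
import tensorflow as tf
import tensorflow_text as tf_text
from tokenizers import ByteLevelBPETokenizer

# 1. Train a BPE model with the SentencePiece library on a small corpus.
spm.SentencePieceTrainer.train(
    input="corpus.txt", model_prefix="bpe", vocab_size=1000, model_type="bpe"
)

# 2. Pass the trained .model proto to TensorFlow Text's tokenizer class.
with open("bpe.model", "rb") as f:
    sp_tokenizer = tf_text.SentencepieceTokenizer(model=f.read(), out_type=tf.string)

# 3. Train Hugging Face's byte-level BPE tokenizer on the same corpus and
#    compare the tokens produced for a few samples.
hf_tokenizer = ByteLevelBPETokenizer()
hf_tokenizer.train(files=["corpus.txt"], vocab_size=1000)

sample = "The quick brown fox jumped over the lazy dog."
print(sp_tokenizer.tokenize([sample]))
print(hf_tokenizer.encode(sample).tokens)
```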

mattdangerw commented on August 22, 2024

I'm not sure we actually need to train a SentencePiece model, though that might help in understanding things.

Basically, the public API we can rely on that might give us the op support we need is tf-text's SentencepieceTokenizer, but that takes a SentencePiece model proto as input.

End users will probably want to use this layer with the "vocab json" and "merges txt" files provided by the official GPT/RoBERTa GitHub repos or Hugging Face. We can keep thinking about the file format we would want, but asking end users to construct a SentencePiece model is probably a non-starter.

So, the question we could try to answer is: can we manually construct a SentencePiece model proto from GPT vocab and merges files in a way that's compatible? If so, we could build this layer on top of the existing tf-text API, and not rule out more direct support from tf-text in the future. If not, we will need to go back to the drawing board a little and figure out how to get op-level support here.

So putting that into a list:

  1. Start with the vocab and merges files for, say, GPT-2.
  2. Generate some correct output for some sample text (probably easiest to use Hugging Face here; one could also try the tokenizer implementation from the GPT-2 GitHub repo).
  3. Try building a tf-text SentencepieceTokenizer from those files that matches the real tokenizer output.

It may turn out we are more blocked here than we think by tensorflow/text#763, but this would be the way to find out.
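A sketch of step 2, assuming the GPT-2 vocab.json and merges.txt have been downloaded locally; the idea is just to produce reference output to test any tf-text construction against:

```python
from transformers import GPT2Tokenizer

# Build the reference tokenizer directly from the official vocab/merges files.
hf_tokenizer = GPT2Tokenizer(vocab_file="vocab.json", merges_file="merges.txt")

# Any tf-text SentencepieceTokenizer we manage to construct from the same
# files would need to reproduce these ids exactly, on a wide range of inputs.
reference_ids = hf_tokenizer("The quick brown fox.")["input_ids"]
print(reference_ids)
```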

abheesht17 commented on August 22, 2024

Ah, understood. Thanks for clarifying!

abheesht17 commented on August 22, 2024

Some useful articles about how Hugging Face tokenises the input text (given vocab.json and merges.txt):

huggingface/transformers#1083 (comment)
huggingface/transformers#4777

  1. Tokenise text using merges.txt
  2. Map the tokens to indices using vocab.json
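To make those two steps concrete, here is a heavily simplified sketch of the core BPE loop for a single pre-tokenized word; it ignores GPT-2's byte-level pre-tokenization and the Ġ space marker, so it is illustrative only:

```python
import json

# merges.txt lists merge rules by priority; the first line is a version header.
with open("merges.txt") as f:
    merges = f.read().splitlines()[1:]
ranks = {tuple(line.split()): rank for rank, line in enumerate(merges)}

with open("vocab.json") as f:
    vocab = json.load(f)

def bpe(word):
    # Step 1: tokenise the word by repeatedly applying the highest-priority
    # (lowest-rank) merge rule from merges.txt to an adjacent symbol pair.
    symbols = list(word)
    while len(symbols) > 1:
        pairs = [(symbols[i], symbols[i + 1]) for i in range(len(symbols) - 1)]
        best = min(pairs, key=lambda pair: ranks.get(pair, float("inf")))
        if best not in ranks:
            break  # No applicable merge rule remains.
        i = pairs.index(best)
        symbols = symbols[:i] + ["".join(best)] + symbols[i + 2:]
    return symbols

tokens = bpe("lower")
# Step 2: map the resulting tokens to indices using vocab.json.
ids = [vocab[token] for token in tokens]
```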

abheesht17 commented on August 22, 2024

Hey, @mattdangerw. Sorry for the delay; this slipped my mind. I opened an issue on the SentencePiece repository: google/sentencepiece#739. The author of the repo says: "manual model modification/creation is totally unsupported."

However, it looks like we may be able to add tokens from the vocab to the pieces attribute. I don't think they have Python wrappers/APIs for adding "pieces", but they do have a C++ function, AddPieces. See this unit test: https://github.com/google/sentencepiece/blob/bc53923a9147dc8ffa54034c8ed774de78cc4d39/src/bpe_model_test.cc#L52. I'll try to use this function and reproduce the output we get from HF. Give me a day or two.

aleemkhan62 commented on August 22, 2024

Hi all,

Just curious if anyone has found any sort of workaround for this issue. My conclusion after reading the related issues is that it's not currently possible to incorporate popular BPE tokenizers (RoBERTa/GPT-2) into tensorflow-text pipelines. Is that correct?

chenmoneygithub commented on August 22, 2024

@aleemkhan62 Currently you can use BPE via tf_text.SentencepieceTokenizer only if you have a pretrained model proto. We are looking into a better solution for this! Please stay tuned, thanks!

mattdangerw commented on August 22, 2024

To add a little more color for others finding this issue: you can train a BPE-style vocabulary with sentencepiece today, and a sentencepiece model can be used with TensorFlow Text or with the SentencePieceTokenizer in this library. However, that might not have exactly the same behavior as RoBERTa/GPT-2 tokenization.

We are currently working on a way to support the actual vocabulary files used by RoBERTa/GPT-2 (merges.txt and vocab.json), with exactly equivalent tokenization, running inside the TF graph.
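For example, a proto trained with model_type="bpe" (as sketched earlier in this thread) can be dropped straight into this library's tokenizer; a minimal sketch:

```python
import keras_nlp

# "bpe.model" is a placeholder for a SentencePiece proto trained with
# model_type="bpe"; note the caveat above about exact GPT-2/RoBERTa parity.
tokenizer = keras_nlp.tokenizers.SentencePieceTokenizer(proto="bpe.model")
print(tokenizer("The quick brown fox."))
```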

piEsposito commented on August 22, 2024

Any updates here?

mattdangerw commented on August 22, 2024

Closing this! We have an implementation released -> https://keras.io/api/keras_nlp/tokenizers/byte_pair_tokenizer/

If anyone encounters issues with the tokenizer, please file a bug!
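A quick usage sketch of the released layer, assuming local GPT-2 vocab.json and merges.txt files (see the linked docs for the full API):

```python
import keras_nlp

# The layer consumes the same vocab.json / merges.txt files discussed above
# and runs inside the TF graph.
tokenizer = keras_nlp.tokenizers.BytePairTokenizer(
    vocabulary="vocab.json", merges="merges.txt"
)
print(tokenizer("The quick brown fox."))
```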
