
gpt2-quickly's Introduction

GPT2 Quickly

Build your own GPT2 quickly, without doing a lot of unnecessary work.

Build

This project is based on 🤗 Transformers. This tutorial shows you how to train a GPT2 model for your own language (such as Chinese or Japanese) with a few lines of code, using TensorFlow 2.

You can try this project in Colab right now.

Main files


├── configs
│   ├── test.py
│   └── train.py
├── build_tokenizer.py
├── predata.py
├── predict.py
└── train.py

Preparation

virtualenv

git clone git@github.com:mymusise/gpt2-quickly.git
cd gpt2-quickly
python3 -m venv venv
source venv/bin/activate

pip install -r requirements.txt

Install google/sentencepiece

0x00. Prepare your raw dataset

Here is an example of a raw dataset: raw.txt

0x01. Build vocab

python cut_words.py
python build_tokenizer.py
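
If the spm_train command isn't available on your system, the same vocab can be built from Python with sentencepiece's own trainer. A minimal sketch using recent versions of the sentencepiece package; the input file, model prefix and vocab size here are assumptions, not the project's settings:

import sentencepiece as spm

# Train a BPE tokenizer in-process instead of calling the spm_train CLI.
spm.SentencePieceTrainer.train(
    input="raw.txt",        # assumed input file
    model_prefix="spiece",  # assumed output prefix
    vocab_size=4000,
    model_type="bpe",
)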

0x02. Tokenize

python predata.py --n_processes=2
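
Conceptually, this step encodes the raw text into fixed-length blocks of token ids that the model can train on. A rough sketch of the idea (not the actual predata.py; file names and block size are assumptions):

import numpy as np
import sentencepiece as spm

# Encode the whole corpus, then slice it into fixed-length training blocks.
sp = spm.SentencePieceProcessor(model_file="spiece.model")
ids = sp.encode(open("raw.txt", encoding="utf-8").read())

block_size = 512
blocks = [ids[i:i + block_size] for i in range(0, len(ids) - block_size + 1, block_size)]
np.save("dataset.npy", np.array(blocks, dtype=np.int32))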

0x03. Train

python train.py
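
Under the hood this amounts to training a TFGPT2LMHeadModel with Keras. A minimal sketch of the idea, not the project's train.py (config sizes, file names and hyperparameters are assumptions); recent versions of 🤗 Transformers compute the LM loss internally when labels are passed, so compile() needs no explicit loss:

import numpy as np
import tensorflow as tf
from transformers import GPT2Config, TFGPT2LMHeadModel

blocks = np.load("dataset.npy")  # token-id blocks from the tokenize step

# A small config for illustration; GPT2 shifts the labels internally,
# so inputs and labels can be the same array.
config = GPT2Config(vocab_size=4000, n_positions=512)
model = TFGPT2LMHeadModel(config)

model.compile(optimizer=tf.keras.optimizers.Adam(3e-5))
model.fit({"input_ids": blocks, "labels": blocks}, batch_size=4, epochs=1)
model.save_pretrained("./model")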

0x04. Predict

python predict.py
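
Generation itself boils down to a single model.generate call. A minimal sketch, not the project's predict.py, assuming the trained model and a compatible tokenizer were saved locally:

from transformers import BertTokenizer, TFGPT2LMHeadModel

tokenizer = BertTokenizer.from_pretrained("./tokenizer")  # assumed path
model = TFGPT2LMHeadModel.from_pretrained("./model")      # assumed path

input_ids = tokenizer.encode("今天天气", return_tensors="tf")  # any prompt
output = model.generate(input_ids, max_length=64, do_sample=True, top_k=50)
print(tokenizer.decode(output[0]))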

0x05. Fine-Tune

ENV=FINETUNE python finetune.py
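
Fine-tuning is the same training loop, just starting from an existing checkpoint instead of a fresh config. A minimal sketch, assuming the published checkpoint and the dataset file from the tokenize step:

import numpy as np
import tensorflow as tf
from transformers import TFGPT2LMHeadModel

blocks = np.load("dataset.npy")
model = TFGPT2LMHeadModel.from_pretrained("mymusise/gpt2-medium-chinese")

model.compile(optimizer=tf.keras.optimizers.Adam(1e-5))  # lower LR for fine-tuning
model.fit({"input_ids": blocks, "labels": blocks}, batch_size=2, epochs=1)
model.save_pretrained("./finetuned")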

gpt2-quickly's People

Contributors

mymusise

gpt2-quickly's Issues

Sentencepiece may not be training properly

Hey there! I have just realised bpe+jieba may not work as well as expected.

Example:

Set vocab_size to 4000 (very close to the minimum vocab_size requirement):

['▁却', '发', '现', ',', '远', '处', '的', '那', '座', '教', '堂', '轰', '然', '倒', '塌', ',', '完', '全', '成', '为', '了', '废', '墟', ',', '记', '忆', '深', '处', '所', '埋', '藏', '的', '那', '些', '东', '西', ',', '如', '同', '潮', '水', '一', '般', '向', '她', '涌', '来', '。']

Set vocab_size to 40000:

['▁却', '发现', ',', '远', '处', '的', '那', '座', '教', '堂', '轰', '然', '倒', '塌', ',', '完', '全', '成', '为', '了', '废', '墟', ',', '记', '忆', '深处', '所', '埋', '藏', '的', '那', '些', '东西', ',', '如', '同', '潮', '水', '一般', '向', '她', '涌', '来', '。']

Compared with using only BPE without jieba (i.e. training on raw.txt), vocab_size = 40000:

['▁', '却发现', ',', '远处', '的那', '座', '教堂', '轰然倒塌', ',', '完全', '成为了', '废墟', ',', '记忆', '深处', '所', '埋藏', '的那些', '东西', ',', '如同', '潮水', '一般', '向她', '涌来', '。']

Alternatively, word+jieba also seems to work:

['▁却', '▁发现', '▁,', '▁远处', '▁的', '▁那座', '▁教堂', '▁轰然', '▁倒塌', '▁,', '▁完全', '▁成为', '▁了', '▁废墟', '▁,', '▁记忆', '▁深处', '▁所', '▁埋藏', '▁的', '▁那些', '▁东西', '▁,', '▁如同', '▁潮水', '▁一般', '▁向', '▁她', '▁涌来', '▁。']
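
For reference, comparisons like the ones above can be reproduced by training sentencepiece with different model_type values on the same text; a sketch, with the input file name as an assumption:

import sentencepiece as spm

# Train one BPE model and one word-level model on the same (jieba-segmented) text.
for model_type in ("bpe", "word"):
    spm.SentencePieceTrainer.train(
        input="cut_words.txt",  # assumed name of the segmented corpus
        model_prefix=f"spiece_{model_type}",
        vocab_size=40000,
        model_type=model_type,
    )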

Sentencepiece alternative

Amazing work! Currently the spm_train call in build_tokenizer doesn't work, because I think it needs a local installation of sentencepiece for the command to be available. Is there a specific reason you chose Google's sentencepiece over just:

import os  # needed for os.path.join below
from tokenizers import SentencePieceBPETokenizer

# `paths` and `configs` come from the project's own setup
tokenizer = SentencePieceBPETokenizer()
tokenizer.train(files=paths, vocab_size=3000, special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.save(os.path.join(configs.data.path, "spiece.model"))

Also, I was wondering: did fast attention work for you?

Huggingface model ValueError

I ran into this problem this morning; it worked fine yesterday. I think the model checkpoint mymusise/gpt2-medium-chinese may have been accidentally overwritten on the Hugging Face hub.

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-2-18d2686f4db3> in <module>()
      3 tokenizer = BertTokenizer.from_pretrained("mymusise/gpt2-medium-chinese")
      4 
----> 5 model = TFGPT2LMHeadModel.from_pretrained("mymusise/gpt2-medium-chinese")

3 frames
<__array_function__ internals> in reshape(*args, **kwargs)

/usr/local/lib/python3.6/dist-packages/numpy/core/fromnumeric.py in _wrapfunc(obj, method, *args, **kwds)
     59 
     60     try:
---> 61         return bound(*args, **kwds)
     62     except TypeError:
     63         # A TypeError occurs if the object does have such a method in its

ValueError: cannot reshape array of size 6160128 into shape (8021,1024)

BTW, since your model checkpoint is the TensorFlow version, I'm wondering whether it can be converted to a PyTorch version?
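
For the PyTorch question: transformers can load TF weights into the PyTorch model class via from_tf=True. A minimal sketch (untested against this particular checkpoint):

from transformers import GPT2LMHeadModel

# Load the TensorFlow checkpoint into the PyTorch class, then save a native copy.
model = GPT2LMHeadModel.from_pretrained("mymusise/gpt2-medium-chinese", from_tf=True)
model.save_pretrained("./gpt2-medium-chinese-pt")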

Questions about training data and configuration

Could you help answer three questions:

1. What corpus was the gpt2-medium model trained on?
2. What GPU is sufficient to train it from scratch?
3. How long did training take?

Thanks in advance!


Error when running python build_tokenizer.py

Hi, after setting up the environment from requirements.txt, I got the following error when running python build_tokenizer.py:

(gpt2) PS C:\Users\qb\Desktop\gpt2-quickly-main> python build_tokenizer.py
'spm_train' is not recognized as an internal or external command, operable program or batch file.
'mv' is not recognized as an internal or external command, operable program or batch file.

How should I fix this? I'm a total beginner, so please bear with me 😢
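
One possible workaround on Windows (an assumption, not an official fix) is to skip the spm_train/mv shell commands and train the tokenizer through the sentencepiece Python API instead, which needs no CLI binaries:

import sentencepiece as spm

# Trains the tokenizer in-process; file names and sizes are assumptions.
spm.SentencePieceTrainer.train(
    input="raw.txt",
    model_prefix="spiece",
    vocab_size=4000,
    model_type="bpe",
)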

Memory usage exceeds 32 GB during finetune

I followed the README from top to bottom (only skipping train.py), running on CPU just to see whether it works end to end. But at the final finetune step, memory usage blew past my 32 GB of RAM. Is there any way to reduce it?
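
One generic way to cut memory, assuming the dataset itself is what fills RAM (an assumption, not a verified diagnosis): memory-map the token blocks and stream them with tf.data instead of loading everything at once:

import numpy as np
import tensorflow as tf

blocks = np.load("dataset.npy", mmap_mode="r")  # memory-mapped, not read into RAM

ds = tf.data.Dataset.from_generator(
    lambda: ({"input_ids": b, "labels": b} for b in blocks),
    output_signature={
        "input_ids": tf.TensorSpec((blocks.shape[1],), tf.int32),
        "labels": tf.TensorSpec((blocks.shape[1],), tf.int32),
    },
).batch(2)
# model.fit(ds, epochs=1) then streams batches instead of holding the whole array.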
