Comments (15)

stefan-it commented on June 22, 2024

Hi @008karan,

you can use SentencePiece, but afterwards you need to convert the SPM vocab to a BERT-compatible one. E.g. you could use the following script:

import sys

sp_vocab = sys.argv[1]

# Static part: BERT's special tokens and 100 unused placeholder slots
bert_vocab = ['[PAD]']
bert_vocab += [f'unused{i}' for i in range(100)]
bert_vocab += ['[UNK]', '[CLS]', '[SEP]', '[MASK]']

# SentencePiece's own special symbols must not be copied into the BERT vocab
sp_special_symbols = ['<unk>', '<s>', '</s>']

with open(sp_vocab, 'rt') as f_p:
    # SentencePiece marks word-initial pieces with a leading "▁", while BERT
    # marks word-internal pieces with "##". Prefixing every piece with "##"
    # and then stripping "##▁" leaves word-initial pieces unprefixed.
    bert_vocab += [("##" + line.split()[0]).replace('##▁', '')
                   for line in f_p
                   if line.split()[0] not in sp_special_symbols]

print("\n".join(bert_vocab))

(The input file is the SPM vocab file; the output, written to stdout, is a BERT-compatible vocab file.)
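For example, assuming the script is saved as spm2bert_vocab.py (the script name is hypothetical) and the SentencePiece vocab file is spm.vocab:

python spm2bert_vocab.py spm.vocab > vocab.txt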

However, I would highly recommend using the Hugging Face Tokenizers library for that.

Here are some code snippets for using the Tokenizers library in order to create a BERT-compatible vocab:

https://github.com/stefan-it/turkish-bert/blob/master/CHEATSHEET.md#cased-model

I used it to create the vocab file for the Turkish BERT model.
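For reference, a minimal sketch along the lines of that cheatsheet (the corpus path, vocab size, and output directory are placeholders):

import os

from tokenizers import BertWordPieceTokenizer

# Cased tokenizer: keep case information and accents
tokenizer = BertWordPieceTokenizer(strip_accents=False, lowercase=False)

# Train a WordPiece vocab on the raw corpus; parameters are illustrative
tokenizer.train(files=["corpus.txt"], vocab_size=32000, min_frequency=2)

# Write vocab.txt into the output directory
os.makedirs("cased_vocab", exist_ok=True)
tokenizer.save_model("cased_vocab")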

stefan-it commented on June 22, 2024

Ah, you should pass `--vocab-file='/content/drive/My Drive/Vocab_dir/vocab.txt'` as the argument (passing only the folder name is not sufficient) :)
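Keeping the other flags from the command shown further down, the full invocation would look like this:

!python build_pretraining_dataset.py \
  --corpus-dir='/content/drive/My Drive/Corpus_dir/' \
  --output-dir='/content/drive/My Drive/tf/' \
  --vocab-file='/content/drive/My Drive/Vocab_dir/vocab.txt' \
  --max-seq-length=128 \
  --do-lower-case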

008karan commented on June 22, 2024

@stefan-it I converted the vocab to BERT's format using your code snippet. Thanks!

One strange thing I found is that my training data was around 22 GB, but the generated pre-training data is only 13 GB. Usually, it should be bigger than the original data.

ddofer commented on June 22, 2024

Would it be possible to provide an example of how to run the vocab/tokenization step in advance on this data, including the expected SentencePiece vocab output?

parmarsuraj99 commented on June 22, 2024

Hugging Face's Tokenizers library worked like a charm.

from tokenizers import BertWordPieceTokenizer

tokenizer = BertWordPieceTokenizer(handle_chinese_chars=False, strip_accents=False, lowercase=False)
tokenizer.train(files="/content/corpus_dir/gu.txt")
# On recent versions of tokenizers, save_model() writes the plain vocab.txt
# into the given (existing) directory; save() writes a single tokenizer JSON instead.
tokenizer.save_model("BertWordPieceTokenizer")

008karan commented on June 22, 2024

@parmarsuraj99 Are you training using the HF library?

parmarsuraj99 commented on June 22, 2024

Yes, I use it to train the tokenizer, and then I use the resulting vocab.txt with build_pretraining_dataset.py. It works.

008karan commented on June 22, 2024

Is HF's ELECTRA pretraining method available? They were going to publish it.

parmarsuraj99 commented on June 22, 2024

For ELECTRA? Maybe not. I tried importing ELECTRA from transformers; maybe they are working on that. But you can still use the BERT tokenizer. It works well.

Sagar1094 commented on June 22, 2024

I created the vocabulary file using the above-mentioned link and am still facing the issue.

Job 0: Creating example writer
Job 0: Writing tf examples
Traceback (most recent call last):
  File "build_pretraining_dataset.py", line 230, in <module>
    main()
  File "build_pretraining_dataset.py", line 218, in main
    write_examples(0, args)
  File "build_pretraining_dataset.py", line 190, in write_examples
    example_writer.write_examples(os.path.join(args.corpus_dir, fname))
  File "build_pretraining_dataset.py", line 143, in write_examples
    example = self._example_builder.add_line(line)
  File "build_pretraining_dataset.py", line 50, in add_line
    bert_tokids = self._tokenizer.convert_tokens_to_ids(bert_tokens)
  File "/content/drive/My Drive/electra-master/model/tokenization.py", line 130, in convert_tokens_to_ids
    return convert_by_vocab(self.vocab, tokens)
  File "/content/drive/My Drive/electra-master/model/tokenization.py", line 91, in convert_by_vocab
    output.append(vocab[item])
KeyError: '[UNK]'

stefan-it commented on June 22, 2024

@Sagar1094, could you check that the path to the vocab file is correct? I've also seen this error message in cases where the path to the vocab file was wrong (when using the build_pretraining_dataset.py script).

Sagar1094 commented on June 22, 2024

Hi, I am using Google Colab to run the build_pretraining_dataset.py script, and the vocab.txt file is placed in Google Drive. I have mounted the drive as well.

Here is the command:

!python build_pretraining_dataset.py \
  --corpus-dir='/content/drive/My Drive/Corpus_dir/' \
  --output-dir='/content/drive/My Drive/tf/' \
  --vocab-file='/content/drive/My Drive/Vocab_dir/' \
  --max-seq-length=128 \
  --do-lower-case

And the output of !ls '/content/drive/My Drive/Vocab_dir/' is:
vocab.txt

Also, the output of !head -20 '/content/drive/My Drive/Vocab_dir/vocab.txt' is:

[PAD]
[UNK]
[CLS]
[SEP]
[MASK]
0
1
2
3
4
5
6
7
8
9
a
b
c
d
e

Let me know if I am missing something. Thanks!

Sagar1094 commented on June 22, 2024

Thanks a lot, I just forgot to include the file name. Silly :)

IssaIssa1 commented on June 22, 2024

Could you please share the tokenizer training and saving script?

elyorman commented on June 22, 2024

(electra) ubuntu@nipa2020-0706:~/EL/electra/electra$ python3 build_openwebtext_pretraining_dataset.py --data-dir DATA_DIR --num-processes 5
Job 0: Creating example writer
Job 1: Creating example writer
Job 2: Creating example writer
Job 3: Creating example writer
Job 4: Creating example writer
Process Process-1:
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/electra/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/home/ubuntu/anaconda3/envs/electra/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "build_openwebtext_pretraining_dataset.py", line 47, in write_examples
    do_lower_case=args.do_lower_case
  File "/home/ubuntu/EL/electra/electra/build_pretraining_dataset.py", line 126, in __init__
    do_lower_case=do_lower_case)
  File "/home/ubuntu/EL/electra/electra/model/tokenization.py", line 116, in __init__
    self.vocab = load_vocab(vocab_file)
  File "/home/ubuntu/EL/electra/electra/model/tokenization.py", line 78, in load_vocab
    token = convert_to_unicode(reader.readline())
  File "/home/ubuntu/anaconda3/envs/electra/lib/python3.7/site-packages/tensorflow_core/python/lib/io/file_io.py", line 179, in readline
    return self._prepare_value(self._read_buf.ReadLineAsString())
  File "/home/ubuntu/anaconda3/envs/electra/lib/python3.7/site-packages/tensorflow_core/python/lib/io/file_io.py", line 98, in _prepare_value
    return compat.as_str_any(val)
  File "/home/ubuntu/anaconda3/envs/electra/lib/python3.7/site-packages/tensorflow_core/python/util/compat.py", line 123, in as_str_any
    return as_str(value)
  File "/home/ubuntu/anaconda3/envs/electra/lib/python3.7/site-packages/tensorflow_core/python/util/compat.py", line 93, in as_text
    return bytes_or_text.decode(encoding)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd7 in position 0: invalid continuation byte

(Processes 2-5 fail with the same UnicodeDecodeError.)

I am getting this error while running build_openwebtext_pretraining_dataset.py --data-dir DATA_DIR --num-processes 5. Can anyone help with this, please?
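The byte 0xd7 at position 0 suggests that one of the input files (the vocab file or a corpus file) is not valid UTF-8. A minimal sketch to locate the offending lines; the file path is a placeholder:

# Scan a file for lines that are not valid UTF-8 (the path is a placeholder)
with open("vocab.txt", "rb") as f:
    for i, raw in enumerate(f, start=1):
        try:
            raw.decode("utf-8")
        except UnicodeDecodeError as e:
            print(f"line {i}: {e}")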
