Comments (15)
Hi @008karan,
you can use SentencePiece, but afterwards you need to convert the SPM vocab to a BERT-compatible one. E.g. you could use the following script:
import sys

sp_vocab = sys.argv[1]

# Static part of the vocab: padding, unused slots and the BERT special tokens
bert_vocab = ['[PAD]']
bert_vocab += [f'unused{i}' for i in range(0, 100)]
bert_vocab += ['[UNK]', '[CLS]', '[SEP]', '[MASK]']

# SentencePiece control symbols that must not be copied over
sp_special_symbols = ['<unk>', '<s>', '</s>']

# SPM marks word-initial pieces with '▁'; BERT instead marks word-internal
# pieces with '##', so prefix every piece with '##' and strip it again for
# pieces that started with '▁'.
with open(sp_vocab, 'rt') as f_p:
    bert_vocab += [("##" + line.split()[0]).replace('##▁', '')
                   for line in f_p
                   if line.split()[0] not in sp_special_symbols]

print("\n".join(bert_vocab))
(The input is the SPM vocab file; the script prints a BERT-compatible vocab file to stdout.)
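For example, assuming the snippet is saved as spm_to_bert_vocab.py (both file names here are just placeholders):
python spm_to_bert_vocab.py spm.vocab > vocab.txt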
However, I would highly recommend using the Hugging Face Tokenizers library for that.
Here are some code snippets for using the Tokenizers library in order to create a BERT-compatible vocab:
https://github.com/stefan-it/turkish-bert/blob/master/CHEATSHEET.md#cased-model
I used it for creating the vocab file for the Turkish BERT model.
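The gist of it looks roughly like this (the parameter values here are only illustrative; check the cheatsheet for the exact settings used for the Turkish model):
from tokenizers import BertWordPieceTokenizer

# Train a cased WordPiece vocab that already contains the BERT special tokens
tokenizer = BertWordPieceTokenizer(
    clean_text=True,
    handle_chinese_chars=False,
    strip_accents=False,
    lowercase=False,
)
tokenizer.train(
    files=["corpus.txt"],
    vocab_size=32000,
    min_frequency=2,
    special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
    wordpieces_prefix="##",
)
tokenizer.save("./")  # older tokenizers versions write vocab.txt here; newer ones use save_model()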
from electra.
Ah, you should pass `--vocab-file='/content/drive/My Drive/Vocab_dir/vocab.txt'` as an argument (using the folder name only is not sufficient) :)
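For example, with the paths from the Colab snippet below, the full call would be something like:
!python build_pretraining_dataset.py \
  --corpus-dir='/content/drive/My Drive/Corpus_dir/' \
  --output-dir='/content/drive/My Drive/tf/' \
  --vocab-file='/content/drive/My Drive/Vocab_dir/vocab.txt' \
  --max-seq-length=128 \
  --do-lower-case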
from electra.
@stefan-it I changed the vocab to BERT's format using your code snippet. Thanks!
One strange thing I found is that my training data was around 22 GB, but the generated pre-training data is only 13 GB. Usually it should be bigger than the original data.
from electra.
Would it be possible to provide an example of how to run the vocab/tokenizing in advance on this data, including the expected output sentencepiece vocab?
from electra.
Hugging Face's Tokenizers library worked like a charm.
from tokenizers import BertWordPieceTokenizer

# Train a cased WordPiece tokenizer on the raw corpus
tokenizer = BertWordPieceTokenizer(handle_chinese_chars=False, strip_accents=False, lowercase=False)
tokenizer.train(files="/content/corpus_dir/gu.txt")

# Save the learned vocab; the resulting vocab.txt is then passed to build_pretraining_dataset.py (see below)
tokenizer.save("BertWordPieceTokenizer")
from electra.
@parmarsuraj99 Are you training using the HF library?
from electra.
Yes, to train the tokenizer, and then I use the vocab.txt in build_pretraining_dataset.py. It works.
from electra.
Is the HF ELECTRA pretraining method available? They were going to publish it.
from electra.
For ELECTRA? Maybe not. I tried importing Electra from transformers; maybe they are working on that. But you can still use the BERT tokenizer. It works well.
from electra.
I created the vocabulary file using the above-mentioned link and am still facing the issue.
Job 0: Creating example writer
Job 0: Writing tf examples
Traceback (most recent call last):
  File "build_pretraining_dataset.py", line 230, in <module>
    main()
  File "build_pretraining_dataset.py", line 218, in main
    write_examples(0, args)
  File "build_pretraining_dataset.py", line 190, in write_examples
    example_writer.write_examples(os.path.join(args.corpus_dir, fname))
  File "build_pretraining_dataset.py", line 143, in write_examples
    example = self._example_builder.add_line(line)
  File "build_pretraining_dataset.py", line 50, in add_line
    bert_tokids = self._tokenizer.convert_tokens_to_ids(bert_tokens)
  File "/content/drive/My Drive/electra-master/model/tokenization.py", line 130, in convert_tokens_to_ids
    return convert_by_vocab(self.vocab, tokens)
  File "/content/drive/My Drive/electra-master/model/tokenization.py", line 91, in convert_by_vocab
    output.append(vocab[item])
KeyError: '[UNK]'
from electra.
@Sagar1094 could you check that the path to the vocab file is correct? I've also seen this error message in cases where the path to the vocab file was not correct (when using the build_pretraining_dataset.py script).
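A quick sanity check (just a sketch) is to make sure the argument points at the vocab.txt file itself and that the special tokens are actually in it:
# Hypothetical check: the flag must point at the file, not at the folder
vocab_path = '/content/drive/My Drive/Vocab_dir/vocab.txt'
with open(vocab_path, encoding='utf-8') as f:
    tokens = [line.strip() for line in f]
print('[UNK]' in tokens)  # should print True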
from electra.
Hi, I am using Google colab for running the script build_pretraining_dataset.py and the vocab.txt file is placed in google drive. I have mounted the drive as well.
Here is the code snippet:-
!python build_pretraining_dataset.py \
  --corpus-dir='/content/drive/My Drive/Corpus_dir/' \
  --output-dir='/content/drive/My Drive/tf/' \
  --vocab-file='/content/drive/My Drive/Vocab_dir/' \
  --max-seq-length=128 \
  --do-lower-case
And output of !ls '/content/drive/My Drive/Vocab_dir/'
vocab.txt
Also the output of !head -20 '/content/drive/My Drive/Vocab_dir/vocab.txt'
[PAD]
[UNK]
[CLS]
[SEP]
[MASK]
0
1
2
3
4
5
6
7
8
9
a
b
c
d
e
Let me know if I am missing out on something. Thanks
from electra.
Thanks a lot, I just forgot to include the file name. Silly :)
from electra.
Could you please share the tokenizer training and saving script?
from electra.
(electra) ubuntu@nipa2020-0706:~/EL/electra/electra$ python3 build_openwebtext_pretraining_dataset.py --data-dir DATA_DIR --num-processes 5
Job 0: Creating example writer
Job 1: Creating example writer
Job 2: Creating example writer
Job 3: Creating example writer
Job 4: Creating example writer
Process Process-1:
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/electra/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/home/ubuntu/anaconda3/envs/electra/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "build_openwebtext_pretraining_dataset.py", line 47, in write_examples
    do_lower_case=args.do_lower_case
  File "/home/ubuntu/EL/electra/electra/build_pretraining_dataset.py", line 126, in __init__
    do_lower_case=do_lower_case)
  File "/home/ubuntu/EL/electra/electra/model/tokenization.py", line 116, in __init__
    self.vocab = load_vocab(vocab_file)
  File "/home/ubuntu/EL/electra/electra/model/tokenization.py", line 78, in load_vocab
    token = convert_to_unicode(reader.readline())
  File "/home/ubuntu/anaconda3/envs/electra/lib/python3.7/site-packages/tensorflow_core/python/lib/io/file_io.py", line 179, in readline
    return self._prepare_value(self._read_buf.ReadLineAsString())
  File "/home/ubuntu/anaconda3/envs/electra/lib/python3.7/site-packages/tensorflow_core/python/lib/io/file_io.py", line 98, in _prepare_value
    return compat.as_str_any(val)
  File "/home/ubuntu/anaconda3/envs/electra/lib/python3.7/site-packages/tensorflow_core/python/util/compat.py", line 123, in as_str_any
    return as_str(value)
  File "/home/ubuntu/anaconda3/envs/electra/lib/python3.7/site-packages/tensorflow_core/python/util/compat.py", line 93, in as_text
    return bytes_or_text.decode(encoding)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd7 in position 0: invalid continuation byte
(Processes 2-5 fail with the same UnicodeDecodeError traceback.)
I am getting this error while running build_openwebtext_pretraining_dataset.py --data-dir DATA_DIR --num-processes 5.
Can anyone help with this, please?
from electra.
Related Issues (20)
- what should i do to extract the electra discriminator HOT 2
- ELECTRA-base fine tuned on MNLI HOT 1
- A possible mistake in the FLOPs calculation of attn_output_layer_norm in the file flops_computation.py
- no module named tensorflow.contrib HOT 1
- Electra Vocabulary HOT 1
- some confusions about paper HOT 1
- Question regarding TrainingsData/Validation Data split
- NumPy Import Error HOT 2
- Train electra with another tokenizer HOT 2
- About the Electra paper
- Optimal Learning Rate and Training Steps for Large Batch Size
- How can I draw this? HOT 1
- Tagging Task Segment ids
- Cannot import trace from tensorflow.python.profiler HOT 4
- sequence tagging tasks fails at metric reporting HOT 1
- Can I used run_mlm.py in transformer for fine-tuning generator(mlm) of electra
- failed to run cuBLAS routine: CUBLAS_STATUS_EXECUTION_FAILED HOT 1
- finetune preprocessing adding padding to the dataset error HOT 1
- How many parameters do discriminator and generator have?
- What is the maximum acceptance for the sentence length for the ELECTRA model?