Comments (2)
Sure, I can clarify briefly. A "processed corpus" is a smaller corpus (typically not the training corpus) that can be fully tokenized and fed through a trained model (say GPT, as you mentioned). The corpus is fed sentence by sentence into GPT for inference, and we save the hidden states and related information for each sentence. This information includes:
- The attention matrix for each input sentence at each head of each layer
- The embedding of each token after each layer
- The "context" (that is, the representation of each token from the perspective of each head, before the linear projection that combines the information from all heads into the embedding for the next layer)
- Linguistic metadata about each token (e.g., part of speech, dependency relations, and other metadata that spaCy provides)
As you can imagine, the HDF5 files that hold all this information can grow quite large for bigger corpora and models. There is a README here that describes the code that performs this task.
Your assumption (pt 2) is correct: since we are not training the model, we don't need to impose any task on GPT, and we do not use any token predicted by GPT. We do, however, keep the attention mask for every token, so that the embeddings for these autoregressive models are built only from information in preceding tokens.
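That causal masking can be demonstrated directly: positions above the diagonal are masked out, so each token attends only to itself and earlier tokens. A minimal illustration in plain PyTorch (my own sketch, not exBERT code):

```python
# Sketch of causal (autoregressive) attention masking.
import torch

seq_len = 5
scores = torch.randn(seq_len, seq_len)  # raw attention scores

# Causal mask: position i may attend only to positions j <= i
mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
scores = scores.masked_fill(~mask, float("-inf"))
attn = torch.softmax(scores, dim=-1)

# Each row is a valid distribution, and no weight falls on future tokens
assert torch.allclose(attn.sum(dim=-1), torch.ones(seq_len))
assert torch.all(attn.triu(diagonal=1) == 0)
```

This is why, in the saved attention matrices for GPT-style models, everything above the diagonal is zero, unlike for bidirectional models such as BERT.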
from exbert.
Thanks a lot for the explanation! It makes sense =)