Comments (5)
initializing bart tokenizer...
creating lightseq model...
Parsing hdf5: /home/sysadmin/downlaod/lightseq_models/lightseq_mbart_base.hdf5
loading 976 MB of embedding weight.
Finish loading src_emb_wei from host to device
loading 1073 MB of embedding weight.
Finish loading trg_emb_wei from host to device
loading 576 MB of encoder weight.
Finish loading enc_wei from host to device
loading 672 MB of decoder weight.
Finish loading dec_wei from host to device
Finish loading all weight from host to device
model config
encoder layers: 12
decoder layers: 12
hidden size: 1024
inner size: 4096
head number: 12
dim per head: 85
src vocab size: 250031
trg vocab size: 250031
is_post_ln: 1
no_scale_embedding: 1
use_gelu: 1
start_id: 2
end_id: 2
padding_id: 1
is_multilingual: 0
generator config
beam size: 4
extra decode length(max decode length - src input length): 50
length penalty: 1
diverse lambda: 0
sampling method: beam_search
topk: 1
topp: 0.75
Traceback (most recent call last):
  File "ls_bart.py", line 102, in <module>
    main()
  File "ls_bart.py", line 69, in main
    ls_model = lsi.Transformer("/home/sysadmin/downlaod/lightseq_models/lightseq_mbart_base.hdf5", 128)
RuntimeError: violate dim_per_head % 2 = 0
Thank you for the new version. I am trying to accelerate the Hugging Face MBart model and successfully got the h5 file, but when I run "python ls_bart.py" I get the error above. Could you please tell me how to solve it?
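For context, the failing check can be reproduced from the printed model config alone. Below is a minimal standalone sketch, not LightSeq's actual source; the note on the head count assumes the public facebook/mbart-large-cc25 config (16 attention heads), not this particular export:

```python
# Standalone arithmetic reproducing the failed constraint; the values come
# from the "model config" block in the log above.
hidden_size = 1024
head_number = 12

dim_per_head = hidden_size // head_number    # 1024 // 12 == 85 (1024 isn't even divisible by 12)
print(dim_per_head, dim_per_head % 2 == 0)   # prints: 85 False

# LightSeq rejects an odd dim_per_head ("violate dim_per_head % 2 = 0").
# facebook/mbart-large-cc25 configures 16 attention heads, which would give
# 1024 // 16 == 64 (even), so a head number of 12 here suggests the export
# picked up the wrong num_attention_heads from the model config.
```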
Currently, it looks like the tool does not support models exceeding 2 GB.
You can check here: #63
@Taka152 Hello, thank you for your reply. I am trying to accelerate the MBart model, but the model size is too large. Could the main branch solve this issue? I noticed some comments about large models there.
I changed the number of encoder/decoder layers of the MBart model to 2 in the config.json file, but the error still exists (byte size exceeds 2 GB). That seems almost impossible for a 2-encoder/2-decoder MBart model. Do you know why? Thank you!
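The sizes in the log above hint at why reducing the layer count doesn't help: the two vocabulary embedding tables dominate the total. A rough estimate, assuming fp32 weights in the export:

```python
# Back-of-the-envelope fp32 size for the two vocabulary embedding tables,
# using the vocab and hidden sizes from the logged model config.
vocab_size = 250031
hidden_size = 1024
bytes_per_param = 4                                      # assuming fp32

one_table = vocab_size * hidden_size * bytes_per_param
print(f"{one_table / 2**20:.0f} MB per table")           # ~977 MB (the log shows 976 MB)
print(f"{2 * one_table / 2**30:.2f} GB for src + trg")   # ~1.91 GB before any layers
```

Even with only 2 encoder and 2 decoder layers, the two 250k-token embedding tables alone sit close to the 2 GB limit, which matches the error persisting after shrinking the layer count.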
I have the same issue
Related Issues (20)
- How to get output scores for each output token of the LightSeq BART model during inference
- Do you consider supporting the chatglm model?
- Usage issue with ls_torch_hf_quant_gpt2_export.py
- lightseq's Transformer expects an extra layer_norm at both the encoder and decoder level
- LLaMA example result verification
- Does lightseq include a GEMM tuning step in its inference pipeline?
- Is llama inference available now? HOT 1
- Do you have plans to support token_type_ids?
- llama inference test HOT 3
- Can lightseq support inference optimization for Segment Anything? HOT 1
- Why does even the provided example have bugs? HOT 4
- [Question] gptj, mpt support.
- question about environment
- how to resolve xlm-roberta conversion failure
- Can int8 be used in pre-training large models?
- Does lightseq support int8 quantization of the CLIP model?
- Is it normal that A10 inference speed is lower than 2080ti? HOT 1
- identifier "__hisnan" is undefined HOT 4
- Requires C++17 HOT 1
- Exception: Installed CUDA version 12.3 does not match the version torch was compiled with 12.1, unable to compile cuda/cpp extensions without a matching cuda version.