Comments (6)
I took a closer look, and many of the parameters in the forward_backward_pipelining_without_interleaving family of functions don't match those in the 23.04 version of Megatron...
The error isn't actually raised here. An error somewhere else in your setup causes execution to fall into the catch branch of the try/catch, and that is what triggers this error.
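(For context, a minimal sketch of how a broad try/except can make the reported error point at the wrong place; the function names below are hypothetical illustrations, not Megatron's actual code:)

```python
import traceback

def load_patched_pipelining():
    # Hypothetical stand-in for importing/calling the pipelining function.
    # Raising here simulates the real failure happening elsewhere,
    # e.g. a Megatron version mismatch.
    raise ImportError("forward_backward_pipelining_without_interleaving signature mismatch")

def run_with_fallback():
    try:
        return load_patched_pipelining()
    except Exception:
        # The error the user sees is raised from this fallback path, not from
        # the code above. Capturing the original traceback before re-raising
        # is what reveals the actual root cause.
        original = traceback.format_exc()
        raise RuntimeError(f"fallback path failed; root cause was:\n{original}")
```

Inspecting the chained message (or the first traceback printed before the final one) shows the upstream failure rather than the catch-branch error.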
from pai-megatron-patch.
Hi, I can run this successfully with Megatron-LM-23.04 and version 0.5.1 of the Patch. The command I used is:
sh run_finetune_megatron_llama.sh dsw /workspace/Megatron-LM-23.04/ /workspace/github/Pai-Megatron-Patch-0.5.1/ 7B 1 1e-5 1e-6 2048 2049 0 fp16 2 1 sel true false false /mnt/llama2-datasets/wudao_train.json /mnt/llama2-datasets/wudao_valid.json /mnt/llama2-ckpts/Llama-2-7b-hf-to-mg-tp2-pp1 2 /mnt/output_llama2
Did you run the checkpoint conversion as the first step?
cd /workspace/github/Pai-Megatron-Patch-0.5.1/toolkits/model_checkpoints_convertor/llama
sh model_convertor.sh /workspace/Megatron-LM-23.04 /mnt/llama2-ckpts/Llama-2-7b-hf /mnt/llama2-ckpts/Llama-2-7b-hf-to-mg-tp2-pp1 2 1 llama2-7b 0 false
Also, you need at least two GPUs.
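(To make the two-GPU requirement concrete: the checkpoint above was converted with tensor parallel size 2 and pipeline parallel size 1, so the number of GPUs must be a multiple of tp * pp. A minimal sketch of that arithmetic, assuming the usual Megatron world-size decomposition, not its actual code:)

```python
def min_gpus_needed(tensor_parallel: int, pipeline_parallel: int) -> int:
    """Smallest world size that can host one full model replica."""
    return tensor_parallel * pipeline_parallel

def data_parallel_size(world_size: int, tensor_parallel: int, pipeline_parallel: int) -> int:
    """Data-parallel degree implied by a world size; fails if it doesn't divide evenly."""
    model_parallel = tensor_parallel * pipeline_parallel
    if world_size % model_parallel != 0:
        raise ValueError(f"world size {world_size} not divisible by tp*pp = {model_parallel}")
    return world_size // model_parallel

# With tp=2, pp=1 (as in the convert command above), 2 GPUs is the minimum.
assert min_gpus_needed(2, 1) == 2
```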
Thanks, I'll give it a run tomorrow.
Thanks. If you run into any problems, you can reach me directly in the DingTalk group.
Related Issues (20)
- Can I train an MoE model with the latest Megatron and Grouped GEMM? I tried the checkpoint you converted but it didn't work; should I be doing something differently? HOT 2
- Megatron-Core-MoE HOT 3
- loss increased when tp > 1 for qwen1_5 continue pretrain HOT 4
- llama3's modeling code uses qwen1.5 HOT 2
- llama3 HOT 1
- Tensor dimension mismatch during llama3 continued pretraining HOT 1
- Does Llava not support resuming training from a saved checkpoint after a mid-training failure?
- Multi-node llama3 pretraining fails with an error HOT 7
- Error training the Qwen1.5-72B model on two A800-40G nodes.
- spelling mistake in example ReadMe HOT 1
- MegaBlocks training
- Llama 3 checkpoint conversion with the latest Megatron code HOT 1
- Are the parameters in the qwen moe training script wrong? Can you provide a correct training script? HOT 1
- Support transformer-engine version after 0.9.0 HOT 1
- Is Qwen-32b supported now? HOT 1
- llama3-8b initial loss is higher than expected HOT 3
- finetune model HOT 1
- [QUESTION] Does converting huggingface-format weights to megatron format affect model evaluation accuracy? HOT 1
- Is the total parameter count printed in the logs wrong? HOT 1
- Question about stage-two training HOT 2