Comments (7)

shibing624 commented on July 27, 2024

Setting peft_path resumes training.
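
For reference, a minimal sketch of what resuming a LoRA run from peft_path generally involves with the peft library; the base model name and adapter path below are placeholders, not the repository's exact code:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Placeholders: substitute your own base model and saved adapter checkpoint.
base_model_name = "your-base-model"
peft_path = "outputs-pt-v1/checkpoint-8000"

# Load the frozen base model.
model = AutoModelForCausalLM.from_pretrained(base_model_name)

# Attach the previously trained LoRA adapter and keep it trainable,
# so a new training run continues from the saved adapter weights.
model = PeftModel.from_pretrained(model, peft_path, is_trainable=True)
```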

charryshi commented on July 27, 2024

Thanks. I tested it with the peft_path parameter added, but there is still a problem; it raises:
File "/home/xxx/miniconda3/lib/python3.8/site-packages/transformers/trainer.py", line 2128, in _load_from_checkpoint
raise ValueError(f"Can't find a valid checkpoint at {resume_from_checkpoint}")
ValueError: Can't find a valid checkpoint at outputs-pt-v1/checkpoint-8000

I have already set
--peft_path ~/MedicalGPT/scripts/outputs-pt-v1/checkpoint-8000, and that directory contains the following files:
adapter_config.json
adapter_model.bin
optimizer.pt
rng_state_0.pth
rng_state_1.pth
rng_state_2.pth
rng_state_3.pth
scaler.pt
scheduler.pt
trainer_state.json
training_args.bin
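
For context, a LoRA checkpoint like the listing above contains only adapter weights (adapter_model.bin), not the full model weights (pytorch_model.bin) that the Trainer's resume_from_checkpoint loader expects, which is presumably why _load_from_checkpoint rejects the directory. A quick way to inspect this (the path is a placeholder):

```python
import os

checkpoint_dir = "outputs-pt-v1/checkpoint-8000"  # placeholder path

# A LoRA-only checkpoint lists adapter_config.json / adapter_model.bin but
# no pytorch_model.bin (or model.safetensors), so the Trainer cannot treat
# it as a full-model checkpoint when resume_from_checkpoint points at it.
print(sorted(os.listdir(checkpoint_dir)))
```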

shibing624 commented on July 27, 2024

With peft_path set, the resume_from_checkpoint logic can be commented out; I'll change that shortly.

shibing624 commented on July 27, 2024

To resume LoRA training, use the peft_path parameter; to resume full-parameter training, use resume_from_checkpoint.
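
A rough sketch of how those two resume paths might be wired up; the helper below and its argument names are illustrative, not the script's actual code:

```python
from typing import Optional
from transformers import Trainer


def start_or_resume(trainer: Trainer,
                    peft_path: Optional[str] = None,
                    last_checkpoint: Optional[str] = None):
    """Start or resume a run depending on how it was checkpointed."""
    if peft_path:
        # LoRA resume: the adapter from peft_path is assumed to already be
        # attached to the model (PeftModel.from_pretrained(..., is_trainable=True)),
        # so training simply starts from those adapter weights.
        return trainer.train()
    # Full-parameter resume: the Trainer restores model weights, optimizer,
    # scheduler, and global step from the checkpoint directory.
    return trainer.train(resume_from_checkpoint=last_checkpoint)
```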

yangcm1986 commented on July 27, 2024

To resume LoRA training, use the peft_path parameter; to resume full-parameter training, use resume_from_checkpoint.

After using peft_path, the log is as follows:
Peft from pre-trained model: /root/autodl-tmp/finetune-sft/outputs-sft-v2/checkpoint-32500

{'loss': 2.1882, 'learning_rate': 1.1111111111111112e-08, 'epoch': 0.0}
{'loss': 0.9015, 'learning_rate': 1.1111111111111112e-07, 'epoch': 0.0}
{'loss': 1.0434, 'learning_rate': 2.2222222222222224e-07, 'epoch': 0.0}
0%| | 22/36000 [02:20<63:07:57, 6.32s/it]

This still looks like it is starting over from the beginning.

shibing624 commented on July 27, 2024

It is continuing training; you can tell from the loss.

SelenoChannel commented on July 27, 2024

To resume LoRA training, use the peft_path parameter; to resume full-parameter training, use resume_from_checkpoint.

After using peft_path, the log still looks like it is starting over from the beginning.

You can pass resume_from_checkpoint=True to trainer.train() to skip the previously completed steps. See huggingface/transformers#24274
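
A hedged sketch of that suggestion; with a transformers/peft combination recent enough to load adapter checkpoints in the Trainer, passing True makes it pick up the latest checkpoint under output_dir and restore the optimizer, scheduler, and step counter, so the progress bar does not restart at 0. The model and dataset names here are placeholders:

```python
from transformers import Trainer, TrainingArguments

# Placeholders: `model` (with its LoRA adapter attached) and `train_dataset`
# are assumed to be defined elsewhere.
args = TrainingArguments(output_dir="outputs-sft-v2", num_train_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)

# True tells the Trainer to find the latest checkpoint in output_dir and
# restore optimizer, scheduler, RNG, and step state, skipping the steps
# that were already completed.
trainer.train(resume_from_checkpoint=True)
```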
