
How to fine tune Codegen? (codegen, open, 18 comments)

smith-co commented on April 26, 2024
How to fine tune Codegen?

from codegen.

Comments (18)

enijkamp commented on April 26, 2024

The converted PyTorch models can be fine-tuned similarly to other causal LMs in HuggingFace.

See tutorials like http://reyfarhan.com/posts/easy-gpt2-finetuning-huggingface/.
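A minimal sketch of that GPT-2-style pattern with the HuggingFace Trainer. The checkpoint name below is the smallest CodeGen model on the hub; the one-sample dataset and the hyperparameters are illustrative placeholders, not a tuned recipe:

```python
# Minimal causal-LM fine-tuning sketch in the GPT-2 style described above.
# The one-sample dataset and hyperparameters are illustrative placeholders.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "Salesforce/codegen-350M-mono"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # CodeGen has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy dataset: each element is one prompt+code string.
texts = ['def add(a, b):\n    """Add two numbers."""\n    return a + b\n']
enc = tokenizer(texts, truncation=True, max_length=128, padding="max_length")

class CodeDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(texts)
    def __getitem__(self, i):
        ids = torch.tensor(enc["input_ids"][i])
        # For causal LMs the labels are the inputs; the model shifts them internally.
        return {"input_ids": ids,
                "attention_mask": torch.tensor(enc["attention_mask"][i]),
                "labels": ids.clone()}

args = TrainingArguments(output_dir="codegen-ft", num_train_epochs=1,
                         per_device_train_batch_size=1, learning_rate=5e-5,
                         report_to=[])
Trainer(model=model, args=args, train_dataset=CodeDataset()).train()
model.save_pretrained("codegen-ft")  # persist the fine-tuned weights
```

Swap in a real data loader and a larger checkpoint as needed; the structure stays the same.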


enijkamp commented on April 26, 2024

@smith-co @thisisanshgupta @tlkh

For torch, I wrote up a minimal example in deepspeed, which can train the 16B model on a ~24 GB GPU. You would need to sanity-test this, optimize the configuration, plug in the data loader, and save the weights to disk:
https://github.com/salesforce/CodeGen/blob/main/jaxformer/hf/train_deepspeed.py

For jax, the training library is undergoing sanity checks on TPU-v3 and should be released soon.
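For context on how a 16B model can fit on a ~24 GB GPU: DeepSpeed's ZeRO stage 3 with CPU offload keeps parameters and optimizer state in host RAM (which is also why host RAM needs are large). A sketch of such a configuration, with illustrative values rather than the exact settings of train_deepspeed.py:

```python
# Illustrative DeepSpeed ZeRO-3 configuration with CPU offload, the kind of
# setup that lets a 16B model train on a single ~24 GB GPU. Values are
# placeholders, not the exact settings used in train_deepspeed.py.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 16,
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
        "offload_param": {"device": "cpu", "pin_memory": True},
    },
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-5}},
}
```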


enijkamp commented on April 26, 2024

@thisisanshgupta @Ontopic Yes, I'm working on the release of my training library for TPU-v3/v4 and will keep you posted.


tlkh commented on April 26, 2024

Hello @enijkamp thank you for your work. Looking forward to some fine-tuning instructions and code.

Currently, I have tried to fine-tune as if it is GPT-2, but I am running into issues where the model's quality degrades significantly.

Is there any particular way the data has to be structured for fine-tuning? Currently, I am just concatenating together the prompts and code as follows:

def xyz():
    """abc"""
    code()

def xyz():
    """abc"""
    code()
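For what it's worth, one common cause of quality degradation when concatenating samples like this is the absence of a separator between independent examples. The usual causal-LM convention (a general practice, not an official CodeGen recipe) is to join samples with the tokenizer's end-of-text token:

```python
# Sketch of packing (prompt, code) samples for causal-LM fine-tuning.
# Independent samples are separated by the tokenizer's end-of-text token so
# the model learns where one example ends. "<|endoftext|>" is the GPT-2-style
# token CodeGen tokenizers use; verify against tokenizer.eos_token.
EOT = "<|endoftext|>"

samples = [
    'def xyz():\n    """abc"""\n    code()\n',
    'def xyz():\n    """abc"""\n    code()\n',
]

training_text = EOT.join(samples) + EOT
```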


zhangybuaa commented on April 26, 2024

@enijkamp I want to fine-tune the model with my own code data. How should I build the dataset? Are there any requirements for the dataset format? Does the data need to be labeled, and if so, in what format? Could you give some guidance or examples? Thanks!


srnsrn120 commented on April 26, 2024

@enijkamp: I want to fine-tune the mono model. Can you please share the dataset format for Python and detailed steps or a notebook?


shmuelhizmi commented on April 26, 2024

+1


TheodoreGalanos commented on April 26, 2024

Would you be releasing training code for the original models? Would be nice to try some on v3s (if possible).


thisisanshgupta commented on April 26, 2024

I think this script might help in finetuning:

https://colab.research.google.com/drive/13dZVYEOMhXhkXWfvSMVM1TTtUDrT6Aeh?usp=sharing#scrollTo=vCPohrZ-CTWu


enijkamp commented on April 26, 2024

@TheodoreGalanos Working on a release of the JAX code. I trained the models on TPU-v4 and have to resolve a blocker for v3.


smith-co commented on April 26, 2024

@enijkamp @thisisanshgupta I am checking the link you have shared.

Still, I think it would greatly help everyone if fine-tuning steps could be provided in the repo. 🙏


Ontopic commented on April 26, 2024

I for one would appreciate any code/directions needed to run things on a TPU-v4. Great work all!


enijkamp commented on April 26, 2024

@smith-co @thisisanshgupta @tlkh @Ontopic @TheodoreGalanos @shmuelhizmi A first release of the training code for TPU-v3/v4 is here:

https://github.com/salesforce/jaxformer


calix commented on April 26, 2024

> @smith-co @thisisanshgupta @tlkh
>
> For torch, I wrote up a minimal example in deepspeed, which can train the 16B model on a ~24 GB GPU. You would need to sanity-test this, optimize the configuration, plug in the data loader, and save the weights to disk: https://github.com/salesforce/CodeGen/blob/main/jaxformer/hf/train_deepspeed.py
>
> For jax, the training library is undergoing sanity checks on TPU-v3 and should be released soon.

Besides the VRAM, how much RAM would be required to train the model?


glicerico commented on April 26, 2024

@enijkamp , or anyone who has used jaxformer to fine-tune on TPU-v4, what is the approximate cost?


enijkamp commented on April 26, 2024

@glicerico Roughly speaking, cost is a function of the size of the model and data. How much data do you have? Which model do you want to fine-tune?
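A rough way to frame that cost question is the common ~6 · N · D estimate of training FLOPs for N parameters and D tokens; the numbers below are purely illustrative, not actual TPU pricing:

```python
# Back-of-envelope training compute using the common ~6 * N * D FLOPs
# estimate (N = parameters, D = training tokens). Illustrative numbers only.
def train_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

# e.g. fine-tuning a 16B-parameter model on 1M tokens:
flops = train_flops(16e9, 1e6)
```

Translating FLOPs into dollars then depends on the accelerator's sustained throughput and hourly rate.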


glicerico commented on April 26, 2024

@enijkamp, trying to reproduce the work by Shin and Van Durme, who used a few hundred (sentence, parse) pairs to fine-tune Codex for semantic parsing. I would like to do this with CodeGen. Seeing your results, I would probably want to fine-tune the 16B model.


watreyoung commented on April 26, 2024

> @glicerico Roughly speaking, cost is a function of the size of the model and data. How much data do you have? Which model do you want to fine-tune?

Is there an easier script template, without DeepSpeed, to fine-tune CodeGen (350M)?
Plus: is the data format the same as for other pre-trained models like CodeT5 or CodeBERT?
Looking forward to the reply.

