
allanj / lomo_for_math

This project is forked from openlmlab/lomo.


LOMO: LOw-Memory Optimization for math word problems

License: MIT


LOMO: LOw-Memory Optimization

This is the implementation for Full Parameter Fine-Tuning for Large Language Models with Limited Resources.

In this work, we propose a new optimizer, LOw-Memory Optimization (LOMO), which fuses the gradient computation and the parameter update in one step to reduce memory usage. Our approach enables the full parameter fine-tuning of a 7B model on a single RTX 3090, or a 65B model on a single machine with 8×RTX 3090, each with 24GB memory.
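
As a rough back-of-the-envelope illustration of why this matters (the numbers below are illustrative arithmetic, not measurements from the paper): a 7B model in fp16 already occupies about 13 GiB, so holding a second, full-size gradient buffer alongside the weights would exceed a 24GB card, whereas fusing the update lets each gradient be freed as soon as it is consumed.

    # Back-of-the-envelope fp16 memory arithmetic for a 7B model (illustrative).
    params = 7e9        # 7B parameters
    bytes_fp16 = 2      # bytes per fp16 value

    weights_gib = params * bytes_fp16 / 2**30   # ~13.0 GiB of weights
    full_grads_gib = weights_gib                # a full gradient buffer is the same size

    print(f"weights only:         {weights_gib:.1f} GiB")                    # fits in 24 GiB
    print(f"weights + full grads: {weights_gib + full_grads_gib:.1f} GiB")   # ~26 GiB, does not fit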

LOMO is integrated with the CoLLiE library (Collaborative Tuning of Large Language Models in an Efficient Way).


Dependencies

torch
deepspeed
transformers
peft
wandb

The minimum dependency is PyTorch; the others are used to reproduce our paper results.

Run the code

We provide code for fine-tuning Large Language Models (LLMs) using three different approaches: LOMO, LoRA, and LoRA + LOMO.

  1. For full parameter fine-tuning using LOMO, the implementation is in src/lomo_trainer.py, and you can run:

     deepspeed --master_port "$port" --include localhost:"$CUDA_VISIBLE_DEVICES" src/train_lomo.py config/args_lomo.yaml

  2. For LoRA and LoRA + LOMO, the implementation is in src/lomo_lora_trainer.py, and you can run:

     deepspeed --master_port "$port" --include localhost:"$CUDA_VISIBLE_DEVICES" src/train_lomo_lora.py config/args_lomo_lora.yaml

In the code, we include a lora_only argument in src/arguments.py, which controls whether to use LoRA alone or LoRA + LOMO. Please note that when lora_only is set to True, the LOMO-related arguments have no effect.
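
A minimal sketch of how such a flag can gate the two modes; the names below are hypothetical illustrations, not the exact ones in src/arguments.py:

    from dataclasses import dataclass

    import torch

    @dataclass
    class LoraArgs:                 # hypothetical stand-in for src/arguments.py
        lora_only: bool = False     # True: train LoRA weights alone, no LOMO
        lora_lr: float = 3e-4
        lomo_lr: float = 1e-3       # has no effect when lora_only is True

    def build_optimizer(args: LoraArgs, lora_params):
        # LoRA weights always get a standard optimizer; the LOMO fused update
        # for the base weights is enabled only when lora_only is False.
        optimizer = torch.optim.AdamW(lora_params, lr=args.lora_lr)
        use_lomo = not args.lora_only
        return optimizer, use_lomo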

We also provide a simple run.sh script for convenience. You can execute the code using the following command:

bash run.sh

For data processing, we currently only provide the six SuperGLUE datasets mentioned in the paper. If you wish to use new datasets, please modify the Dataset and DataCollator accordingly.
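
A minimal sketch of those two pieces for a causal-LM-style dataset; the record fields ("prompt", "answer") and class names are assumptions for illustration, not the repository's exact interfaces:

    import torch
    from torch.utils.data import Dataset

    class MathDataset(Dataset):
        # Hypothetical dataset over {"prompt": ..., "answer": ...} records.
        def __init__(self, records, tokenizer, max_len=512):
            self.records = records
            self.tokenizer = tokenizer
            self.max_len = max_len

        def __len__(self):
            return len(self.records)

        def __getitem__(self, idx):
            r = self.records[idx]
            ids = self.tokenizer(r["prompt"] + r["answer"], truncation=True,
                                 max_length=self.max_len)["input_ids"]
            return {"input_ids": ids}

    class MathDataCollator:
        # Right-pads each batch and masks the padding out of the loss.
        def __init__(self, pad_id):
            self.pad_id = pad_id

        def __call__(self, features):
            width = max(len(f["input_ids"]) for f in features)
            input_ids, attention_mask = [], []
            for f in features:
                n_pad = width - len(f["input_ids"])
                input_ids.append(f["input_ids"] + [self.pad_id] * n_pad)
                attention_mask.append([1] * len(f["input_ids"]) + [0] * n_pad)
            batch = {"input_ids": torch.tensor(input_ids),
                     "attention_mask": torch.tensor(attention_mask)}
            # -100 tells the cross-entropy loss to ignore padded positions.
            batch["labels"] = batch["input_ids"].masked_fill(
                batch["attention_mask"] == 0, -100)
            return batch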

For evaluation, we currently only provide eval_step code for multiple-choice QA and generation tasks. If you have other requirements, please modify the eval_step code in LOMOTrainer or LOMOLoRATrainer accordingly and provide the necessary compute_metrics to the trainer.
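
For example, a compute_metrics for a generation task might look like the following sketch; the (predictions, golds) signature is an assumption about the trainer's interface, not the repository's exact one:

    def compute_metrics(predictions, golds):
        # Exact-match accuracy between decoded predictions and gold answers.
        matches = sum(p.strip() == g.strip() for p, g in zip(predictions, golds))
        return {"exact_match": matches / max(len(golds), 1)}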

Reproduce our results

We provide the sampled datasets used in our experiments here. Due to limited computational resources, we reported the highest results obtained from experiments conducted with the same random seed (42). We acknowledge this limitation in our work and plan to conduct repeated experiments in the next version to address it.

Feel free to raise issues if you have any questions.

Implementation

Hook function

Our implementation relies on injecting hook functions into PyTorch's backward pass. As depicted in the figure, we register a customized hook function for each parameter. When the gradient of a parameter has been computed (but before it is written to the .grad attribute), its corresponding hook function is invoked. For more information about hook functions and the backward pass of the autograd graph, please refer to PyTorch's documentation. In summary, during the backward pass, PyTorch goes through one tensor after another: it computes the gradient via the tensor's grad_fn, writes it into the .grad attribute, and then passes on to the next tensor.

Our customized hook function scans all the parameters, updating a parameter if its .grad attribute is not empty, and then clears and frees the .grad attribute. Since the hook function for a parameter is called before its .grad attribute is set, the .grad attribute of the last parameter in the autograd graph is not ready when the last hook function is invoked. Therefore, we perform an additional scan to update the last parameter.
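
A minimal sketch of this scheme, assuming a plain SGD step (without the gradient normalization and clipping handled in the actual implementation); the function names here are illustrative:

    import torch

    def attach_lomo_hooks(model, lr=1e-3):
        @torch.no_grad()
        def update_and_free():
            # Update every parameter whose gradient is already available,
            # then free the gradient so roughly one gradient is alive at a time.
            for p in model.parameters():
                if p.grad is not None:
                    p.add_(p.grad, alpha=-lr)   # plain SGD step for illustration
                    p.grad = None

        def hook(grad):
            # Fires when this parameter's gradient is computed, before it is
            # written to .grad, so the sweep sees all previously computed
            # gradients but not this one.
            update_and_free()
            return grad

        for p in model.parameters():
            if p.requires_grad:
                p.register_hook(hook)

        # The last parameter's gradient lands after its own hook has fired,
        # so callers must run one extra sweep after backward.
        return update_and_free

After loss.backward(), calling the returned update_and_free() once performs exactly this additional scan, updating the last parameter.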

Citation

@inproceedings{Lv2023FullPF,
  title={Full Parameter Fine-tuning for Large Language Models with Limited Resources},
  author={Kai Lv and Yuqing Yang and Tengxiao Liu and Qi-jie Gao and Qipeng Guo and Xipeng Qiu},
  year={2023}
}

