
We extend CoT data to Alpaca to boost its reasoning ability. We are constantly expanding our collection of instruction-tuning data. The instruction collection can be found at https://huggingface.co/datasets/QingyiSi/Alpaca-CoT/tree/main.

License: Apache License 2.0


Alpaca-CoT

Evolving Alpaca: An Empirical Study on Instruction Tuning for Large Language Models (Alpaca-CoT)

The Chinese README can be found here.

This is the repository for the Evolving Alpaca project, which aims to extensively collect instruction-tuning datasets (especially the CoT datasets) and conduct an in-depth empirical study based on the LLaMA model [1]. "Evolving" describes the continuous expansion of our instruction-tuning data collection, which continuously enhances Alpaca's [2] instruction-following capabilities.

You are warmly welcome to provide us with any instruction-tuning datasets (or their sources) that we have not yet collected. We will format them uniformly, train the Alpaca model (and other LLMs in the near future) on these datasets, open-source the model checkpoints, and conduct extensive empirical studies. We hope that our project can make a modest contribution to the open-sourcing of large language models and lower the threshold for NLP researchers to get started.

News

  • 3.25: To facilitate downloading, all model (LoRA) weights have been uploaded here.
  • 3.25: The Chinese instruction dataset (1M samples, not including the original 0.5M) published by BELLE has been formatted and collected here. (The model will be released later.)

Overview

LLaMA [1] is a great work that demonstrates amazing zero-shot and few-shot abilities. It significantly reduces the cost of training, finetuning, and using competitive large language models, e.g., LLaMA-13B outperforms GPT-3 (175B) and LLaMA-65B is competitive with PaLM-540B. Recently, to boost the instruction-following ability of LLaMA, Stanford Alpaca [2] finetuned LLaMA-7B on 52K instruction-following data generated by the Self-Instruct [3] technique. However, at present, the LLM research community still faces two challenges: (1) even LLaMA still has high requirements for computing resources, and (2) there are not many open-source datasets for instruction finetuning.

To this end, we propose this project, which leverages various subsequently proposed improvements and has the following advantages:

  • This repo contains code, modified from here, which can finetune LLaMA cheaply and efficiently (without performance degradation compared to Stanford Alpaca) by using low-rank adaptation (LoRA) [4], PEFT and bitsandbytes; a minimal sketch of this setup is shown after this list. The 7B, 13B and 30B versions of LLaMA can be easily trained on a single 80GB A100.
  • The models published in this repo significantly improve the CoT (reasoning) capability, using CoT datasets published by FLAN [5].
  • The models published in this repo significantly improve the ability to follow Chinese instructions, with the help of Chinese instruction datasets published by BELLE [6].
  • This repo contains a collection of instruction-finetuning datasets that are continuously collected, which so far includes English, Chinese and CoT instructions. In addition, a collection of checkpoints trained with various instruction datasets is also provided.
  • This repo contains extensive empirical studies and qualitative analysis, which may provide valuable findings and promote the exploration of LLM in the future.
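
The following is a minimal sketch of the LoRA setup mentioned above, using transformers, peft and bitsandbytes. The base checkpoint name, LoRA hyperparameters and target modules are illustrative assumptions, not the exact values used by this repo's finetune.py.

# Minimal LoRA finetuning setup (sketch only; see finetune.py for the actual configuration).
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training

BASE_MODEL = "decapoda-research/llama-7b-hf"  # assumed Hugging Face mirror of LLaMA-7B

# Load the frozen base model in 8-bit (bitsandbytes) so it fits on a single GPU.
model = LlamaForCausalLM.from_pretrained(
    BASE_MODEL, load_in_8bit=True, torch_dtype=torch.float16, device_map="auto"
)
tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)
model = prepare_model_for_int8_training(model)

# Only small rank-decomposition matrices on the attention projections are trained.
lora_config = LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05, bias="none", task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the full model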

To the best of our knowledge, this work is the first to study CoT reasoning based on LLaMA and Alpaca. Therefore, we abbreviate our work to Alpaca-CoT.

[1]: LLaMA: Open and Efficient Foundation Language Models

[2]: Stanford Alpaca: An Instruction-following LLaMA model

[3]: Self-Instruct: Aligning Language Model with Self Generated Instructions

[4]: LoRA: Low-Rank Adaptation of Large Language Models

[5]: FLAN: Scaling Instruction-Finetuned Language Models

[6]: BELLE: Bloom-Enhanced Large Language model Engine

Data Collection

Statistics

[image: data collection statistics]

The current collection of instruction-finetuning datasets consists mainly of three parts:

  • alpaca_data_cleaned.json: about 52K English instruction-following training samples.
  • belle_data_cn.json: about 0.5M Chinese instruction-following training samples.
  • CoT_data.json: 9 CoT datasets involving about 75k samples.

More details on the usage and sources of different datasets can be found here.

Data Download

You can download all the formatted data here. Then put it in the data folder.

Data Formatting

All data in our collection is formatted into the same templates, where each sample is as follows:

[
  {
    "instruction": "instruction string",
    "input": "input string",     # may be empty
    "output": "output string"
  }
]

Note that, for CoT datasets, we first use the template provided by FLAN to convert the original datasets into various Chain-of-Thought forms, and then format them into the template above. The formatting script can be found here.
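
As a quick sanity check, the formatted files can be loaded with Python's standard json module; CoT_data.json below is one of the files from the collection, and the field check simply mirrors the template above.

# Load one of the formatted instruction files and verify it matches the template.
# Adjust the path to wherever you placed the data folder.
import json

with open("data/CoT_data.json", "r", encoding="utf-8") as f:
    samples = json.load(f)

print(f"{len(samples)} samples loaded")
for sample in samples[:3]:
    assert set(sample) == {"instruction", "input", "output"}
    print(sample["instruction"][:80], "->", sample["output"][:80])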

Instruction Finetuning

Setup

pip install -r requirements.txt

Instruction Tuning

Single GPU

## --data
# alpaca-cot: reasoning-enhanced version
# alpaca-belle: Chinese-enhanced version
# alpaca-belle-cot: full-data version 
## --size
# [7, 13, 30, 65]


python3 finetune.py --size 7 --data alpaca-belle-cot

Multiple GPUs

## --data
# alpaca-cot: reasoning-enhanced version
# alpaca-belle: Chinese-enhanced version
# alpaca-belle-cot: full-data version 
## --size
# [7, 13, 30, 65]

python3 -m torch.distributed.launch --nproc_per_node 4  \
    --nnodes=1 --node_rank=0 --master_addr=xxx --master_port=yyy finetune.py  --size 7 --data alpaca-belle-cot

Inference

## --data
# alpaca-cot: reasoning-enhanced version
# alpaca-belle: Chinese-enhanced version
# alpaca-belle-cot: full-data version 
## --size
# [7, 13, 30, 65]

python3 generate.py --size 7 --data alpaca-belle-cot

More details of instruction finetuning and inference can be found here (the repo that our code is modified from). Note that folders named saved-xxx7b are the save paths for the LoRA weights, while the LLaMA weights are automatically downloaded from Hugging Face.
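
For reference, the following is a minimal sketch of how LoRA weights saved by finetuning can be loaded for inference with peft; it is not the exact logic of generate.py, and both the base checkpoint name and the LoRA path are assumptions that depend on your --size and --data options.

# Sketch of loading a base LLaMA checkpoint plus saved LoRA weights for inference.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE_MODEL = "decapoda-research/llama-7b-hf"   # assumed LLaMA-7B mirror on Hugging Face
LORA_WEIGHTS = "./saved-alpaca-belle-cot-7b"   # hypothetical path; use your saved-xxx7b folder

tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)
model = LlamaForCausalLM.from_pretrained(
    BASE_MODEL, load_in_8bit=True, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, LORA_WEIGHTS, torch_dtype=torch.float16)
model.eval()

prompt = "Instruction: List three primary colors.\nResponse:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))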

Quantitative Analysis

Ablation of CoT and Chinese Instructions

[image: ablation-cot]

"w/o CoT" and "w/o CN" denote models that exclude CoT data and Chinese instructions, respectively, from their instruction-finetuning data.

The above table shows two examples (involving numerical calculations) that require a certain amount of reasoning ability to answer correctly. As shown in the middle column, Ours w/o CoT fails to generate the correct response, which shows that when the finetuning data does not contain CoT data, the model's reasoning ability decreases significantly. This further demonstrates that CoT data is essential for LLMs.

[image: ablation-cot]

The above table shows two examples that require the ability to respond to Chinese instructions. As shown in the right column, Ours w/o CN either generates unreasonable content or answers the Chinese instructions in English. This shows that removing Chinese data from finetuning leaves the model unable to handle Chinese instructions, and further demonstrates the need to collect Chinese instruction-finetuning data.

[image: ablation-cot]

The above table shows a relatively difficult example, which requires both accumulated knowledge of Chinese history and the ability to state historical events logically and completely. As shown in this table, Ours w/o CN can only generate a short and erroneous response: due to the lack of Chinese finetuning data, the corresponding knowledge of Chinese history is naturally missing. Although Ours w/o CoT lists some relevant Chinese historical events, its reasoning is self-contradictory, which is caused by the lack of CoT data.

In summary, finetuning on our complete dataset (English, Chinese, and CoT instruction data) significantly improves the model's reasoning ability and its ability to follow Chinese instructions.

The Effect of CoT Data

[image: CoT-comparison]

Samples in odd-numbered rows do not use the CoT prompt (such as "step-by-step reasoning"). Both Ours (w/ CoT) and Alpaca are based on LLaMA-7B, and the only difference between the two is that the instruction-finetuning data of Ours (w/ CoT) includes extra CoT data that Alpaca's does not.

From the above table, we find that:

  • Ours (w/ CoT) always generates the correct rationale before the answer, while Alpaca fails to generate any reasonable rationale, as shown in the first four examples (commonsense questions). This shows that using CoT data for finetuning can significantly improve reasoning ability.
  • For Ours (w/ CoT), the CoT prompt (e.g., concatenating 'step-by-step' with the input question; a minimal sketch follows this list) has little effect on easy examples (e.g., commonsense questions) but a substantial effect on challenging questions that require reasoning, like the last four examples.
  • For Alpaca, the CoT prompt has little effect or even a negative impact. For the last two examples, after adding the CoT prompt, Alpaca changes a correct answer into a wrong one. This may be due to the inconsistency between the input formats used during finetuning and inference.
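
As a small illustration, applying the CoT prompt at inference time can be as simple as appending a reasoning cue to the question; the exact wording used in our evaluation is an assumption here.

# Sketch of applying a CoT prompt to an input question (the phrase below is an assumed example).
def apply_cot_prompt(question: str, use_cot: bool = True) -> str:
    cot_phrase = "Let's think step by step."
    return f"{question} {cot_phrase}" if use_cot else question

print(apply_cot_prompt("If I have 3 apples and eat one, how many are left?"))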

The Effect of Chinese Instruction Data

Quantitative comparison of responses to Chinese instructions.

[image: CN_compare_CN]

Our model is finetuned from a 7B LLaMA on 52K English instructions and 0.5M Chinese instructions. Stanford Alpaca (our reimplementation) is finetuned from a 7B LLaMA on 52K English instructions. BELLE is finetuned from a 7B BLOOM on 2M Chinese instructions.

From the above table, several observations can be found:

  • Compared to Alpaca, ours (w/ CN) has a stronger ability to understand Chinese instructions. For the first example, Alpaca fails to distinguish between the instruction part and the input part, while ours does.
  • Chinese instruction-finetuning data can significantly enhance the ability to interact in Chinese. For the second example, ours (w/ CN) not only provides correct code but also the corresponding Chinese comments, while Alpaca does not. In addition, as shown in the third to fifth examples, Alpaca can only respond to Chinese instructions in English.
  • Compared to BELLE, ours (w/ CN) still needs to improve on instructions that require an open-ended response (as shown in the last two examples). BELLE's outstanding performance on such instructions is due to: (1) its BLOOM backbone encounters much more multilingual data during pre-training; and (2) its Chinese instruction-finetuning data is larger than ours, i.e., 2M vs. 0.5M.

Quantitative comparison of responses to English instructions. The purpose of this subsection is to explore whether finetuning on Chinese instructions has a negative impact on Alpaca.

[image: CN_compare_EN]

From the above table, we find that:

  • Finetuning with Chinese instruction data does not weaken the original English instruction-following ability; on the contrary, it even brings some improvement in generating responses to English instructions. The responses of ours (w/ CN) contain more detail than those of Alpaca; e.g., for the third example, ours (w/ CN) lists three more provinces than Alpaca.

Future Work

  • Exploration of few-shot ability.
  • Ablation study of various sizes of models.
  • Evaluate on instruction-following evaluation suite.
  • Collect more instruction finetuning datasets.

Citation

Please cite the repo if you use the data collection, code, and experimental findings in this repo.

@misc{alpaca-cot,
  author = {Qingyi Si and Zheng Lin},
  school = {Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China},
  title = {Evolving Alpaca: An Empirical Study on Instruction Tuning for Large Language Models},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/PhoebusSi/alpaca-CoT}},
}

For data, please cite the original Stanford Alpaca, BELLE and FLAN papers as well.

For models, please cite the original LLaMA, Stanford Alpaca, Self-Instruct and LoRA papers as well.
