
gptchain


This project has evolved:

...from a command-line application for running Large Language Models (such as OpenAI models and Llama) on custom data, built for the educational YouTube video

...to a framework with LLM fine-tuning and deployment capabilities. It supports a few fine-tuning datasets out of the box; others can be added easily.

The framework utilises LangChain, Unsloth and TRL.

How to use

Clone this repo, then

cd gptchain
pip install -r requirements-train.txt

LLM inference

Using an OpenAI-like JSON string with automatic ChatML conversion (the example prompt asks, in Russian, "What does a neural network consist of?"):

python gptchain.py chat -m ruslandev/llama-3-70b-tagengo \
	--chatml true \
	-q '[{"from": "human", "value": "Из чего состоит нейронная сеть?"}]'
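The ChatML conversion can be illustrated with a short sketch. The `to_chatml` helper below is hypothetical, not the project's actual implementation; it maps the OpenAI-like `from`/`value` pairs onto ChatML role markers:

```python
import json

# Map the "from" field of the OpenAI-like format onto ChatML role names.
ROLE_MAP = {"human": "user", "gpt": "assistant", "system": "system"}

def to_chatml(messages_json: str) -> str:
    """Convert an OpenAI-like JSON message list into a ChatML prompt string."""
    messages = json.loads(messages_json)
    parts = []
    for msg in messages:
        role = ROLE_MAP.get(msg["from"], msg["from"])
        parts.append(f"<|im_start|>{role}\n{msg['value']}<|im_end|>")
    # Leave the prompt open for the assistant's reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml('[{"from": "human", "value": "What does a neural network consist of?"}]')
```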

LLM fine-tune

If you want to upload your model weights to Huggingface after training, create a .env file and define the HF_TOKEN variable in it with your Huggingface token.

You can also put WANDB_API_KEY there to track training metrics in wandb.
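A minimal .env might look like this (both token values are placeholders):

```shell
# .env — read before training starts; values below are placeholders
HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxx
WANDB_API_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```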

Run training session:

python gptchain.py train -m unsloth/llama-3-70b-bnb-4bit \
	--dataset-name tagengo_gpt4 \
	--save-path checkpoints/llama-3-70b-tagengo \
	--huggingface-repo llama-3-70b-tagengo \
	--max-steps 2400

Here, the base model is unsloth/llama-3-70b-bnb-4bit, the dataset is tagengo_gpt4, the final checkpoint will be stored in checkpoints/llama-3-70b-tagengo, and the weights will be uploaded to the Huggingface repo llama-3-70b-tagengo under your namespace. The maximum number of training steps is 2400; you can pass the --num-epochs argument instead to set the number of training epochs.
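The relationship between --max-steps and --num-epochs is simple arithmetic over the effective batch size. The batch-size and gradient-accumulation defaults below are illustrative assumptions, not values taken from the framework:

```python
import math

def steps_for_epochs(num_examples: int, num_epochs: int,
                     batch_size: int = 2, grad_accum: int = 4) -> int:
    """Rough number of optimizer steps needed to cover the dataset num_epochs times."""
    effective_batch = batch_size * grad_accum   # examples consumed per optimizer step
    steps_per_epoch = math.ceil(num_examples / effective_batch)
    return steps_per_epoch * num_epochs

# E.g. a 76,000-example dataset for one epoch at an effective batch of 8:
print(steps_for_epochs(76000, 1))  # 9500
```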

LLM quantization

You can quantize your model and store it in the GGUF format.

python gptchain.py quant -m checkpoints/llama-3-70b-tagengo \
	--method q4_k_m \
	--save-path quants/llama-3-70b-tagengo \
	--huggingface-repo llama-3-70b-tagengo-GGUF

The quantization method used here is q4_k_m. All available options are listed here
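The idea behind 4-bit methods such as q4_k_m can be shown with a toy absmax scheme: scale each block of weights so the largest magnitude maps to the 4-bit range, then round. This is only an illustration of the principle and bears no resemblance to llama.cpp's actual block layout:

```python
def quantize_4bit(weights):
    """Quantize floats to signed 4-bit integers (-8..7) with a shared scale."""
    scale = max(abs(w) for w in weights) / 7  # map the largest magnitude to +/-7
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_4bit(q, scale):
    """Recover approximate float weights from the quantized values."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.7, -0.08]
q, scale = quantize_4bit(weights)
restored = dequantize_4bit(q, scale)
# Each restored weight lands within half a quantization step of the original.
```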

LLM deployment

You can deploy your model to runpod.io with Text Generation Inference (TGI). Change the POD_CONF variable in deploy/config.py:

POD_CONF = {
    "name": "gptchain-0.0.2",
    "image_name": "ghcr.io/huggingface/text-generation-inference:latest",
    "gpu_type_id": "NVIDIA RTX A6000",
    "cloud_type": "SECURE",
    "docker_args": f"--model-id l3utterfly/llama2-7b-layla --num-shard {GPU_COUNT}",
    "gpu_count": GPU_COUNT,
    "volume_in_gb": 50,
    "container_disk_in_gb": 20,
    "ports": "80/http,29500/http",
    "volume_mount_path": "/data",
}

Here you can set your pod name, GPU type (gpu_type_id), and other parameters. To deploy your model with the defined configuration, use this command:

python gptchain.py deploy-model -m ruslandev/llama-3-70b-tagengo
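Once the pod is running, TGI exposes a /generate endpoint that accepts a JSON body with `inputs` and `parameters`. A request can be sketched with the standard library alone; the pod URL below is a placeholder, so substitute your own pod ID:

```python
import json
import urllib.request

def build_tgi_request(base_url: str, prompt: str, max_new_tokens: int = 128):
    """Build a POST request for TGI's /generate endpoint."""
    payload = {"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}}
    return urllib.request.Request(
        f"{base_url}/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# POD_ID is a placeholder for your runpod pod id.
req = build_tgi_request("https://POD_ID-80.proxy.runpod.net", "What is LoRA?")
# response = urllib.request.urlopen(req)        # uncomment against a live pod
# print(json.load(response)["generated_text"])  # TGI returns {"generated_text": ...}
```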

Retrieval Augmented Generation

To run the RAG pipeline on custom data, you first need to deploy an LLM with a TGI server (see the section above).

Prepare your data, say a file mydata.txt.

Then you can use this command:

python gptchain.py rag --inference-url 'https://${POD_ID}-80.proxy.runpod.net' \
	--data-path mydata.txt \
	-q 'What is this text about?'
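The core of the RAG step can be sketched in plain Python: chunk the file, rank chunks by word overlap with the question, and prepend the best chunk to the prompt sent to the LLM. This is a toy retriever for illustration, not the framework's LangChain-based pipeline:

```python
def chunk_text(text: str, size: int = 400) -> list[str]:
    """Split the document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def top_chunk(chunks: list[str], question: str) -> str:
    """Pick the chunk sharing the most words with the question (toy retriever)."""
    q_words = set(question.lower().split())
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))

def build_prompt(context: str, question: str) -> str:
    """Assemble the retrieval-augmented prompt for the LLM."""
    return f"Use the context to answer.\nContext: {context}\nQuestion: {question}\nAnswer:"

doc = "Neural networks consist of layers of neurons. Llamas live in the Andes."
best = top_chunk(chunk_text(doc, size=40), "What do neural networks consist of")
prompt = build_prompt(best, "What do neural networks consist of")
```

A real pipeline would embed chunks with a vector model instead of word overlap, but the chunk-retrieve-prompt flow is the same.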
