Home Page: https://rl-diffusion.github.io

License: MIT License

Denoising Diffusion Policy Optimization

Training code for the paper Training Diffusion Models with Reinforcement Learning. This codebase has been tested on Google Cloud TPUs (v3 for RWR and v4 for DDPO); it has not been tested on GPUs.

UPDATE: We now have a PyTorch implementation that supports GPUs and LoRA for low-memory training here!

The released finetuned models, each specified by a prompt function and a reward (filter field):

| `prompt_fn` | `filter_field` | Weights and Demo |
| --- | --- | --- |
| `imagenet_animals` | `jpeg` | ddpo-compressibility |
| `imagenet_animals` | `neg_jpeg` | ddpo-incompressibility |
| `from_file(assets/common_animals.txt)` | `aesthetic` | ddpo-aesthetic |
| `nouns_activities(assets/common_animals.txt, assets/activities_v0.txt)` | `llava_bertscore` | ddpo-alignment |

Installation

```bash
conda env create -f environment_tpu.yml
conda activate ddpo-tpu
pip install -e .
```

Running DDPO

```bash
python pipeline/policy_gradient.py --dataset compressed-animals
```

The value passed to the --dataset flag can be any of the configs defined in config/base.py. The first config dict, base, defines common arguments that are overridden by the specific configs further down (see the sketch below). Some arguments are shared between methods; DDPO-specific hyperparameters live in the pg field.
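As a rough sketch of that override pattern (the field names and values below are illustrative assumptions, not copied from config/base.py):

```python
# Illustrative sketch of the config-override pattern in config/base.py;
# all field names and values here are assumptions for exposition.
base = {
    "prompt_fn": "imagenet_animals",  # prompt distribution
    "filter_field": "jpeg",           # reward function
    "pg": {                           # DDPO-specific hyperparameters
        "clip": 1e-4,
        "num_inner_epochs": 1,
    },
}

compressed_animals = {
    **base,                  # start from the common arguments...
    "filter_field": "jpeg",  # ...and override per-experiment fields
}
```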

The most important arguments are prompt_fn and filter_field, which define the prompt distribution and reward function, respectively. See training/prompts.py for prompt functions and training/callbacks.py for reward functions.
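For intuition, hypothetical versions of a prompt function and a reward function might look like the following (sketches only; the real signatures in training/prompts.py and training/callbacks.py may differ):

```python
# Hypothetical sketches of a prompt function and a reward function; the
# actual signatures in training/prompts.py and training/callbacks.py may
# differ.
import io
import random
from PIL import Image

def from_file(path):
    """Return a prompt function that samples a random line from a text file."""
    with open(path) as f:
        prompts = [line.strip() for line in f if line.strip()]
    return lambda: random.choice(prompts)

def jpeg_compressibility(image: Image.Image) -> float:
    """Reward images that compress well: the negative JPEG size in kilobytes."""
    buffer = io.BytesIO()
    image.save(buffer, format="JPEG", quality=95)
    return -len(buffer.getvalue()) / 1000.0
```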

Running RWR

For standard RWR, where the weights are a softmax of the rewards:

```bash
bash pipeline/run-rwr.sh
```

For RWR-sparse, where only samples above a certain percentile of the reward distribution are kept and trained on:

```bash
bash pipeline/run-sparse.sh
```

These methods run the outermost training loop in bash rather than Python. They run the pipeline/sample.py script to collect a dataset of samples and rewards, run pipeline/finetune.py to train the model on the most recent dataset, and repeat for some number of iterations. The sampling step and finetuning step have different configs, which are labeled "sample" and "train", respectively, in config/base.py.
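Schematically, the outer loop amounts to the following (a Python sketch of what the bash scripts do; the iteration count and absence of flags are illustrative):

```python
# Sketch of the outer RWR training loop; pipeline/run-rwr.sh implements
# this in bash, and the iteration count here is illustrative.
import subprocess

for iteration in range(10):
    # 1) sample images from the current model and score them with the reward
    subprocess.run(["python", "pipeline/sample.py"], check=True)
    # 2) finetune the model on the most recent dataset of samples and rewards
    subprocess.run(["python", "pipeline/finetune.py"], check=True)
```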

Running LLaVA Inference

LLaVA inference was performed by making HTTP requests to a separate GPU server. See the llava_bertscore reward function in training/callbacks.py for the client-side code, and this repo for the server-side code.
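A hypothetical client-side sketch of that pattern is below; the endpoint URL, payload fields, and response schema are assumptions for illustration, not the actual protocol used by training/callbacks.py:

```python
# Hypothetical client: query a LLaVA server over HTTP, then use BERTScore
# between its answer and the training prompt as the alignment reward.
# The URL, payload fields, and response schema are assumptions.
import requests
from bert_score import score  # pip install bert-score

def llava_alignment_reward(image_bytes: bytes, prompt: str) -> float:
    resp = requests.post(
        "http://localhost:8085/generate",  # assumed server address
        files={"image": ("sample.jpg", image_bytes, "image/jpeg")},
        data={"question": "What is happening in this image?"},
        timeout=60,
    )
    resp.raise_for_status()
    answer = resp.json()["answer"]  # assumed response field
    # BERTScore F1 between the LLaVA answer and the prompt.
    _, _, f1 = score([answer], [prompt], lang="en")
    return float(f1.item())
```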

Reference

```bibtex
@misc{black2023ddpo,
      title={Training Diffusion Models with Reinforcement Learning},
      author={Kevin Black and Michael Janner and Yilun Du and Ilya Kostrikov and Sergey Levine},
      year={2023},
      eprint={2305.13301},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
