
Generate NFTs or train a new model in just a few clicks! Train as much as you can; others will resume from your checkpoint!

Home Page: https://huggingface.co/spaces/huggan/huggingnft

License: Apache License 2.0

Languages: Jupyter Notebook 98.70%, Python 1.30%
Topics: gan, nft, huggan, huggingface

huggingnft's Introduction

Hugging NFT

Banner


Hugging NFT: generate NFTs or train a new model in just a few clicks! Train as much as you can; others will resume from your checkpoint!


Example

🤗 More examples are available here: EXAMPLES.md.

This preview does not show the real power of this project because of the strong reduction in video quality; otherwise the file size would have exceeded all limits.

How to generate

Space

You can easily use the Space: link

Open in Streamlit

Images and Interpolation

Google Colab

Open In Colab

Follow this link: link

Terminal

Image

python huggingnft/lightweight_gan/generate_image.py --collection_name cryptopunks --nrows 8 --generation_type default

Interpolation

python huggingnft/lightweight_gan/generate_interpolation.py --collection_name cryptopunks --nrows 8 --num_steps 100

Python code

Image

from huggingnft.lightweight_gan.train import timestamped_filename
from huggingnft.lightweight_gan.lightweight_gan import load_lightweight_model

collection_name = "cyberkongz"
nrows = 8
generation_type = "default"  # ["default", "ema"]

model = load_lightweight_model(f"huggingnft/{collection_name}")
image_saved_path, generated_image = model.generate_app(
    num=timestamped_filename(),
    nrow=nrows,
    checkpoint=-1,
    types=generation_type
)
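The call returns both the path of the saved image grid and the generated image. A minimal follow-up, assuming the saved file is a standard image format such as PNG (an assumption, not verified against the library):

from PIL import Image  # inspect the saved grid with Pillow

grid = Image.open(image_saved_path)  # path returned by generate_app above
grid.show()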

Interpolation

from huggingnft.lightweight_gan.train import timestamped_filename
from huggingnft.lightweight_gan.lightweight_gan import load_lightweight_model

collection_name = "cyberkongz"
nrows = 1
num_steps = 100

model = load_lightweight_model(f"huggingnft/{collection_name}")
gif_saved_path = model.generate_interpolation(
    num=timestamped_filename(),
    num_image_tiles=nrows,
    num_steps=num_steps,
    save_frames=False
)
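If you work in a notebook, the saved GIF can be previewed inline. A small sketch, assuming a Jupyter or Colab environment:

from IPython.display import Image as IPythonImage  # notebook-only preview

IPythonImage(filename=gif_saved_path)  # path returned by generate_interpolation above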

How to train

You can easily add a new model for any OpenSea collection. Note that it is important to collect the dataset before training; check the corresponding section.

Google Colab

Open In Colab

Follow this link: link

Terminal

You can now run the script as follows:

accelerate config

=> Accelerate will ask what kind of environment you'd like to run your script on; simply answer the questions being asked. Next:

accelerate launch huggingnft/lightweight_gan/train.py \
  --wandb \
  --image_size 256 \
  --num_train_steps 10000 \
  --save_every 1000 \
  --dataset_name huggingnft/cyberkongz \
  --push_to_hub \
  --name cyberkongz \
  --organization_name huggingnft
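Since --push_to_hub uploads the checkpoint under the given organization and model name, the trained model can afterwards be loaded back with the same helper used in the generation examples above:

from huggingnft.lightweight_gan.lightweight_gan import load_lightweight_model

# organization_name/name as passed to the training command above
model = load_lightweight_model("huggingnft/cyberkongz")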

Collection2Collection

The collection2collection framework lets you create unpaired image-translation models between any pair of NFT collections that can be downloaded from OpenSea. In the broadest sense, it applies the style of one collection to another, so as to obtain new and diverse collections of never-before-seen NFTs.
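For intuition, the underlying CycleGAN objective combines a cycle-consistency term with an identity term, weighted by the lambda_cyc and lambda_id flags used in the training command below. A minimal sketch (illustrative only, not the repository's actual training loop; G_AB and G_BA stand for the two generators):

import torch.nn.functional as F

def cycle_losses(G_AB, G_BA, real_A, real_B, lambda_cyc=10.0, lambda_id=5.0):
    # Translate across domains and back again.
    fake_B = G_AB(real_A)
    fake_A = G_BA(real_B)
    # Cycle consistency: A -> B -> A should reproduce the input.
    loss_cyc = F.l1_loss(G_BA(fake_B), real_A) + F.l1_loss(G_AB(fake_A), real_B)
    # Identity: images already in the target domain should pass through unchanged.
    loss_id = F.l1_loss(G_AB(real_B), real_B) + F.l1_loss(G_BA(real_A), real_A)
    return lambda_cyc * loss_cyc + lambda_id * loss_id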

Jupyter notebook

The training procedure is provided in simplified form in the Jupyter notebook train_cyclegans.ipynb.

There, hyperparameter optimization is available by adding multiple values to each list of hyperparameters shown in the notebook. A section of the notebook is also dedicated to training all possible translations, using the datasets provided on the huggingnft organization page.

Google Colab

Open In Colab

Terminal

First, after cloning this repository, run:

cd huggingnft
pip install .

Then, if you wish to log all results to wandb, set your wandb API key with:

wandb login API_KEY

If you plan on uploading the resulting models to a Hugging Face repository, make sure to also log in with your Hugging Face API key using the following command:

huggingface-cli login 

Before starting the model training, you need to configure the accelerate environment for your available computing resources with the command:

accelerate config

After this, everything is set up to start training the collection2collection models:

accelerate launch --config_file ~/.cache/huggingface/accelerate/default_config.yaml \
        train.py \
        --batch_size 8 \
        --beta1 0.5 \
        --beta2 0.999 \
        --channels 3 \
        --checkpoint_interval 5 \
        --decay_epoch 80 \
        --epoch 0 \
        --image_size 256 \
        --lambda_cyc 10.0 \
        --lambda_id 5.0 \
        --lr 0.0002 \
        --mixed_precision no \
        --model_name cyclegan \
        --n_residual_blocks 9 \
        --num_epochs 200 \
        --num_workers 8 \
        --organization_name huggingnft \
        --push_to_hub \
        --sample_interval 10 \
        --source_dataset_name huggingnft/azuki \
        --target_dataset_name huggingnft/boredapeyachtclub \
        --wandb \
        --output_dir experiments

Generate collection2collection examples

Head to the huggingnft cyclegan subfolder and use the generate.py script to create NFTs with the collection2collection models at huggingNFT.

Generate one NFT

To generate num_images NFTs (here a single one) as outputs of the generation + translation pipeline, run:

python3 generate.py --choice generate \
    --num_tiles 1 \
    --num_images 1 \
    --format png

Generate multiple NFTs

To generate a GIF containing pairs of generated NFTs and their corresponding translations (neither exists; both are GAN predictions), run the command below. Set --format png to save each image separately instead of a condensed GIF.

python3 generate.py \
    --choice generate \
    --num_tiles 1 \
    --num_images 100 \
    --format gif \
    --pairs

Visualize multiple NFTs: side-by-side comparison of generated images and their resulting translations

This command generates a GIF containing pairs of generated NFTs and their corresponding translations (neither exists; both are GAN predictions). It lets you observe how continuous changes in the latent space of the upstream GAN, which generates the samples, affect the subsequent CycleGAN translation.

Set --format png to save each image separately instead of a condensed GIF.

python3 generate.py --choice interpolate \
    --num_tiles 16 \
    --num_images 100 \
    --format gif \
    --pairs

Visualize multiple NFTs, only the resulting translation

Remove the --pairs argument to visualize only the result of the translation:

python3 generate.py --choice interpolate \
    --num_tiles 16 \
    --num_images 100 \
    --format gif

Collect dataset

Because OpenSea usually blocks API connections, we use Selenium to parse the data. So first download chromedriver from here and pass the corresponding path:

python huggingnft/datasets/collect_dataset.py --collection_name cyberkongz --use_selenium --driver_path huggingnft/datasets/chromedriver
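Collected collections are pushed to the Hub as regular datasets (the training command above expects them under the huggingnft organization), so you can inspect one like any other dataset:

from datasets import load_dataset  # standard Hugging Face datasets API

dataset = load_dataset("huggingnft/cyberkongz")
print(dataset["train"][0])  # one NFT sample with its image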

Model overfitting

There is a possibility that you overtrain the model. In that case you can revert to the best commit with this notebook: link

Open In Colab
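If you prefer not to rewrite the repository history, an alternative (a workflow suggestion, not part of the notebook) is to pin the best-performing commit when downloading the model:

from huggingface_hub import snapshot_download

path = snapshot_download(
    repo_id="huggingnft/cyberkongz",           # example collection
    revision="COMMIT_SHA_OF_BEST_CHECKPOINT",  # hypothetical placeholder
)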

With great power comes great responsibility!

About

Built by Aleksey Korshuk, Christian Cancedda and the Hugging Face community with love ❤️


Star project repository:

GitHub stars

huggingnft's People

Contributors

alekseykorshuk, chris1nexus


huggingnft's Issues

Bug when training collection2collection model

Hello,

First of all, thanks for this project; it's a fun way to discover GANs!

When launching the training for the collection2collection model from the CLI, I have the following error:

Reusing dataset parquet (C:\Users\33667\.cache\huggingface\datasets\maurya___parquet\maurya--gc-d26997f7b4f4065f\0.0.0\7328ef7ee03eaf3f86ae40594d46a1cec86161704e02dd19f232d81eee72ade8)
100%|████████████████████████████████████████| 1/1 [00:00<00:00,  4.90it/s]
Using custom data configuration maurya--gc-d26997f7b4f4065f
Reusing dataset parquet (C:\Users\33667\.cache\huggingface\datasets\maurya___parquet\maurya--gc-d26997f7b4f4065f\0.0.0\7328ef7ee03eaf3f86ae40594d46a1cec86161704e02dd19f232d81eee72ade8)
100%|████████████████████████████████████████| 1/1 [00:00<00:00, 36.97it/s]
Starting training
Traceback (most recent call last):
  File "C:\Users\33667\Documents\huggingnft\huggingnft\cyclegan\train.py", line 470, in <module>
    main()
  File "C:\Users\33667\Documents\huggingnft\huggingnft\cyclegan\train.py", line 466, in main
    training_function({}, args)
  File "C:\Users\33667\Documents\huggingnft\huggingnft\cyclegan\train.py", line 225, in training_function
    for i, (source_batch, target_batch) in enumerate(zip(loaders['train']['source'], loaders['train']['target']) ):
  File "C:\Users\33667\anaconda3\envs\nft\lib\site-packages\accelerate\data_loader.py", line 301, in __iter__
    for batch in super().__iter__():
  File "C:\Users\33667\anaconda3\envs\nft\lib\site-packages\torch\utils\data\dataloader.py", line 438, in __iter__
    return self._get_iterator()
  File "C:\Users\33667\anaconda3\envs\nft\lib\site-packages\torch\utils\data\dataloader.py", line 384, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\33667\anaconda3\envs\nft\lib\site-packages\torch\utils\data\dataloader.py", line 1048, in __init__
    w.start()
  File "C:\Users\33667\anaconda3\envs\nft\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Users\33667\anaconda3\envs\nft\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\33667\anaconda3\envs\nft\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "C:\Users\33667\anaconda3\envs\nft\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\33667\anaconda3\envs\nft\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'training_function.<locals>.transforms'
Traceback (most recent call last):
  File "C:\Users\33667\anaconda3\envs\nft\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\33667\anaconda3\envs\nft\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\33667\anaconda3\envs\nft\Scripts\accelerate.exe\__main__.py", line 7, in <module>
  File "C:\Users\33667\anaconda3\envs\nft\lib\site-packages\accelerate\commands\accelerate_cli.py", line 43, in main
    args.func(args)
  File "C:\Users\33667\anaconda3\envs\nft\lib\site-packages\accelerate\commands\launch.py", line 568, in launch_command
    simple_launcher(args)
  File "C:\Users\33667\anaconda3\envs\nft\lib\site-packages\accelerate\commands\launch.py", line 250, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['C:\\Users\\33667\\anaconda3\\envs\\nft\\python.exe', 'train.py', '--batch_size', '8', '--beta1', '0.5', '--beta2', '0.999', '--channels', '3', '--checkpoint_interval', '5', '--decay_epoch', '80', '--epoch', '0', '--image_size', '256', '--lambda_cyc', '10.0', '--lambda_id', '5.0', '--lr', '0.0002', '--mixed_precision', 'no', '--n_residual_blocks', '9', '--num_epochs', '200', '--num_workers', '8', '--sample_interval', '10', '--source_dataset_name', 'maurya/gc', '--target_dataset_name', 'maurya/gc', '--output_dir', 'experiments']' returned non-zero exit status 1.
(nft) PS C:\Users\33667\Documents\huggingnft\huggingnft\cyclegan> Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\33667\anaconda3\envs\nft\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\Users\33667\anaconda3\envs\nft\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

The traceback is unclear and I can't debug it. I also tried with your datasets and got the same issue. Running the lightweight GAN works perfectly.

Thanks in advance for your answer.
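This failure is a general Windows pattern rather than something specific to this repository: Windows starts DataLoader workers with spawn, which must pickle the dataset transform, and a function defined inside training_function cannot be pickled. Two common workarounds (suggestions, not an official fix) are passing --num_workers 0 to the training command, or defining the transform at module level, as sketched here with illustrative names:

from torchvision import transforms as T

# Module-level transform: picklable, so spawned worker processes can import it.
transform = T.Compose([T.Resize(256), T.CenterCrop(256), T.ToTensor()])

# Alternatively, num_workers=0 keeps data loading in the main process
# and avoids pickling entirely (slower, but works on Windows).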

Can't pickle local object 'CVDB.<locals>.transform_center_crop'

(pygpucuda) D:\python\Deep CS\DDN-master>python sr_test.py
Traceback (most recent call last):
  File "sr_test.py", line 50, in <module>
    for data in dataloader:
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\site-packages\torch\utils\data\dataloader.py", line 438, in __iter__
    return self._get_iterator()
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\site-packages\torch\utils\data\dataloader.py", line 384, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\site-packages\torch\utils\data\dataloader.py", line 1048, in __init__
    w.start()
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'CVDB.<locals>.transform_center_crop'

(pygpucuda) D:\python\Deep CS\DDN-master>Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\multiprocessing\spawn.py", line 125, in _main
    prepare(preparation_data)
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\multiprocessing\spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\multiprocessing\spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\runpy.py", line 265, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "D:\python\Deep CS\DDN-master\sr_test.py", line 50, in <module>
    for data in dataloader:
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\site-packages\torch\utils\data\dataloader.py", line 438, in __iter__
    return self._get_iterator()
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\site-packages\torch\utils\data\dataloader.py", line 384, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\site-packages\torch\utils\data\dataloader.py", line 1048, in __init__
    w.start()
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\multiprocessing\popen_spawn_win32.py", line 45, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\multiprocessing\spawn.py", line 154, in get_preparation_data
    _check_not_importing_main()
  File "C:\Users\amits\anaconda3\envs\pygpucuda\lib\multiprocessing\spawn.py", line 134, in _check_not_importing_main
    raise RuntimeError('''
RuntimeError:
    An attempt has been made to start a new process before the
    current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.
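The RuntimeError at the end spells out the fix: guard the script's entry point so that spawned worker processes can re-import the module without re-executing it. A minimal sketch of the idiom (main here stands for the script's top-level work):

def main():
    ...  # build the dataloader and iterate over it here

if __name__ == '__main__':
    main()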
