nix-stable-diffusion's People

Contributors

gbtb, jb55, rolfnic, toyo-chi

nix-stable-diffusion's Issues

Torch is not able to use GPU

When attempting to launch the automatic1111 UI, I'm met with the following:

[x@y:~/Code/etc/nix-stable-diffusion/stable-diffusion-webui]$ LD_PRELOAD="/run/opengl-driver/lib/libcuda.so" HSA_OVERRIDE_GFX_VERSION=10.3.0 python3 launch.py
Python 3.10.7 (main, Sep  5 2022, 13:12:31) [GCC 11.3.0]
Commit hash: 737eb28faca8be2bb996ee0930ec77d1f7ebd939
Traceback (most recent call last):
  File "/home/bolt/Code/etc/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 205, in <module>
    prepare_enviroment()
  File "/home/bolt/Code/etc/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 151, in prepare_enviroment
    run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")
  File "/home/bolt/Code/etc/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 57, in run_python
    return run(f'"{python}" -c "{code}"', desc, errdesc)
  File "/home/bolt/Code/etc/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 33, in run
    raise RuntimeError(message)
RuntimeError: Error running command.
Command: "/nix/store/wyhbl43ycqn43d08v5fqj1j6ynf7nz73-python3-3.10.7/bin/python3" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'"
Error code: 1
stdout: <empty>
stderr: /nix/store/lvywargqhfhnmwhpk73zl2qy8qrbx0ql-python3.10-torch-1.12.1/lib/python3.10/site-packages/torch/cuda/__init__.py:83: UserWarning: HIP initialization: Unexpected error from hipGetDeviceCount(). Did you run some cuda functions before calling NumHipDevices() that might have already set an error? Error 101: hipErrorInvalidDevice (Triggered internally at  ../c10/hip/HIPFunctions.cpp:110.)
  return torch._C._cuda_getDeviceCount() > 0
Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

nvidia-smi correctly shows my card from within the same shell:

[x@y:~/Code/etc/nix-stable-diffusion/stable-diffusion-webui]$ nvidia-smi
      
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.53       Driver Version: 525.53       CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:07:00.0  On |                  N/A |
|  0%   44C    P5    22W / 170W |    769MiB / 12288MiB |     16%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

Launching with the torch CUDA test skipped works, but it leads to a plethora of errors while loading or attempting to generate anything; probably the only interesting one is:

/nix/store/lvywargqhfhnmwhpk73zl2qy8qrbx0ql-python3.10-torch-1.12.1/lib/python3.10/site-packages/torch/cuda/__init__.py:83: UserWarning: HIP initialization: Unexpected error from hipGetDeviceCount(). Did you run some cuda functions before calling NumHipDevices() that might have already set an error? Error 101: hipErrorInvalidDevice (Triggered internally at  ../c10/hip/HIPFunctions.cpp:110.)
  return torch._C._cuda_getDeviceCount() > 0
Warning: caught exception 'Unexpected error from hipGetDeviceCount(). Did you run some cuda functions before calling NumHipDevices() that might have already set an error? Error 101: hipErrorInvalidDevice', memory monitor disabled

My system is currently using the beta NVIDIA drivers (525 instead of 520), but I didn't have any better luck after switching back to stable.

Please let me know if there's any further information I can provide, tests to run, etc. to help.
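Since the stack trace shows a HIP (ROCm) error on an NVIDIA card, one useful check is which accelerator backend the torch in this dev shell was actually built for. A minimal diagnostic sketch (not from the original report), using only standard torch attributes:

# torch_backend_check.py -- diagnostic sketch, assuming only the stock torch API
import torch

print("torch version:     ", torch.__version__)
print("built against CUDA:", torch.version.cuda)  # None in ROCm-only builds
print("built against HIP: ", torch.version.hip)   # None in CUDA-only builds
print("device available:  ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device 0:          ", torch.cuda.get_device_name(0))

If torch.version.hip is set and torch.version.cuda is None, the shell is pulling in a ROCm build of torch, which would explain the HIP error despite a working nvidia-smi.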

Error while building webui

$ nix develop .#webui.amd
HEAD ist jetzt bei 737eb28 typo: cmd_opts.embedding_dir to cmd_opts.embeddings_dir
error: Anwendung des Patches fehlgeschlagen: launch.py:86
error: launch.py: Patch konnte nicht angewendet werden
error: Anwendung des Patches fehlgeschlagen: modules/paths.py:19
error: modules/paths.py: Patch konnte nicht angewendet werden
$ # put model checkpoint in place
$ HSA_OVERRIDE_GFX_VERSION=10.3.0 python launch.py
Python 3.10.7 (main, Sep  5 2022, 13:12:31) [GCC 11.3.0]
Commit hash: 737eb28faca8be2bb996ee0930ec77d1f7ebd939
Traceback (most recent call last):
  File "/home/turion/python/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 205, in <module>
    prepare_enviroment()
  File "/home/turion/python/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 179, in prepare_enviroment
    git_clone(stable_diffusion_repo, repo_dir('stable-diffusion'), "Stable Diffusion", stable_diffusion_commit_hash)
  File "/home/turion/python/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 76, in git_clone
    current_hash = run(f'"{git}" -C {dir} rev-parse HEAD', None, f"Couldn't determine {name}'s hash: {commithash}").strip()
  File "/home/turion/python/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 33, in run
    raise RuntimeError(message)
RuntimeError: Couldn't determine Stable Diffusion's hash: 69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc.
Command: "git" -C repositories/stable-diffusion rev-parse HEAD
Error code: 128
stdout: <empty>
stderr: fatal: Kein Git-Repository (oder irgendeines der Elternverzeichnisse): .git

Translation: the patch errors say the patches to launch.py and modules/paths.py could not be applied; the final stderr line means "Not a git repository (or any of the parent directories): .git".

How to enable ROCm support for some AMD GPUs (hipErrorNoBinaryForGpu)

I followed your instructions, but they didn't work:

$ nix develop .#invokeai.amd

$ python scripts/preload_models.py 
/opt/amdgpu/share/libdrm/amdgpu.ids: No such file or directory
/opt/amdgpu/share/libdrm/amdgpu.ids: No such file or directory
"hipErrorNoBinaryForGpu: Unable to find code object for all current devices!"
Aborted (Speicherabzug geschrieben)

Translation: Aborted (core dumped).
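The workaround used elsewhere in this repo for RDNA2 consumer cards that ROCm ships no kernels for is to spoof the architecture with HSA_OVERRIDE_GFX_VERSION=10.3.0; the variable has to be in the environment before HIP initializes. A minimal sketch (illustrative, not taken from this project) that sets the override and checks whether a device becomes visible:

# hsa_override_check.py -- illustrative sketch; exporting the variable in the shell works the same way
import os
os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "10.3.0")  # pretend to be gfx1030 (RDNA2)

import torch  # imported after setting the override so HIP sees it on first initialization

print("HIP runtime:   ", torch.version.hip)
print("device visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device 0:      ", torch.cuda.get_device_name(0))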

THANK YOU!

I've been learning Nix and just found this project. You're awesome. Thank you for this work!

Model weight initialization

Let's track here all the stuff that the models download by themselves; it's mostly model weights. I don't know whether we should pre-download them ourselves, but I certainly want to avoid possible duplicate downloads and place them all in a single folder. (A pre-download sketch follows the webui list below.)

webui

downloads weights lazily, when the feature is first used

Face restoration:
Downloading: "https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth" to /NVME/Projects/nix-stable-diffusion/stable-diffusion-webui/models/GFPGAN/detection_Resnet50_Final.pth
Downloading: "https://github.com/xinntao/facexlib/releases/download/v0.2.2/parsing_parsenet.pth" to /NVME/Projects/nix-stable-diffusion/stable-diffusion-webui/models/GFPGAN/parsing_parsenet.pth
Downloading: "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth" to /NVME/Projects/nix-stable-diffusion/stable-diffusion-webui/models/GFPGAN/GFPGANv1.4.pth


Extras:
Downloading: "https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth" to /NVME/Projects/nix-stable-diffusion/stable-diffusion-webui/models/Codeformer/codeformer-v0.1.0.pth
Downloading: "https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth" to /NVME/Projects/nix-stable-diffusion/stable-diffusion-webui/models/facelib/weights/facelib/detection_Resnet50_Final.pth
Downloading: "https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/parsing_parsenet.pth" to /NVME/Projects/nix-stable-diffusion/stable-diffusion-webui/models/facelib/weights/facelib/parsing_parsenet.pth
Downloading: "https://github.com/cszn/KAIR/releases/download/v1.0/ESRGAN.pth" to /NVME/Projects/nix-stable-diffusion/stable-diffusion-webui/models/ESRGAN/ESRGAN_4x.pth

InvokeAI

TODO: run preload_models.py again

SD

SD checkpoints are currently downloaded manually.
Personally, I grabbed them from a torrent because I didn't want to register on whatever sketchy site they used for hosting them. Is there a fetchFromMagnet in nixpkgs?
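Since the checkpoint arrives out of band (torrent or manual download), verifying it against a published sha256 before wiring it into the flake is a cheap sanity check. A minimal sketch; the file name and expected hash are placeholders, not values from this project:

# verify_checkpoint.py -- sketch; CHECKPOINT and EXPECTED_SHA256 are placeholders
import hashlib
import sys

CHECKPOINT = "sd-v1-4.ckpt"                  # placeholder file name
EXPECTED_SHA256 = "<published sha256 here>"  # placeholder hash

h = hashlib.sha256()
with open(CHECKPOINT, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)

digest = h.hexdigest()
print(f"{CHECKPOINT}: {digest}")
sys.exit(0 if digest == EXPECTED_SHA256 else 1)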

Update InvokeAI to 2.3.5 (LoRA support!)

Can't believe that no one else is interested in updating InvokeAI in this great flake to at least v2.3.5, which contains such significant changes as LoRA support!

Updating/adding dependencies

Hi,

I'm trying to update the automatic1111 UI to the latest commit. This should provide nice benefits like SD 2.0 support, a sizeable reduction in VRAM use, and a reported inference speed increase of up to 1.5x from the transformers library.

I'm more than happy to try taking on the packaging and updating of additional libraries here myself, but the pynixify documentation is quite sparse, and it doesn't understand the git+... syntax, so I don't understand how you achieved packaging k-diffusion, for example (obviously the Nix file itself is simple enough, but I'd like to follow the correct workflow).

Would you be able to provide any guidance on your preferred way to achieve this with pynixify? I'm happy to figure out the rest of the bump, and I intend to continue updating for my own use as long as SD has my interest :)

Also, sorry if it's inappropriate to be making an issue for this - I did try to find you on some community hubs but without luck.

Thanks!

Tries to build something with pip ("No module named pip")

After applying the patches manually (#8) and setting HSA_OVERRIDE_GFX_VERSION=10.3.0 (#7), I'm facing this issue:

nix-stable-diffusion/stable-diffusion-webui]$ HSA_OVERRIDE_GFX_VERSION=10.3.0 python launch.py 
Python 3.10.7 (main, Sep  5 2022, 13:12:31) [GCC 11.3.0]
Commit hash: 737eb28faca8be2bb996ee0930ec77d1f7ebd939
Installing requirements for Web UI
Traceback (most recent call last):
  File "/home/turion/python/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 205, in <module>
    prepare_enviroment()
  File "/home/turion/python/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 188, in prepare_enviroment
    run_pip(f"install -r {requirements_file}", "requirements for Web UI")
  File "/home/turion/python/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 62, in run_pip
    return run(f'"{python}" -m pip {args} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}")
  File "/home/turion/python/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 33, in run
    raise RuntimeError(message)
RuntimeError: Couldn't install requirements for Web UI.
Command: "/nix/store/wyhbl43ycqn43d08v5fqj1j6ynf7nz73-python3-3.10.7/bin/python" -m pip install -r requirements_versions.txt --prefer-binary
Error code: 1
stdout: <empty>
stderr: /nix/store/wyhbl43ycqn43d08v5fqj1j6ynf7nz73-python3-3.10.7/bin/python: No module named pip

It seems that it still wants to install dependencies for the webui. I would have expected these to be provided by the nix develop shell.
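One way to see whether launch.py actually needs pip here, or whether the nix develop shell already provides everything, is to import a few of the webui's dependencies and check where they resolve from. A hedged sketch; the module list is chosen for illustration only:

# check_nix_deps.py -- sketch; the module list is an assumption, extend as needed
import importlib

for name in ["torch", "gradio", "transformers", "omegaconf"]:
    try:
        mod = importlib.import_module(name)
        origin = getattr(mod, "__file__", "<namespace package>")
        source = "Nix store" if "/nix/store/" in str(origin) else "elsewhere"
        print(f"{name:12s} from {source}: {origin}")
    except ImportError as exc:
        print(f"{name:12s} missing: {exc}")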

Nvidia support

I have an Nvidia GPU; if you put up an Nvidia branch, I'd happily test it for you. In the meantime I'll take a stab at converting it myself, but I've never worked with a flake like this before, so I don't know how far I'll get.

Create `package` outputs for invoke AI, webui, preload-models script

This is already mentioned as a potential goal in the README, but just thought I'd open an issue to track/discuss!

It would be wicked to turn both InvokeAI and the webui into packages. I guess it's kind of unclear how to expose the model-download step, which requires quite a bit of interactivity (making the Hugging Face account, generating a token, etc.).

I guess the model download isn't an essential step to run InvokeAI, though; maybe running that step could also be its own .#preload-models "package"?

Hash mismatch error

When trying to run nix run .#invokeai.amd -- --web --root_dir "path/to/dir", I get this error:

error: hash mismatch in fixed-output derivation '/nix/store/x61qnng9yshji0f8nddqvidrjf68dkkh-safetensors-0.3.1-vendor.tar.gz.drv':
         specified: sha256-3SluST4muwNxgt+GQ6ZuZ62TfMr5ZYiYN9M0QyhmsWc=
            got:    sha256-jpFjZk5iNiLXbg2fAy08c5scruW9rW36NDAVzZEBQUE=
error: 1 dependencies of derivation '/nix/store/ak2fhwd3dvcx5h5smpz1lzgbjl6clk3g-python3.10-safetensors-0.3.1.drv' failed to build
error: 1 dependencies of derivation '/nix/store/nwgwwdrksh7dk7gwss8g3v9w27wia41d-invokeai-2.3.5.drv' failed to build

The same thing happens with the Automatic1111 webui.

Fails to build safetensors due to hash mismatch

When I try running the invokeai.amd output (using nix run .#invokeai.amd -- --web --root_dir "root"), it results in the following error:

error: hash mismatch in fixed-output derivation '/nix/store/80nhd3bcsfzliwjkamiln206fl5gp020-safetensors-0.2.8-vendor.tar.gz.drv':
         specified: sha256-0yE18d+jRs5IodacuBIsmUeZJcZobLz9oLWL+tZKY18=
            got:    sha256-ylpf82NXlpo4+u5HZVYeJI8I6VBFAukzC7Er6BZk1Ik=
error: 1 dependencies of derivation '/nix/store/9gjlvxjy4ilmmh56704khkvy2divvsbk-python3.10-safetensors-0.2.8.drv' failed to build
error: 1 dependencies of derivation '/nix/store/i8hwr4lccbc24lqgy9r893lgnzrqi3id-invokeai-2.3.1.drv' failed to build

vNext

To-Do list

WebUI - allow specifying output dir

Currently, when saving results, the webui tries to write to output/log/..., which is a read-only destination in the Nix store. We'd need a way to pass another location to the script, or, if one already exists, add it to the documentation.

Some patches don't apply cleanly

$ nix develop .#webui.amd
HEAD ist jetzt bei 737eb28 typo: cmd_opts.embedding_dir to cmd_opts.embeddings_dir
error: Anwendung des Patches fehlgeschlagen: launch.py:86
error: launch.py: Patch konnte nicht angewendet werden
error: Anwendung des Patches fehlgeschlagen: modules/paths.py:19
error: modules/paths.py: Patch konnte nicht angewendet werden

Translation: Patches couldn't be applied.

invokeai.amd not working

$ python scripts/preload_models.py
/opt/amdgpu/share/libdrm/amdgpu.ids: No such file or directory
/opt/amdgpu/share/libdrm/amdgpu.ids: No such file or directory
"hipErrorNoBinaryForGpu: Unable to find code object for all current devices!"
