gbtb / nix-stable-diffusion
Flake for running SD on NixOS
When attempting to launch the automatic1111 UI, I'm met with the following:
[x@y:~/Code/etc/nix-stable-diffusion/stable-diffusion-webui]$ LD_PRELOAD="/run/opengl-driver/lib/libcuda.so" HSA_OVERRIDE_GFX_VERSION=10.3.0 python3 launch.py
Python 3.10.7 (main, Sep 5 2022, 13:12:31) [GCC 11.3.0]
Commit hash: 737eb28faca8be2bb996ee0930ec77d1f7ebd939
Traceback (most recent call last):
File "/home/bolt/Code/etc/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 205, in <module>
prepare_enviroment()
File "/home/bolt/Code/etc/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 151, in prepare_enviroment
run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")
File "/home/bolt/Code/etc/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 57, in run_python
return run(f'"{python}" -c "{code}"', desc, errdesc)
File "/home/bolt/Code/etc/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 33, in run
raise RuntimeError(message)
RuntimeError: Error running command.
Command: "/nix/store/wyhbl43ycqn43d08v5fqj1j6ynf7nz73-python3-3.10.7/bin/python3" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'"
Error code: 1
stdout: <empty>
stderr: /nix/store/lvywargqhfhnmwhpk73zl2qy8qrbx0ql-python3.10-torch-1.12.1/lib/python3.10/site-packages/torch/cuda/__init__.py:83: UserWarning: HIP initialization: Unexpected error from hipGetDeviceCount(). Did you run some cuda functions before calling NumHipDevices() that might have already set an error? Error 101: hipErrorInvalidDevice (Triggered internally at ../c10/hip/HIPFunctions.cpp:110.)
return torch._C._cuda_getDeviceCount() > 0
Traceback (most recent call last):
File "<string>", line 1, in <module>
AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
nvidia-smi correctly shows my card from within the same shell:
[x@y:~/Code/etc/nix-stable-diffusion/stable-diffusion-webui]$ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.53 Driver Version: 525.53 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:07:00.0 On | N/A |
| 0% 44C P5 22W / 170W | 769MiB / 12288MiB | 16% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
Launching with the torch CUDA test skipped works, but leads to a plethora of errors while loading or attempting to generate anything; probably the only interesting one is:
/nix/store/lvywargqhfhnmwhpk73zl2qy8qrbx0ql-python3.10-torch-1.12.1/lib/python3.10/site-packages/torch/cuda/__init__.py:83: UserWarning: HIP initialization: Unexpected error from hipGetDeviceCount(). Did you run some cuda functions before calling NumHipDevices() that might have already set an error? Error 101: hipErrorInvalidDevice (Triggered internally at ../c10/hip/HIPFunctions.cpp:110.)
return torch._C._cuda_getDeviceCount() > 0
Warning: caught exception 'Unexpected error from hipGetDeviceCount(). Did you run some cuda functions before calling NumHipDevices() that might have already set an error? Error 101: hipErrorInvalidDevice', memory monitor disabled
My system is currently using the beta nvidia drivers (525 instead of 520), but I didn't have any better luck after switching back to stable.
Please let me know if there's any further information I can provide, tests to run, etc., to help.
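Not a fix, but a minimal pre-flight sketch (stdlib only; the helper name and the specific checks are my own invention, not part of the flake) that flags obvious environment problems before torch's CUDA test even runs. Notably, HSA_OVERRIDE_GFX_VERSION is a ROCm/AMD variable and has no effect on an NVIDIA/CUDA setup, so its presence in the command above is suspicious:

```python
import os
from pathlib import Path


def cuda_env_findings(env=None, libcuda="/run/opengl-driver/lib/libcuda.so"):
    """Collect obvious environment problems before running torch's CUDA test.

    Purely illustrative: checks the NixOS driver path used in LD_PRELOAD above,
    and flags the ROCm-only HSA_OVERRIDE_GFX_VERSION variable, which does
    nothing on an NVIDIA/CUDA setup.
    """
    env = os.environ if env is None else env
    findings = []
    if not Path(libcuda).exists():
        findings.append(f"missing {libcuda} (is the NVIDIA driver installed?)")
    if "HSA_OVERRIDE_GFX_VERSION" in env:
        findings.append("HSA_OVERRIDE_GFX_VERSION is set, but it is a "
                        "ROCm/AMD variable and does nothing for CUDA")
    if libcuda not in env.get("LD_PRELOAD", ""):
        findings.append("libcuda.so is not in LD_PRELOAD")
    return findings


# Example with a fake environment mirroring the command above:
print(cuda_env_findings({"LD_PRELOAD": "/run/opengl-driver/lib/libcuda.so",
                         "HSA_OVERRIDE_GFX_VERSION": "10.3.0"}))
```

Running this in the same shell as the failing launch would at least separate "driver not visible to the process" from "torch build doesn't match the GPU".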
$ nix develop .#webui.amd
HEAD is now at 737eb28 typo: cmd_opts.embedding_dir to cmd_opts.embeddings_dir
error: patch failed: launch.py:86
error: launch.py: patch does not apply
error: patch failed: modules/paths.py:19
error: modules/paths.py: patch does not apply
$ # put model checkpoint in place
$ HSA_OVERRIDE_GFX_VERSION=10.3.0 python launch.py
Python 3.10.7 (main, Sep 5 2022, 13:12:31) [GCC 11.3.0]
Commit hash: 737eb28faca8be2bb996ee0930ec77d1f7ebd939
Traceback (most recent call last):
File "/home/turion/python/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 205, in <module>
prepare_enviroment()
File "/home/turion/python/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 179, in prepare_enviroment
git_clone(stable_diffusion_repo, repo_dir('stable-diffusion'), "Stable Diffusion", stable_diffusion_commit_hash)
File "/home/turion/python/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 76, in git_clone
current_hash = run(f'"{git}" -C {dir} rev-parse HEAD', None, f"Couldn't determine {name}'s hash: {commithash}").strip()
File "/home/turion/python/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 33, in run
raise RuntimeError(message)
RuntimeError: Couldn't determine Stable Diffusion's hash: 69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc.
Command: "git" -C repositories/stable-diffusion rev-parse HEAD
Error code: 128
stdout: <empty>
stderr: fatal: not a git repository (or any of the parent directories): .git
I followed your instructions, but they didn't work:
$ nix develop .#invokeai.amd
$ python scripts/preload_models.py
/opt/amdgpu/share/libdrm/amdgpu.ids: No such file or directory
/opt/amdgpu/share/libdrm/amdgpu.ids: No such file or directory
"hipErrorNoBinaryForGpu: Unable to find code object for all current devices!"
Aborted (core dumped)
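hipErrorNoBinaryForGpu is the typical symptom of ROCm shipping no kernels for the card's exact gfx target; HSA_OVERRIDE_GFX_VERSION=10.3.0 makes RDNA2 consumer cards masquerade as gfx1030, for which binaries do exist. A minimal sketch (assuming that override is appropriate for your card; `ensure_hsa_override` is an invented helper, not part of the flake) that sets it before torch is imported:

```python
import os


def ensure_hsa_override(version="10.3.0", env=None):
    """Set HSA_OVERRIDE_GFX_VERSION unless the user already chose one.

    Hypothetical helper: ROCm only ships kernels for a handful of gfx
    targets, so RDNA2 consumer cards typically need to masquerade as
    gfx1030 ("10.3.0") to avoid hipErrorNoBinaryForGpu. This must run
    before torch is imported, because HIP reads the variable at init.
    """
    env = os.environ if env is None else env
    env.setdefault("HSA_OVERRIDE_GFX_VERSION", version)
    return env["HSA_OVERRIDE_GFX_VERSION"]


ensure_hsa_override()
# then: import torch; run scripts/preload_models.py logic
```

Equivalently, exporting the variable in the shell before `python scripts/preload_models.py` (as in the other reports here) achieves the same thing.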
I've been learning Nix and just found this project. You're awesome. Thank you for this work!
Let's track here all the files that the tools download by themselves. It's mostly model weights. I don't know whether we should pre-download them ourselves, but I certainly want to avoid duplicate downloads and place them all in a single folder.
Weights are downloaded lazily, when the relevant feature is first used.
Face restoration:
Downloading: "https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth" to /NVME/Projects/nix-stable-diffusion/stable-diffusion-webui/models/GFPGAN/detection_Resnet50_Final.pth
Downloading: "https://github.com/xinntao/facexlib/releases/download/v0.2.2/parsing_parsenet.pth" to /NVME/Projects/nix-stable-diffusion/stable-diffusion-webui/models/GFPGAN/parsing_parsenet.pth
Downloading: "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth" to /NVME/Projects/nix-stable-diffusion/stable-diffusion-webui/models/GFPGAN/GFPGANv1.4.pth
Extras:
Downloading: "https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth" to /NVME/Projects/nix-stable-diffusion/stable-diffusion-webui/models/Codeformer/codeformer-v0.1.0.pth
Downloading: "https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth" to /NVME/Projects/nix-stable-diffusion/stable-diffusion-webui/models/facelib/weights/facelib/detection_Resnet50_Final.pth
Downloading: "https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/parsing_parsenet.pth" to /NVME/Projects/nix-stable-diffusion/stable-diffusion-webui/models/facelib/weights/facelib/parsing_parsenet.pth
Downloading: "https://github.com/cszn/KAIR/releases/download/v1.0/ESRGAN.pth" to /NVME/Projects/nix-stable-diffusion/stable-diffusion-webui/models/ESRGAN/ESRGAN_4x.pth
TODO: run preload_models.py again
SD checkpoints are currently downloaded manually.
Personally, I grabbed them from a torrent because I didn't want to register on whatever sketchy site they used for hosting them. Is there a fetchFromMagnet in nixpkgs?
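The list above already shows detection_Resnet50_Final.pth being fetched to two different directories. A hypothetical sketch of the single-folder idea (the `models/_cache` path and the filename-keyed dedup are my assumptions, not something the flake implements): plan all downloads against one cache so identical URLs collapse onto one file, then the per-tool paths could be symlinks into it.

```python
from pathlib import Path
from urllib.parse import urlparse


def plan_downloads(urls, cache_dir="models/_cache"):
    """Map each URL to a single cache path so identical files are fetched once.

    Illustrative sketch: keys the cache by the URL's filename, which is
    enough to deduplicate the facexlib/GFPGAN/CodeFormer weights listed
    above. Returns {url: cache_path}; duplicate URLs collapse to one path.
    """
    cache = Path(cache_dir)
    return {u: cache / Path(urlparse(u).path).name for u in urls}


urls = [
    "https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth",
    "https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth",
    "https://github.com/xinntao/facexlib/releases/download/v0.2.2/parsing_parsenet.pth",
]
plan = plan_downloads(urls)
print(sorted(set(p.name for p in plan.values())))
```

Note the filename-keying assumes same name implies same file, which holds for the duplicated facexlib URL above but would need checking (e.g. by hash) before generalizing.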
Can't believe that no one else is interested in updating InvokeAI in this great flake to at least v2.3.5, which contains such significant changes as LoRA support!
Hi,
I'm trying to update the automatic1111 UI to the latest commit. This should provide nice benefits like SD 2.0 support, a sizeable reduction in VRAM use, and a reported inference speed increase of up to 1.5x from the transformers library.
I'm more than happy to try taking on the packaging and updating of additional libraries here myself, but pynixify's documentation is quite sparse, and it doesn't understand git+... syntax, so I don't understand how you achieved packaging k-diffusion, for example (obviously the nix file itself is simple enough, but I'd like to follow the correct workflow).
Would you be able to provide any guidance on your preferred way to achieve this with pynixify? I'm happy to figure out the rest of the bump, and I intend to continue updating for my own use as long as SD has my interest :)
Also, sorry if it's inappropriate to make an issue for this - I did try to find you on some community hubs, but without luck.
Thanks!
After applying the patches manually (#8) and setting HSA_OVERRIDE_GFX_VERSION=10.3.0
(#7), I'm facing this issue:
nix-stable-diffusion/stable-diffusion-webui]$ HSA_OVERRIDE_GFX_VERSION=10.3.0 python launch.py
Python 3.10.7 (main, Sep 5 2022, 13:12:31) [GCC 11.3.0]
Commit hash: 737eb28faca8be2bb996ee0930ec77d1f7ebd939
Installing requirements for Web UI
Traceback (most recent call last):
File "/home/turion/python/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 205, in <module>
prepare_enviroment()
File "/home/turion/python/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 188, in prepare_enviroment
run_pip(f"install -r {requirements_file}", "requirements for Web UI")
File "/home/turion/python/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 62, in run_pip
return run(f'"{python}" -m pip {args} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}")
File "/home/turion/python/nix-stable-diffusion/stable-diffusion-webui/launch.py", line 33, in run
raise RuntimeError(message)
RuntimeError: Couldn't install requirements for Web UI.
Command: "/nix/store/wyhbl43ycqn43d08v5fqj1j6ynf7nz73-python3-3.10.7/bin/python" -m pip install -r requirements_versions.txt --prefer-binary
Error code: 1
stdout: <empty>
stderr: /nix/store/wyhbl43ycqn43d08v5fqj1j6ynf7nz73-python3-3.10.7/bin/python: No module named pip
It seems that it still wants to install dependencies for webui. I would have expected these to be installed by the nix develop shell.
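For context, a nix-provided interpreter intentionally ships without pip: its packages are baked into the environment, so launch.py's pip bootstrap is expected to fail there (presumably the flake's launch.py patches are meant to skip that step, which would explain why the earlier patch-apply failure leads to this one). A tiny stdlib check (illustrative; `describe_python` is not part of the repo) makes the failure mode visible:

```python
import importlib.util
import sys


def describe_python():
    """Report whether this interpreter is nix-provided and whether pip exists.

    Illustrative only: a /nix/store interpreter normally has its packages
    provided by the environment, so launch.py's "python -m pip install"
    bootstrap cannot work and must be patched out or skipped.
    """
    return {
        "executable": sys.executable,
        "from_nix_store": sys.executable.startswith("/nix/store/"),
        "has_pip": importlib.util.find_spec("pip") is not None,
    }


print(describe_python())
```

On the store path shown in the traceback this would report `from_nix_store: True` and `has_pip: False`, matching the "No module named pip" error.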
I have an Nvidia GPU, so if you put up an Nvidia branch I'd happily test it for you. In the meantime I'll take a jab at converting it myself, but I've never worked with a flake like this before, so I don't know how far I'll get.
This is already mentioned as a potential goal in the README, but I just thought I'd open an issue to track/discuss!
It would be wicked to turn both InvokeAI and the webui into packages. I guess it's kind of unclear how to expose the model-download step, which requires quite a bit of interactivity (making the huggingface account, generating a token, etc).
I guess the model download isn't an essential step to run InvokeAI, though - maybe that step could also be its own .#preload-models "package"?
Add support for stable-diffusion-webui.
stable-diffusion-webui has far more features and community support than InvokeAI does.
When trying to run nix run .#invokeai.amd -- --web --root_dir "path/to/dir", I get this error:
error: hash mismatch in fixed-output derivation '/nix/store/x61qnng9yshji0f8nddqvidrjf68dkkh-safetensors-0.3.1-vendor.tar.gz.drv':
specified: sha256-3SluST4muwNxgt+GQ6ZuZ62TfMr5ZYiYN9M0QyhmsWc=
got: sha256-jpFjZk5iNiLXbg2fAy08c5scruW9rW36NDAVzZEBQUE=
error: 1 dependencies of derivation '/nix/store/ak2fhwd3dvcx5h5smpz1lzgbjl6clk3g-python3.10-safetensors-0.3.1.drv' failed to build
error: 1 dependencies of derivation '/nix/store/nwgwwdrksh7dk7gwss8g3v9w27wia41d-invokeai-2.3.5.drv' failed to build
The same thing happens with the Automatic1111 webui.
When I try running the invokeai.amd output (using nix run .#invokeai.amd -- --web --root_dir "root"), it results in the following error.
error: hash mismatch in fixed-output derivation '/nix/store/80nhd3bcsfzliwjkamiln206fl5gp020-safetensors-0.2.8-vendor.tar.gz.drv':
specified: sha256-0yE18d+jRs5IodacuBIsmUeZJcZobLz9oLWL+tZKY18=
got: sha256-ylpf82NXlpo4+u5HZVYeJI8I6VBFAukzC7Er6BZk1Ik=
error: 1 dependencies of derivation '/nix/store/9gjlvxjy4ilmmh56704khkvy2divvsbk-python3.10-safetensors-0.2.8.drv' failed to build
error: 1 dependencies of derivation '/nix/store/i8hwr4lccbc24lqgy9r893lgnzrqi3id-invokeai-2.3.1.drv' failed to build
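The specified/got values are SRI hashes: the literal prefix `sha256-` followed by the base64 encoding of the raw SHA-256 digest. A mismatch in a fixed-output derivation means the fetched contents no longer match the flake's pin; the usual fix is to replace the pinned hash with the reported "got" value (after checking why the vendored sources changed). A small stdlib sketch of the format, for orientation:

```python
import base64
import hashlib


def sri_sha256(data: bytes) -> str:
    """Return a Nix-style SRI hash: 'sha256-' + base64(raw SHA-256 digest).

    This is the format of the 'specified'/'got' values in the error above;
    it encodes the raw 32-byte digest, not its hex representation.
    """
    return "sha256-" + base64.b64encode(hashlib.sha256(data).digest()).decode()


# The well-known SRI hash of empty input:
print(sri_sha256(b""))  # sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=
```

(Equivalently, `nix hash` subcommands can convert between hex and SRI forms on the command line.)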
To-Do list
Currently webui tries to write to output/log/..., which is a read-only destination in the nix store, when saving results. We'd need a way to pass another location to the script, or, if such an option already exists, add it to the documentation.
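One possible shape for that, as a hypothetical sketch: resolve the output root from an environment variable with a writable fallback. Note that `SD_WEBUI_OUTPUT_DIR` is an invented name, not an option webui actually reads; the point is only that the flake could patch the hard-coded output/log paths to consult something like this.

```python
import os
from pathlib import Path


def resolve_output_dir(env=None, default="outputs"):
    """Pick a writable output directory instead of the read-only store path.

    Hypothetical sketch: SD_WEBUI_OUTPUT_DIR is an invented variable, not an
    option webui actually reads. Falls back to a relative 'outputs' directory
    in the current working directory, which is writable when the UI is
    launched from a user checkout.
    """
    env = os.environ if env is None else env
    return Path(env.get("SD_WEBUI_OUTPUT_DIR", default)).expanduser()


print(resolve_output_dir({"SD_WEBUI_OUTPUT_DIR": "~/sd-outputs"}))
```

The same patch point could serve the log directory, keeping everything user-writable regardless of where the package itself lives in the store.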