Comments (5)
What OS, GPU, and how much GPU RAM are you using? Pasting the console log here would be helpful as well.
from easydiffusion.
You need to add more information. A console opens when you start ED; are there any errors in that console?
from easydiffusion.
No, it just stays at 0% without displaying an error or advancing.
from easydiffusion.
COMSPEC=C:\WINDOWS\system32\cmd.exe
AdapterRAM DriverDate DriverVersion Name
4293918720 20230712000000.000000-000 31.0.15.3667 NVIDIA GeForce GTX 1650
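For reference, the AdapterRAM value in the output above is in bytes (and the WMI field is a 32-bit value, so it tops out just under 4 GB even on larger cards); converting it confirms the GTX 1650's 4 GB of VRAM. A quick sketch:

```python
# AdapterRAM as reported by the Win32_VideoController WMI class, in bytes.
# Note: this field is a uint32, so it caps out just below 4 GB even on
# cards that have more VRAM than that.
adapter_ram_bytes = 4293918720

adapter_ram_gb = adapter_ram_bytes / 1024**3
print(f"{adapter_ram_gb:.2f} GB")  # ~4.00 GB, matching the GTX 1650's 4 GB
```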
13:11:16.527 INFO cuda:0 Running on diffusers: {'guidance_scale': 7.5, 'generator': image_generator.py:449 <torch._C.Generator object at 0x0000029D415FB1F0>, 'width': 512, 'height': 512, 'num_inference_steps': 25, 'num_images_per_prompt': 1, 'callback': <function make_with_diffusers.<locals>.<lambda> at 0x0000029D4154F820>, 'prompt_embeds': tensor([[[-0.3885, 0.0230, -0.0521, ..., -0.4901, -0.3065, 0.0674], [ 0.0291, -1.3254, 0.3089, ..., -0.5258, 0.9764, 0.6651], [ 0.4593, 0.5618, 1.6670, ..., -1.9512, -1.2308, 0.0108], ..., [-3.0420, -0.0669, -0.1804, ..., 0.3959, -0.0201, 0.7659], [-3.0549, -0.1048, -0.1947, ..., 0.4253, -0.0201, 0.7571], [-2.9853, -0.0844, -0.1724, ..., 0.4369, 0.0086, 0.7482]]], device='cuda:0'), 'negative_prompt_embeds': tensor([[[-0.3885, 0.0230, -0.0521, ..., -0.4901, -0.3065, 0.0674], [-0.3714, -1.4495, -0.3403, ..., 0.9483, 0.1865, -1.1034], [-0.5111, -1.4629, -0.2927, ..., 1.0414, 0.0699, -1.0284], ..., [ 0.5003, -0.9563, -0.6622, ..., 1.6000, -1.0629, -0.2188], [ 0.4986, -0.9462, -0.6668, ..., 1.6453, -1.0865, -0.2085], [ 0.4921, -0.8134, -0.4922, ..., 1.6096, -1.0179, -0.2480]]], device='cuda:0')} 0%| | 0/25 [00:00<?, ?it/s]
from easydiffusion.
Ok, I'm actually using a 1660 Ti; it has 6 GB of VRAM. That said, there are a few things to be aware of. Whichever Stable Diffusion app you use, you're going to have to use Low settings; if I use Medium settings it gets painfully slow. Another thing: for some reason NVIDIA didn't implement proper FP16 in the GTX 16 series GPUs, so you have to use FP32, which also makes image generation a little slower. Some apps like Automatic1111, and some models, ask about floating-point precision; FP16 would be slightly faster with a little less accuracy.
So in the console you should see INFO cuda:0 Setting cuda:0 as active, with precision: full
and INFO cuda:0 using attn_precision: fp32, and in Settings, GPU Memory Usage should be Low. For a small speed boost, turn off Show a Live Preview. As for inference steps: 8-12 steps can usually give you a decent image; at 16-20 steps it may generate a different but more defined image; above that it's usually the same picture you saw at 16-20 steps, just with more detail. If you download the SDXL Turbo model you can generate max-quality images in 2-5 steps in Easy Diffusion (1-3 steps in other apps), but SDXL models are larger, so each step takes longer.
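As a rough illustration of the precision choice described above (a hypothetical helper for this comment, not Easy Diffusion's actual code), the logic amounts to forcing full precision on GTX 16xx cards:

```python
def pick_precision(gpu_name: str) -> str:
    """Return the float precision to use for a given NVIDIA GPU name.

    GTX 16xx cards don't handle half precision properly, so they should
    run in full precision (float32); most other CUDA cards can use
    float16. Hypothetical helper for illustration only.
    """
    name = gpu_name.lower()
    # Matches e.g. "NVIDIA GeForce GTX 1650", "NVIDIA GeForce GTX 1660 Ti"
    if "gtx 16" in name:
        return "float32"
    return "float16"

print(pick_precision("NVIDIA GeForce GTX 1650"))  # float32
print(pick_precision("NVIDIA GeForce RTX 3060"))  # float16
```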
Also, make sure your GPU drivers are up to date.
FYI: using SD 1.4-1.5 I generate 10 images in 1:40 to 4 minutes, doing up to 10 in parallel at 12 steps. More in parallel is faster; I've done up to 20 in parallel with ED, though I haven't tried that with other apps.
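To put those numbers in perspective (using only the figures from this comment), the per-image throughput works out roughly as:

```python
def seconds_per_image(total_seconds: float, num_images: int) -> float:
    """Average wall-clock time per image for a parallel batch."""
    return total_seconds / num_images

# 10 images in 1:40 (100 s) at the fast end, 4:00 (240 s) at the slow end
fast = seconds_per_image(100, 10)  # 10.0 s per image
slow = seconds_per_image(240, 10)  # 24.0 s per image
print(f"{fast:.0f}-{slow:.0f} s per image")  # 10-24 s per image
```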
from easydiffusion.
Related Issues (20)
- Upscaler tab? HOT 1
- A way to save prompts? HOT 6
- Error: CUDA error: kernel image is not available for execution on the device. CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging, consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
- Option that each expanded batch gets a different random seed
- Lora improvements HOT 5
- Make Similar Images feature uses different values for Inference Steps and Guidance Scale HOT 5
- bzip is required, but installer happily ignores it
- Problem with Easy Diffusion, please help me
- Windows Installer Hangs
- Input type (c10::Half) and bias type (float) should be the same
- Add Openpose Editor
- This is for the pluggin ScaleUp, by Avidgamefan HOT 24
- ERROR: Exception in ASGI application HOT 1
- The Compel .and() syntax seems broken for SDXL
- CUDA error HOT 1
- Error when: Attempting to upscale with Xformer
- High res fix HOT 1
- Is it possible to use LyCoris models with this? HOT 1
- Binding to a different port and address other than 127.0.0.1:9000 HOT 3
- Local network can not access problem HOT 2