
philz1337x / clarity-upscaler


Clarity AI | AI Image Upscaler & Enhancer - free and open-source Magnific Alternative

Home Page: https://ClarityAI.cc

License: GNU Affero General Public License v3.0

Python 88.39% JavaScript 7.84% CSS 1.17% HTML 1.96% Shell 0.53% Batchfile 0.12%
ai ai-art image-upscale image-upscaler image-upscaling image2image img2img stable-diffusion stable-diffusion-webui upscale

clarity-upscaler's People

Contributors

philz1337x


clarity-upscaler's Issues

How to use it in Automatic?

Hi, I really tried to understand your README, but I don't get how to use this in AUTOMATIC1111. Just copy-pasting the params doesn't work: it only applies the CFG, denoising, and so on. What about the rest? I tried going through the params one by one, but I have no idea which ControlNet model to use or how to select the correct upscaler.

UnboundLocalError: local variable 'h' referenced before assignment

Thank you for this amazing upscaler.

Sometimes I get this error:

Running prediction
[Tiled Diffusion] upscaling image with 4x-UltraSharp...
[Tiled Diffusion] ControlNet found, support is enabled.
2024-03-31 06:48:41,693 - ControlNet - INFO - unit_separate = False, style_align = False
2024-03-31 06:48:41,694 - ControlNet - INFO - Loading model from cache: control_v11f1e_sd15_tile
2024-03-31 06:48:41,717 - ControlNet - INFO - Using preprocessor: tile_resample
2024-03-31 06:48:41,717 - ControlNet - INFO - preprocessor resolution = 2400
2024-03-31 06:48:41,860 - ControlNet - INFO - ControlNet Hooked - Time = 0.17113852500915527
MultiDiffusion hooked into 'DPM++ 3M SDE Karras' sampler, Tile size: 144x112, Tile count: 12, Batch size: 6, Tile batches: 2 (ext: ContrlNet)
[Tiled VAE]: input_size: torch.Size([1, 3, 2400, 3200]), tile_size: 3072, padding: 32
[Tiled VAE]: split to 1x2 = 2 tiles. Optimal tile size 1568x2336, original tile size 3072x3072
[Tiled VAE]: Fast mode enabled, estimating group norm parameters on 3072 x 2304 image
MultiDiffusion Sampling:   0%|          | 0/20 [00:00<?, ?it/s]
[Tiled VAE]: Executing Encoder Task Queue: 100%|██████████| 182/182 [00:04<00:00, 41.48it/s]
[Tiled VAE]: Done in 5.098s, max VRAM alloc 17733.351 MB
MultiDiffusion Sampling:   5%|▌         | 1/20 [00:06<02:06,  6.66s/it]
0%|          | 0/1 [00:04<?, ?it/s]
Total progress: 100%|██████████| 1/1 [00:00<00:00,  1.82it/s]
Traceback (most recent call last):
File "/root/.pyenv/versions/3.10.4/lib/python3.10/site-packages/cog/server/worker.py", line 217, in _predict
result = predict(**payload)
File "/src/predict.py", line 234, in predict
resp = self.api.img2imgapi(req)
File "/src/modules/api/api.py", line 445, in img2imgapi
processed = process_images(p)
File "/src/modules/processing.py", line 734, in process_images
res = process_images_inner(p)
File "/src/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 41, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "/src/modules/processing.py", line 869, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "/src/extensions/sd-webui-controlnet/scripts/hook.py", line 438, in process_sample
return process.sample_before_CN_hack(*args, **kwargs)
File "/src/modules/processing.py", line 1528, in sample
samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
File "/src/modules/sd_samplers_kdiffusion.py", line 188, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "/src/modules/sd_samplers_common.py", line 261, in launch_sampling
return func()
File "/src/modules/sd_samplers_kdiffusion.py", line 188, in <lambda>
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "/root/.pyenv/versions/3.10.4/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/src/repositories/k-diffusion/k_diffusion/sampling.py", line 701, in sample_dpmpp_3m_sde
h_1, h_2 = h, h_1
UnboundLocalError: local variable 'h' referenced before assignment
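For reference, the failing line sits in k-diffusion's sample_dpmpp_3m_sde, where h is assigned only on non-final steps (sigma > 0) while the history update h_1, h_2 = h, h_1 runs on every step. The log above shows a 1-step run (0/1), so the very first step is already the final sigma == 0 step and h is never bound. A minimal, hypothetical reproduction of that control flow (a sketch, not the actual sampler code):

def sketch_dpmpp_3m_sde(sigmas):
    # Sketch of the control flow around k_diffusion/sampling.py line 701;
    # only the branching that matters for the error is reproduced here.
    h_1 = h_2 = None
    for i in range(len(sigmas) - 1):
        if sigmas[i + 1] == 0:
            pass  # final denoising step: 'h' is never assigned on this branch
        else:
            h = sigmas[i] - sigmas[i + 1]  # stand-in for the real h = s - t
        # Runs unconditionally, so it fails if the first step already hit sigma == 0:
        h_1, h_2 = h, h_1

sketch_dpmpp_3m_sde([14.6, 0.0])  # 1-step schedule -> UnboundLocalError on 'h'

Using enough sampling steps that at least one sigma > 0 step precedes the final one, or switching samplers, should work around it; moving the history update into the else branch would be the actual fix.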

How to run this on a personal computer?

Hi, I am trying to run this, but all I get is the AUTOMATIC1111 screen. There seem to be no instructions on how to run the upscaler.
Can you include the exact steps or provide a simple Python script for this?
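A minimal sketch of such a script against the standard A1111 img2img API, assuming the webui was started locally with --api and the models from this repo are installed; the host, port, and file names are placeholders:

import base64
import requests

# Minimal img2img call against a local A1111-style webui started with --api.
# Core parameter values follow the settings quoted elsewhere on this page.
URL = "http://127.0.0.1:7860/sdapi/v1/img2img"

with open("input.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "init_images": [image_b64],
    "prompt": "masterpiece, best quality, highres, "
              "<lora:more_details:0.5> <lora:SDXLrender_v2.0:1>",
    "negative_prompt": "(worst quality, low quality, normal quality:2) "
                       "JuggernautNegative-neg",
    "steps": 18,
    "sampler_name": "DPM++ 3M SDE Karras",
    "cfg_scale": 6,
    "seed": 1337,
    "denoising_strength": 0.35,
    # The Tiled Diffusion and ControlNet tile settings go under
    # "alwayson_scripts"; see the ControlNet unit sketch further down the page.
}

resp = requests.post(URL, json=payload)
resp.raise_for_status()

with open("output.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))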

I am new to cog, why do I get this?

=> CACHED [stage-1 6/11] RUN --mount=type=bind,from=deps,source=/dep,target=/dep cp -rf /dep/* $(pyenv prefix)/lib/python*/site-packages || true 0.0s
=> CACHED [stage-1 7/11] RUN git config --global --add safe.directory /src 0.0s
=> CACHED [stage-1 8/11] RUN git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui /stable-diffusion-webui && cd /stable-diffusion-webui && git checkout 310d6b9075c6edb3b884bd2a41 0.0s
=> CACHED [stage-1 9/11] RUN git clone https://github.com/LLSean/cog-A1111-webui /cog-sd-webui 0.0s
=> CACHED [stage-1 10/11] RUN python /cog-sd-webui/init_env.py --skip-torch-cuda-test 0.0s
=> CACHED [stage-1 11/11] WORKDIR /src 0.0s
=> preparing layers for inline cache 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:6143abae3dbac3d3e24884eed7e8c5893db329a2ee5d06fd0964a654322d9e24 0.0s
=> => naming to docker.io/library/cog-clarity-upscaler-base 0.0s

Starting Docker image cog-clarity-upscaler-base and running setup()...
Traceback (most recent call last):
File "/root/.pyenv/versions/3.10.4/lib/python3.10/site-packages/cog/server/worker.py", line 189, in _setup
run_setup(self._predictor)
File "/root/.pyenv/versions/3.10.4/lib/python3.10/site-packages/cog/predictor.py", line 70, in run_setup
predictor.setup()
File "/src/predict.py", line 22, in setup
initialize.imports()
File "/src/modules/initialize.py", line 24, in imports
from modules import paths, timer, import_hook, errors # noqa: F401
File "/src/modules/paths.py", line 34, in
assert sd_path is not None, f"Couldn't find Stable Diffusion in any of: {possible_sd_paths}"
AssertionError: Couldn't find Stable Diffusion in any of: ['/src/repositories/stable-diffusion-stability-ai', '.', '/']
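The assertion comes from webui's startup path check: it scans a few candidate directories for the Stability-AI stable-diffusion repo and fails when none of them exists, so the usual cause is a repositories/ folder that was never populated during the build. Roughly (a simplified sketch of the logic in modules/paths.py; the exact marker check is an assumption):

import os

# Simplified sketch: webui keeps the first candidate directory that looks
# like a checkout of the Stability-AI stable-diffusion repo ('ldm' is the
# package that repo provides; the precise marker file is an assumption here).
possible_sd_paths = ['/src/repositories/stable-diffusion-stability-ai', '.', '/']

sd_path = None
for candidate in possible_sd_paths:
    if os.path.exists(os.path.join(candidate, 'ldm')):
        sd_path = os.path.abspath(candidate)
        break

assert sd_path is not None, f"Couldn't find Stable Diffusion in any of: {possible_sd_paths}"

If that is what happened, cloning https://github.com/Stability-AI/stablediffusion into /src/repositories/stable-diffusion-stability-ai (the first candidate above) should satisfy the check.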

How to upscale an image?

I installed by running these commands in an empty directory:

git clone https://github.com/philz1337x/clarity-upscaler
cd clarity-upscaler
webui.bat

Then I launched the UI. It seems to be a clone of Stable Diffusion web UI (aka AUTOMATIC1111)?

How exactly do I use Clarity to upscale an image once the UI opens?


What are the right steps to use these parameters on sd webui?

I think something went wrong in one of the steps; the result is very different from your demonstration.
prompt:
masterpiece, best quality, highres, <lora:more_details:0.5> <lora:SDXLrender_v2.0:1>
Negative prompt: (worst quality, low quality, normal quality:2) JuggernautNegative-neg
Steps: 18, Sampler: DPM++ 3M SDE Karras, CFG scale: 6, Seed: 1337, Size: 792x1000, Model hash: 338b85bc4f, Model: juggernaut_reborn, Denoising strength: 0.35, Tiled Diffusion upscaler: 4xUltrasharp_4xUltrasharpV10, Tiled Diffusion scale factor: 2, ControlNet 0: "Module: tile_resample, Model: control_v11f1e_sd15_tile [a371b31b], Weight: 0.6, Resize Mode: 1, Low Vram: False, Processor Res: 512, Threshold A: 1, Threshold B: 1, Guidance Start: 0, Guidance End: 1, Pixel Perfect: True, Control Mode: 1, Hr Option: HiResFixOption.BOTH, Save Detected Map: True", Lora hashes: "more_details: 3b8aa1d351ef, SDXLrender_v2.0: 3925cf4759af", Version: v1.8.0


model: juggernaut_reborn
loras: more_details, SDXLrender
embeddings: JuggernautNegative-neg
extensions:
Tiled Diffusion
controlnet model: control_v11f1e_sd15_tile
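If you end up driving this through the API instead of the UI, the quoted "ControlNet 0" string maps fairly directly onto one ControlNet unit in the img2img payload. A hedged sketch below: the field names follow the sd-webui-controlnet API as commonly documented, so verify them against your installed version (Tiled Diffusion takes positional, version-dependent args and is omitted here):

# Hypothetical ControlNet unit built from the "ControlNet 0" settings above.
# Attach it to an img2img request as:
#   payload["alwayson_scripts"] = {"controlnet": {"args": [controlnet_unit]}}
controlnet_unit = {
    "module": "tile_resample",
    "model": "control_v11f1e_sd15_tile [a371b31b]",
    "weight": 0.6,
    "resize_mode": 1,
    "lowvram": False,
    "processor_res": 512,
    "threshold_a": 1,
    "threshold_b": 1,
    "guidance_start": 0.0,
    "guidance_end": 1.0,
    "pixel_perfect": True,
    "control_mode": 1,
}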

video

Hi, is there any way to run this on a video?

Bug report on Clarity on Replicate

Hi.

For Clarity on Replicate:

When I use a large image, I get rainbow lines at the tile borders, every time.
They are easy to reveal by adding a little brightness.
The result is unusable.
I noticed this after doing 3 photo edits.
See cut example:
https://ibb.co/ccxFmvQ

Have a nice day.
Cranavis

Scenario questions for the use of the model

I tested some examples and found that this model works well for upscaling photorealistic images, but with anime and certain styles of painting it changes the image a lot when upscaling.
Is this expected?

ValueError: 'tiled diffusion' is not in list

Thank you for your great work.
I got an error when running predict.py:
Try to disable xformers, but it is not enabled. Skipping...
Style database not found: /content/clarity-upscaler/styles.csv
Loading weights [None] from /content/clarity-upscaler/models/Stable-diffusion/epicrealism_naturalSinRC1VAE.safetensors
Creating model from config: /content/clarity-upscaler/configs/v1-inference.yaml
fatal: No names found, cannot describe anything.
====>scripts:[<img2imgalt.py.Script object at 0x7d4488a486d0>, <loopback.py.Script object at 0x7d4488a48a30>, <outpainting_mk_2.py.Script object at 0x7d4488a48a60>, <poor_mans_outpainting.py.Script object at 0x7d4488a48ac0>, <prompt_matrix.py.Script object at 0x7d4488a48b20>, <prompts_from_file.py.Script object at 0x7d4488a48b50>, <sd_upscale.py.Script object at 0x7d4488a49240>, <xyz_grid.py.Script object at 0x7d4488a492a0>, <extra_options_section.py.ExtraOptionsSection object at 0x7d4488a492d0>, <hypertile_script.py.ScriptHypertile object at 0x7d4488a49300>, <refiner.py.ScriptRefiner object at 0x7d4488a49330>, <seed.py.ScriptSeed object at 0x7d4488a49360>]
Traceback (most recent call last):
  File "/content/clarity-upscaler/modules/api/api.py", line 41, in script_name_to_index
    return [script.title().lower() for script in scripts].index(name.lower())
ValueError: 'tiled diffusion' is not in list

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/content/clarity-upscaler/predict.py", line 331, in <module>
    predictor.setup()
  File "/content/clarity-upscaler/predict.py", line 127, in setup
    self.api.img2imgapi(req)
  File "/content/clarity-upscaler/modules/api/api.py", line 426, in img2imgapi
    script_args = self.init_script_args(img2imgreq, self.default_script_arg_img2img, selectable_scripts, selectable_script_idx, script_runner)
  File "/content/clarity-upscaler/modules/api/api.py", line 331, in init_script_args
    alwayson_script = self.get_script(alwayson_script_name, script_runner)
  File "/content/clarity-upscaler/modules/api/api.py", line 298, in get_script
    script_idx = script_name_to_index(script_name, script_runner.scripts)
  File "/content/clarity-upscaler/modules/api/api.py", line 43, in script_name_to_index
    raise HTTPException(status_code=422, detail=f"Script '{name}' not found") from e
fastapi.exceptions.HTTPException
Loading VAE weights from commandline argument: models/VAE/vae-ft-mse-840000-ema-pruned.safetensors
Applying attention optimization: xformers... done.
Model loaded in 3.9s (load weights from disk: 0.9s, create model: 1.2s, apply weights to model: 1.0s, load VAE: 0.2s, calculate empty prompt: 0.4s).
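Note that the scripts list dumped above contains only webui's built-in scripts, so the Tiled Diffusion extension was evidently not loaded when the request was made; installing the Tiled Diffusion extension (multidiffusion-upscaler-for-automatic1111) into extensions/ and restarting is the usual fix. You can confirm what the server registered via the standard /sdapi/v1/scripts endpoint; a small sketch, assuming a local webui started with --api:

import requests

# List the scripts the running webui has registered (standard API endpoint;
# host and port are assumptions for a local install).
resp = requests.get("http://127.0.0.1:7860/sdapi/v1/scripts")
resp.raise_for_status()

# 'tiled diffusion' must appear in the img2img list before it can be
# referenced under alwayson_scripts in an img2img request.
print("tiled diffusion registered:",
      "tiled diffusion" in resp.json()["img2img"])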
