Comments (19)

hlky commented on July 28, 2024

xl-base
xl-refiner
It is a different architecture. Refiner modules will be available soon.

hlky commented on July 28, 2024

Please refer to the documentation and comments on related issues for compilation instructions. Precompiled refiner modules will be available soon, please be patient.

CyberTimon commented on July 28, 2024

After renaming to the suggested names, I get this error:

Found 3 modules for linux xl sm80 1 1024 unet
Using b5caabe98aeb69bada9d1566c897aed66a84d4fb21f31482160d7ef9987f04fd
  0%|                                                                                                                                                                                | 0/20 [00:00<?, ?it/s]Got cutlass error: Error InternalError: Got cutlass error: Error Internal
  0%|                                                                                                                                                                                | 0/20 [00:00<?, ?it/s]
!!! Exception during processing !!!
Traceback (most recent call last):
  File "/home/cybertimon/Repositories/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/home/cybertimon/Repositories/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/home/cybertimon/Repositories/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/home/cybertimon/Repositories/ComfyUI/nodes.py", line 1206, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "/home/cybertimon/Repositories/ComfyUI/custom_nodes/AIT/AITemplate/AITemplate.py", line 170, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "/home/cybertimon/Repositories/ComfyUI/custom_nodes/AIT/AITemplate/AITemplate.py", line 304, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "/home/cybertimon/Repositories/ComfyUI/comfy/samplers.py", line 716, in sample
    samples = getattr(k_diffusion_sampling, "sample_{}".format(self.sampler))(self.model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar)
  File "/home/cybertimon/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/cybertimon/Repositories/ComfyUI/comfy/k_diffusion/sampling.py", line 137, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "/home/cybertimon/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/cybertimon/Repositories/ComfyUI/comfy/samplers.py", line 319, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
  File "/home/cybertimon/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/cybertimon/Repositories/ComfyUI/comfy/k_diffusion/external.py", line 125, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "/home/cybertimon/Repositories/ComfyUI/comfy/k_diffusion/external.py", line 151, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "/home/cybertimon/Repositories/ComfyUI/comfy/samplers.py", line 307, in apply_model
    out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
  File "/home/cybertimon/Repositories/ComfyUI/comfy/samplers.py", line 285, in sampling_function
    cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
  File "/home/cybertimon/Repositories/ComfyUI/comfy/samplers.py", line 262, in calc_cond_uncond_batch
    output = model_function(input_x, timestep_, **c).chunk(batch_chunks)
  File "/home/cybertimon/Repositories/ComfyUI/custom_nodes/AIT/AITemplate/ait/inference.py", line 43, in apply_model
    return unet_inference(
  File "/home/cybertimon/Repositories/ComfyUI/custom_nodes/AIT/AITemplate/ait/inference.py", line 98, in unet_inference
    exe_module.run_with_tensors(inputs, ys, graph_mode=False)
  File "/home/cybertimon/Repositories/ComfyUI/custom_nodes/AIT/AITemplate/ait/module/model.py", line 565, in run_with_tensors
    outputs_ait = self.run(
  File "/home/cybertimon/Repositories/ComfyUI/custom_nodes/AIT/AITemplate/ait/module/model.py", line 468, in run
    return self._run_impl(
  File "/home/cybertimon/Repositories/ComfyUI/custom_nodes/AIT/AITemplate/ait/module/model.py", line 407, in _run_impl
    self.DLL.AITemplateModelContainerRun(
  File "/home/cybertimon/Repositories/ComfyUI/custom_nodes/AIT/AITemplate/ait/module/model.py", line 211, in _wrapped_func
    raise RuntimeError(f"Error in function: {method.__name__}")
RuntimeError: Error in function: AITemplateModelContainerRun

CyberTimon commented on July 28, 2024

Maybe we should delete the AIT extension and reinstall it. Then you could make a step-by-step tutorial on what changes need to be made to get it to work with SDXL. Only if you have time, of course.

hlky commented on July 28, 2024

Yes, there appears to be an issue with the Linux modules. Linux users will need to compile their own or try Windows until I have time to look into it. Note that modules compiled on one distro may not work on another.

CyberTimon commented on July 28, 2024

ohh so I have to compile them on my buggy env 😆

CyberTimon commented on July 28, 2024

The strange part is that the other user, on Windows, has the same input0 issue, which I think is not related to the model. The SD 1.5 Linux UNet models work on my system, by the way, so I don't see a reason why this shouldn't work.

hlky commented on July 28, 2024

No, that is not strange; it is the same issue. The topic of this issue is in fact the name changes, which apply to all recent modules, Linux and Windows included.

Use #12 for your comments related to Linux compilation, to avoid busying this thread. The issue is likely because I used my Windows-specific build, and a fresh install is needed instead.
Otherwise, be patient and wait for the Linux modules.

Shaistrong commented on July 28, 2024
  • input0 -> latent_model_input
  • input1 -> timesteps
  • input2 -> encoder_hidden_states
  • input3 -> class_labels (unused by this plugin)

I don't really understand this. The lines you mentioned already have "input0": latent_model_input.permute((0, 2, 3, 1)), which I think is what you said is necessary..?

hlky commented on July 28, 2024

I'm not sure what is unclear; the current name on the line linked is to be changed to the new name:
Current name -> New name
That section is regarding the UNet: input0 is renamed to latent_model_input, input1 is renamed to timesteps, input2 is renamed to encoder_hidden_states, and you can safely ignore input3.
The new modules use the new names; to use them, the input names in the code need to match the names used by the module.
This is described in the initial post.
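
As a minimal sketch of the rename (the tensor shapes below are only illustrative placeholders; the key names and the permute call are the parts taken from this thread, matching the line quoted above):

    import torch

    # Dummy tensors standing in for the values the plugin already computes.
    latent_model_input = torch.randn(2, 4, 128, 128, dtype=torch.float16)
    timesteps = torch.tensor([999, 999])
    encoder_hidden_states = torch.randn(2, 77, 2048, dtype=torch.float16)

    # Keys renamed from "input0"/"input1"/"input2" to match the new modules.
    inputs = {
        "latent_model_input": latent_model_input.permute((0, 2, 3, 1)),  # was "input0"
        "timesteps": timesteps,                                          # was "input1"
        "encoder_hidden_states": encoder_hidden_states,                  # was "input2"
        # input3 / class_labels is unused by this plugin
    }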

Shaistrong commented on July 28, 2024

I'm not sure what is unclear; the current name on the line linked is to be changed to the new name: Current name -> New name. That section is regarding the UNet: input0 is renamed to latent_model_input, input1 is renamed to timesteps, input2 is renamed to encoder_hidden_states, and you can safely ignore input3. The new modules use the new names; to use them, the input names in the code need to match the names used by the module. This is described in the initial post.

that did it!! it finally worked =D

Shaistrong commented on July 28, 2024

okay, now it does the first stage perfectly with the speed boost, but when starting the second stage it says in the terminal:

  [20:42:19]: NVIDIA GeForce RTX 4070 Ti
  [20:42:19]: AITemplate modules compiled by hlky
  [20:42:20]: Error: SetConstant did not receive correct number of bytes for unbound constant time_embedding_linear_1_weight: expected 819200 but got 1179648. Check that the provided tensor's shape is correct.

@hlky

hlky commented on July 28, 2024

I can't say for certain due to the limited information provided regarding the second stage. However, time_embedding_linear_1_weight is part of the UNet and its size is determined by block_out_channels. As the error states it got a larger size than expected, I believe you are attempting to use the XL refiner, which is a different architecture to the base model and therefore incompatible with the modules you are using. You can compile modules for the refiner yourself, or await the precompiled modules being provided.
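
For what it's worth, the byte counts in the error are consistent with fp16 weights and the usual diffusers configs, where block_out_channels[0] is 320 for XL base and 384 for the refiner (those channel values are assumptions from the public configs, not something stated in this thread):

    # time_embedding.linear_1 maps block_out_channels[0] -> 4 * block_out_channels[0],
    # so its weight has 4 * ch0 * ch0 elements; fp16 is 2 bytes per element.
    def linear_1_weight_bytes(ch0: int, bytes_per_param: int = 2) -> int:
        return 4 * ch0 * ch0 * bytes_per_param

    print(linear_1_weight_bytes(320))  # 819200  -> what the base module expects
    print(linear_1_weight_bytes(384))  # 1179648 -> what the refiner checkpoint provides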

Shaistrong commented on July 28, 2024

I can't say for certain due to the limited information provided regarding the second stage. However, time_embedding_linear_1_weight is part of the UNet and its size is determined by block_out_channels. As the error states it got a larger size than expected, I believe you are attempting to use the XL refiner, which is a different architecture to the base model and therefore incompatible with the modules you are using. You can compile modules for the refiner yourself, or await the precompiled modules being provided.

oh, gotcha. for some reason I can't compile them myself, so I'll wait

Shaistrong commented on July 28, 2024

I can't say for certain due to the limited information provided regarding the second stage. However, time_embedding_linear_1_weight is part of the UNet and its size is determined by block_out_channels. As the error states it got a larger size than expected, I believe you are attempting to use the XL refiner, which is a different architecture to the base model and therefore incompatible with the modules you are using. You can compile modules for the refiner yourself, or await the precompiled modules being provided.

wait, are you sure it's a different architecture? I thought modules.json just didn't specify the refiner modules

Shaistrong commented on July 28, 2024

xl-base / xl-refiner: It is a different architecture. Refiner modules will be available soon.

do you need to change the compilation script again for the refiner? or is it universal now?

hlky commented on July 28, 2024

No change required, just use --hf-hub-or-path stabilityai/stable-diffusion-xl-refiner-1.0 instead of --hf-hub-or-path stabilityai/stable-diffusion-xl-base-1.0.
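
As a concrete sketch (the script name unet.py is a placeholder for whichever compile script you used for the base modules; only the --hf-hub-or-path flag and the model IDs come from this comment):

    python unet.py --hf-hub-or-path stabilityai/stable-diffusion-xl-refiner-1.0
    # instead of
    python unet.py --hf-hub-or-path stabilityai/stable-diffusion-xl-base-1.0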

Shaistrong commented on July 28, 2024

No change required, just use --hf-hub-or-path stabilityai/stable-diffusion-xl-refiner-1.0 instead of --hf-hub-or-path stabilityai/stable-diffusion-xl-base-1.0.

Is this with the built AIT dir you linked yesterday? If yes, what directory should I run this from?

hlky commented on July 28, 2024

Closing this as #19 is nearing completion.
