Comments (20)
Yes, the module can provide the maximum output shape. Input names are available, and the maximum input shape is exported from the module, but the code for the maximum input shape is not present in the interface; I can add this if required.
Get outputs with:
_output_name_to_index / _construct_output_name_to_index_map
Get maximum shape with name or index:
get_output_maximum_shape
module = ...
outputs = module._output_name_to_index  # or module._construct_output_name_to_index_map()
for name, idx in outputs.items():
    shape = module.get_output_maximum_shape(idx)
    print(name, shape)
from ait.
Is it correct that the size is limited to multiples of 64, ranging from a minimum of 64 to a specific maximum size?
from ait.
Current modules support 8px increments.
from ait.
The size reported by the module for the unet is in latent size, so multiply or divide by 8 if you're working in pixel size.
For VAE decode, the output size reported is in pixels.
For VAE encode, the output size reported is in latent size.
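The pixel/latent relationship above can be sketched as follows; the helper names are illustrative, and the 8x factor is the VAE downscale factor mentioned in this thread:

```python
LATENT_SCALE = 8  # the VAE downscales each spatial dimension by 8

def pixels_to_latent(px: int) -> int:
    # Integer division truncates to a multiple of 8, matching the
    # truncation behavior noted in this thread
    return px // LATENT_SCALE

def latent_to_pixels(lat: int) -> int:
    return lat * LATENT_SCALE

print(pixels_to_latent(768))   # 96
print(latent_to_pixels(96))    # 768
```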
from ait.
Based on a quick test, it appears the pixel size of the image doesn't need to be a concern. Since it gets truncated to a multiple of 8 when converted to latent, I don't need to worry about it.
from ait.
Should the maximum manageable resolution of the module be obtained from the name? Anyway, if AITemplateLoader passed the max resolution to a place like model_options, it seems like it would be very helpful.
from ait.
Yes, get_output_maximum_shape also works with names.
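The name-or-index lookup pattern can be sketched with a mocked module, since a compiled AIT module isn't needed to show the idea; the class, the output name, and the shape here are all illustrative, and the name-accepting behavior is an assumption based on the comment above:

```python
class Module:
    """Minimal mock of the AIT module interface described in this thread."""
    _output_name_to_index = {"output": 0}
    _max_shapes = {0: (1, 4, 96, 96)}  # illustrative maximum shape

    def get_output_maximum_shape(self, key):
        # Accepts an output name or an integer index (per the comment above)
        if isinstance(key, str):
            key = self._output_name_to_index[key]
        return self._max_shapes[key]

m = Module()
print(m.get_output_maximum_shape("output"))  # (1, 4, 96, 96)
print(m.get_output_maximum_shape(0))         # (1, 4, 96, 96)
```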
The AITemplateLoader does not actually load the module; it only signals to the sampler that it should use AIT. Loading unet modules happens in sample, and the module is selected based on sizes etc.
Could you share more details of your use case? Maybe you can just detect the module to use from within your node.
from ait.
In the detailer, a specific part of the image is upscaled and then encoded using the VAE for KSample.
If the upscale size exceeds a certain limit, an error occurs. I'm trying to impose an additional restriction here in the form of a maximum resolution.
When the model is passed to the Detailer and resolution constraints are enforced by the model, having the maximum resolution information somewhere within the model would allow me to utilize it.
from ait.
This plugin overrides comfy.sample.sample; AITemplateLoader only adds a flag that AITemplate is used, and the overridden version of sample detects this flag and selects an appropriate module.
For VAE, the module selection happens within the node. There is no module passed from AITemplateLoader, and no module is loaded at that point of execution.
The VAE encode you mentioned is, I assume, here; this would need code from the AIT VAE encode node, which selects the module based on the input shape.
Sample in the detailer node is here. As far as I know, if this plugin is installed then any other nodes should have comfy.sample.sample overridden; with AITemplateLoader connected to the MODEL input, any third-party node's usage of KSampler would use AIT, and the module selection would be automatic.
So while I understand you would use the maximum shape as a restriction on resolution, I'm not sure why any restriction needs to apply.
If you could share any links to relevant code sections, details on how you're integrating, etc., and with regard to "an error occurs": what is the error that occurs?
from ait.
When I tried just a simple T2I at 1024x1024 on an SD1.5 model, the generation failed.
This issue will break the Detailer's behavior.
I need to restrict the upscale size to avoid this situation.
from ait.
Could you please share any relevant code sections where you are attempting to integrate, and any errors you are receiving?
from ait.
I thought this error was normal behavior.
Since 1024x1024 caused an error on SD1.5, and there were no issues even at 2048x2048 on SDXL, I was searching for a basis on which to establish the max resolution setting.
!!! Exception during processing !!!
Traceback (most recent call last):
File "/home/rho/git/ComfyUI/worklist_execution.py", line 42, in exception_helper
task()
File "/home/rho/git/ComfyUI/worklist_execution.py", line 254, in task
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rho/git/ComfyUI/execution.py", line 97, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rho/git/ComfyUI/execution.py", line 90, in map_node_over_list
results.append(getattr(obj, func)(**params))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rho/git/ComfyUI/nodes.py", line 1206, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rho/git/ComfyUI/custom_nodes/AIT/AITemplate/AITemplate.py", line 175, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rho/git/ComfyUI/custom_nodes/AIT/AITemplate/AITemplate.py", line 308, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rho/git/ComfyUI/comfy/samplers.py", line 720, in sample
samples = getattr(k_diffusion_sampling, "sample_{}".format(self.sampler))(self.model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rho/git/ComfyUI/venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/rho/git/ComfyUI/comfy/k_diffusion/sampling.py", line 137, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rho/git/ComfyUI/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rho/git/ComfyUI/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rho/git/ComfyUI/comfy/samplers.py", line 323, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rho/git/ComfyUI/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rho/git/ComfyUI/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rho/git/ComfyUI/comfy/k_diffusion/external.py", line 125, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rho/git/ComfyUI/comfy/k_diffusion/external.py", line 151, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rho/git/ComfyUI/comfy/samplers.py", line 311, in apply_model
out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rho/git/ComfyUI/comfy/samplers.py", line 289, in sampling_function
cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rho/git/ComfyUI/comfy/samplers.py", line 263, in calc_cond_uncond_batch
output = model_function(input_x, timestep_, **c).chunk(batch_chunks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rho/git/ComfyUI/custom_nodes/AIT/AITemplate/ait/inference.py", line 43, in apply_model
return unet_inference(
^^^^^^^^^^^^^^^
File "/home/rho/git/ComfyUI/custom_nodes/AIT/AITemplate/ait/inference.py", line 98, in unet_inference
exe_module.run_with_tensors(inputs, ys, graph_mode=False)
File "/home/rho/git/ComfyUI/custom_nodes/AIT/AITemplate/ait/module/model.py", line 566, in run_with_tensors
outputs_ait = self.run(
^^^^^^^^^
File "/home/rho/git/ComfyUI/custom_nodes/AIT/AITemplate/ait/module/model.py", line 469, in run
return self._run_impl(
^^^^^^^^^^^^^^^
File "/home/rho/git/ComfyUI/custom_nodes/AIT/AITemplate/ait/module/model.py", line 408, in _run_impl
self.DLL.AITemplateModelContainerRun(
File "/home/rho/git/ComfyUI/custom_nodes/AIT/AITemplate/ait/module/model.py", line 212, in _wrapped_func
raise RuntimeError(f"Error in function: {method.__name__}")
RuntimeError: Error in function: AITemplateModelContainerRun
from ait.
I am currently trying to incorporate size restrictions here.
from ait.
There will be an additional error message above the traceback.
The maximum you should set there would be 4096, as this is the largest module currently supported. Other than that, you should not need to set restrictions based on the loaded module, because module selection is automatic, and AITemplateLoader does not pass the module itself, only a flag that AITemplate should be used.
The VAE encode here can use code from the AIT VAE encode node, and then module selection is automatic.
The sample here should be using the overridden comfy.sample.sample, which uses AIT if AITemplateLoader is connected to the node's MODEL input; the module is selected based on the input shape.
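If a hard cap is still wanted, clamping the upscale target to the module maximum could be sketched as below; the helper name is illustrative and not part of the plugin, while the 4096 maximum and the 8px increment come from this thread:

```python
MAX_SIDE = 4096  # largest AIT module currently available, per this thread

def clamp_resolution(width: int, height: int, max_side: int = MAX_SIDE):
    """Scale (width, height) down proportionally so neither side exceeds max_side."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height
    scale = max_side / longest
    # Round down to a multiple of 8, since modules work in 8px increments
    return int(width * scale) // 8 * 8, int(height * scale) // 8 * 8

print(clamp_resolution(2048, 1024))  # (2048, 1024) -- unchanged
print(clamp_resolution(8192, 4096))  # (4096, 2048) -- clamped
```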
from ait.
So, if setting a resolution exceeding 768 results in KSample failure, that wouldn't be considered a normal situation, right?
Found 4 modules for linux v1 sm80 1 776 unet
Using 1430bb4e84b5b53befc0bf8e12d25cdd65720f16505f20287f739625f5c89a51
Error: [SetValue] Dimension got value out of bounds; expected value to be in [1, 96], but got 97.
0%| | 0/20 [00:00<?, ?it/s]
!!! Exception during processing !!!
Traceback (most recent call last):
from ait.
Thank you for providing the error message; it is important to provide all error messages to assist in diagnosing the issue. linux/sm80/bs1/768/unet_v1_768.so.xz had the same sha256 as the 1024 file, which resulted in 768 being selected instead of 1024. (The bound [1, 96] in the error corresponds to the 768 module: 96 latent x 8 = 768 px.)
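To check a downloaded module file's sha256 locally and catch this kind of duplicate-file issue, a minimal sketch with the standard library; the path shown is illustrative:

```python
import hashlib

def file_sha256(path: str) -> str:
    """Compute the sha256 hex digest of a file, reading in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (hypothetical path):
# print(file_sha256("linux/sm80/bs1/768/unet_v1_768.so.xz"))
```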
from ait.
Oh... there was an issue with module selection because two module files had the same hash. I misunderstood that error as a limitation of the AIT approach. Thanks.
from ait.
Ensure you delete the current file otherwise the correct module will not download.
from ait.
Oh.. should I delete that file?
from ait.
unet_v1_768.so.xz
It works well :)
from ait.