williamyang1991 / rerender_a_video

[SIGGRAPH Asia 2023] Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation

Home Page: https://www.mmlab-ntu.com/project/rerender/

License: Other

Python 6.07% Jupyter Notebook 93.93%
controlnet diffusion video-processing

rerender_a_video's People

Contributors: sammcj, singlezombie, stormcenter, williamyang1991, wladradchenko

rerender_a_video's Issues

The web UI throws an error immediately after opening (Gradio error)

To create a public link, set share=True in launch().
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "C:\Users\mrjek\AppData\Local\Programs\Python\Python310\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 408, in run_asgi
result = await app(  # type: ignore[func-returns-value]
File "C:\Users\mrjek\AppData\Local\Programs\Python\Python310\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
File "C:\Users\mrjek\AppData\Local\Programs\Python\Python310\lib\site-packages\fastapi\applications.py", line 292, in __call__
await super().__call__(scope, receive, send)
File "C:\Users\mrjek\AppData\Local\Programs\Python\Python310\lib\site-packages\starlette\applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "C:\Users\mrjek\AppData\Local\Programs\Python\Python310\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
raise exc
File "C:\Users\mrjek\AppData\Local\Programs\Python\Python310\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "C:\Users\mrjek\AppData\Local\Programs\Python\Python310\lib\site-packages\starlette\middleware\cors.py", line 83, in __call__
await self.app(scope, receive, send)
File "C:\Users\mrjek\AppData\Local\Programs\Python\Python310\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
raise exc
File "C:\Users\mrjek\AppData\Local\Programs\Python\Python310\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "C:\Users\mrjek\AppData\Local\Programs\Python\Python310\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 20, in __call__
raise e
File "C:\Users\mrjek\AppData\Local\Programs\Python\Python310\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 17, in __call__
await self.app(scope, receive, send)
File "C:\Users\mrjek\AppData\Local\Programs\Python\Python310\lib\site-packages\starlette\routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "C:\Users\mrjek\AppData\Local\Programs\Python\Python310\lib\site-packages\starlette\routing.py", line 276, in handle
await self.app(scope, receive, send)
File "C:\Users\mrjek\AppData\Local\Programs\Python\Python310\lib\site-packages\starlette\routing.py", line 66, in app
response = await func(request)
File "C:\Users\mrjek\AppData\Local\Programs\Python\Python310\lib\site-packages\fastapi\routing.py", line 273, in app
raw_response = await run_endpoint_function(
File "C:\Users\mrjek\AppData\Local\Programs\Python\Python310\lib\site-packages\fastapi\routing.py", line 192, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
File "C:\Users\mrjek\AppData\Local\Programs\Python\Python310\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
return await anyio.to_thread.run_sync(func, *args)
File "C:\Users\mrjek\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\mrjek\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\Users\mrjek\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\Users\mrjek\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\routes.py", line 282, in api_info
return gradio.blocks.get_api_info(config, serialize)  # type: ignore
File "C:\Users\mrjek\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\blocks.py", line 504, in get_api_info
serializer = serializing.COMPONENT_MAPPING[type]
KeyError: 'dataset'

Released an auto installer; hopefully a full public tutorial is coming soon

output.mp4

1 click Auto installer : https://www.patreon.com/posts/1-click-auto-for-89457537

Auto installer made for Windows. You only need to have Python and Git installed. Instructions are shared

Hopefully a free, full public tutorial is coming soon on SECourses; stay subscribed > https://www.youtube.com/SECourses

Twitter sharing : https://twitter.com/GozukaraFurkan/status/1703797121126662602

Source repo : https://github.com/williamyang1991/Rerender_A_Video

Just uploaded original video to web ui : https://twitter.com/RishiSunak/status/1702630698178236756

Used the default settings, the flat2DAnimerge_v30 model, and just "a man in a room" as the prompt

For audio, I used ffmpeg to copy it from the original source to the output
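The copy can be done without re-encoding either stream. A minimal sketch of such an ffmpeg invocation (filenames are placeholders; it assumes the rendered file supplies the video and the original supplies the audio):

```python
# Build an ffmpeg argument list that muxes the original audio onto the
# rendered video without re-encoding either stream.
import subprocess

def remux_audio(rendered, source, out):
    cmd = [
        "ffmpeg", "-y",
        "-i", rendered,   # input 0: rendered video
        "-i", source,     # input 1: original video carrying the audio
        "-map", "0:v:0",  # take the video stream from input 0
        "-map", "1:a:0",  # take the audio stream from input 1
        "-c", "copy",     # stream copy, no re-encode
        out,
    ]
    return cmd

# To actually run it: subprocess.run(remux_audio(...), check=True)
print(" ".join(remux_audio("output.mp4", "original.mp4", "with_audio.mp4")))
```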

Spent more than 2 days preparing the installer, because there were a lot of bugs and errors that I had to debug and fix

What is GPU usage of Rerender_A_Video?

What is the GPU utilization of the process 'Rerender_A_Video'? Does it demand a substantial amount of VRAM, or is it sufficiently efficient for utilization on consumer-grade computers?

In which parts is Ebsynth used?

I am trying to make a tutorial for this new release.

I'm preparing an auto installer for Windows.

Installing Ebsynth is really hard.

It requires a specific Visual Studio version and adding cl.exe to PATH.

Is it necessary? In which part is it used?

Of course I can install it, but the general public will have a hard time.

And which CUDA toolkit do we need for it? Does any version work?

G:\Rerender_A_Video auto install\Rerender_A_Video>python install2.py
Build Ebsynth Windows 64 bit. If you want to build for 32 bit, please modify install.py.
.\build-win64-cpu+cuda.bat
"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvarsall.bat"
nvcc warning : The 'compute_35', 'compute_37', 'compute_50', 'sm_35', 'sm_37' and 'sm_50' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
ebsynth.cpp
The contents of <filesystem> are available only with C++17 or later.
ebsynth_cpu.cpp
ebsynth_cuda.cu
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4\include\crt/host_config.h(160): fatal error C1189: #error:  -- unsupported Microsoft Visual Studio version! Only the versions between 2017 and 2019 (inclusive) are supported! The nvcc flag '-allow-unsupported-compiler' can be used to override this version check; however, using an unsupported host compiler may cause compilation failure or incorrect run time execution. Use at your own risk.
FAILED
Failed to install Ebsynth.

Example error

Run - python3 rerender.py --cfg config/real2sculpture.json

Traceback (most recent call last):
File "/Users/alishershermatov/Documents/Projects/ML:AI/Rerender_A_Video/rerender.py", line 17, in <module>
import src.import_util  # noqa: F401
File "/Users/alishershermatov/Documents/Projects/ML:AI/Rerender_A_Video/src/import_util.py", line 10, in <module>
import deps.ControlNet.share  # noqa: F401 E402
File "/Users/alishershermatov/Documents/Projects/ML:AI/Rerender_A_Video/deps/ControlNet/share.py", line 2, in <module>
from cldm.hack import disable_verbosity, enable_sliced_attention
File "/Users/alishershermatov/Documents/Projects/ML:AI/Rerender_A_Video/deps/ControlNet/cldm/hack.py", line 4, in <module>
import ldm.modules.encoders.modules
File "/Users/alishershermatov/Documents/Projects/ML:AI/Rerender_A_Video/deps/ControlNet/ldm/modules/encoders/modules.py", line 7, in <module>
import open_clip
File "/opt/homebrew/lib/python3.11/site-packages/open_clip/__init__.py", line 2, in <module>
from .factory import list_models, create_model, create_model_and_transforms, add_model_config
File "/opt/homebrew/lib/python3.11/site-packages/open_clip/factory.py", line 13, in <module>
from .model import CLIP, convert_weights_to_fp16, resize_pos_embed
File "/opt/homebrew/lib/python3.11/site-packages/open_clip/model.py", line 17, in <module>
from .timm_model import TimmModel
File "/opt/homebrew/lib/python3.11/site-packages/open_clip/timm_model.py", line 10, in <module>
import timm
File "/opt/homebrew/lib/python3.11/site-packages/timm/__init__.py", line 2, in <module>
from .models import create_model, list_models, is_model, list_modules, model_entrypoint, \
File "/opt/homebrew/lib/python3.11/site-packages/timm/models/__init__.py", line 28, in <module>
from .maxxvit import *
File "/opt/homebrew/lib/python3.11/site-packages/timm/models/maxxvit.py", line 225, in <module>
@dataclass
File "/opt/homebrew/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/dataclasses.py", line 1223, in dataclass
return wrap(cls)
File "/opt/homebrew/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/dataclasses.py", line 1213, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
File "/opt/homebrew/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
File "/opt/homebrew/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'timm.models.maxxvit.MaxxVitConvCfg'> for field conv_cfg is not allowed: use default_factory

Python Version - 3.11.3
What version of Python is used in the project?
Machine - MacBook Pro (Apple M2 Max) Ventura 13.3.1

IndexError: list index out of range

I ran the command python .\rerender.py --input videos\video0.mov --output videos\lighty.mp4 --prompt "anime man" and it began processing frames, but partway through it crashed with the following output.

Global seed set to 59662
Data shape for DDIM sampling is (1, 4, 64, 112), eta 0.0
Running DDIM Sampling with 20 timesteps
DDIM Sampler: 100%|██████████| 20/20 [00:08<00:00, 2.39it/s]
Global seed set to 59662
Data shape for DDIM sampling is (1, 4, 64, 112), eta 0.0
Running DDIM Sampling with 20 timesteps
DDIM Sampler: 100%|██████████| 20/20 [00:07<00:00, 2.51it/s]
Traceback (most recent call last):
File ".\rerender.py", line 462, in <module>
rerender(cfg, args.one, args.key_video_path)
File ".\rerender.py", line 218, in rerender
frame = cv2.imread(imgs[i + 1])
IndexError: list index out of range
(rerender) PS D:\ml\apps\Rerender_A_Video>
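For what it's worth, the failing line reads imgs[i + 1], which overruns the frame list on its last element. A minimal illustration of the off-by-one and a pairwise iteration that avoids it (names are illustrative, not the project's actual code):

```python
# Illustrative frame list; reading imgs[i + 1] with i running up to
# len(imgs) - 1 raises IndexError on the final iteration.
imgs = ["0001.jpg", "0002.jpg", "0003.jpg"]

try:
    for i in range(len(imgs)):
        nxt = imgs[i + 1]
except IndexError:
    print("overran the list")  # the reported crash

# Iterating over consecutive (current, next) pairs avoids the off-by-one.
pairs = list(zip(imgs, imgs[1:]))
print(pairs)  # [('0001.jpg', '0002.jpg'), ('0002.jpg', '0003.jpg')]
```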

FileNotFoundError: [Errno 2] No such file or directory: 'videos/pexels-antoni-shkraba-8048492-540x960-25fps/out_1/0002.bin'

Thanks for publishing your great work. I would highly appreciate it if you could help me with the following problem.

I was trying to run the example code and ran into this weird problem. The first two steps went perfectly fine, but the third command, 'python rerender.py --cfg config/van_gogh_man.json -nr', simply won't work. I figured out it was trying to read the image at path "videos/pexels-antoni-shkraba-8048492-540x960-25fps/out_1/0002.jpg" (and similarly "videos/pexels-antoni-shkraba-8048492-540x960-25fps/out_31/0032.jpg"), but the files simply don't exist there.
I wonder what went wrong. Could it be in the first two steps?

terminal output
No module 'xformers'. Proceeding without it.
python video_blend.py videos/pexels-antoni-shkraba-8048492-540x960-25fps --beg 1 --end 101 --itv 10 --key keys --output videos/pexels-antoni-shkraba-8048492-540x960-25fps/blend.mp4 --fps 25.0 --n_proc 4 -ps
/home/smbu/anaconda3/envs/rerender/lib/python3.8/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/conda/conda-bld/pytorch_1659484810403/work/aten/src/ATen/native/TensorShape.cpp:2894.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
/home/smbu/anaconda3/envs/rerender/lib/python3.8/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/conda/conda-bld/pytorch_1659484810403/work/aten/src/ATen/native/TensorShape.cpp:2894.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
/home/smbu/anaconda3/envs/rerender/lib/python3.8/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/conda/conda-bld/pytorch_1659484810403/work/aten/src/ATen/native/TensorShape.cpp:2894.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
/home/smbu/anaconda3/envs/rerender/lib/python3.8/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/conda/conda-bld/pytorch_1659484810403/work/aten/src/ATen/native/TensorShape.cpp:2894.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
videos/pexels-antoni-shkraba-8048492-540x960-25fps/out_31/0031.jpg
videos/pexels-antoni-shkraba-8048492-540x960-25fps/out_81/0081.jpg
videos/pexels-antoni-shkraba-8048492-540x960-25fps/out_1/0001.jpg
videos/pexels-antoni-shkraba-8048492-540x960-25fps/out_31/0032.jpg
Process Process-2:
Traceback (most recent call last):
File "/home/smbu/anaconda3/envs/rerender/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/smbu/anaconda3/envs/rerender/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/smbu/mediadisk/Rerender_A_Video/video_blend.py", line 110, in process_sequences
process_one_sequence(i, video_sequence)
File "/home/smbu/mediadisk/Rerender_A_Video/video_blend.py", line 97, in process_one_sequence
cmd += ' ' + g.get_cmd(j, w)
File "/home/smbu/mediadisk/Rerender_A_Video/blender/guide.py", line 101, in get_cmd
warped_img = flow_calc.warp(prev_img, self.flows[i - 1],
File "/home/smbu/mediadisk/Rerender_A_Video/flow/flow_utils.py", line 188, in warp
if len(img.shape) == 2:
AttributeError: 'NoneType' object has no attribute 'shape'
videos/pexels-antoni-shkraba-8048492-540x960-25fps/out_81/0082.jpg
Process Process-4:
Traceback (most recent call last):
File "/home/smbu/anaconda3/envs/rerender/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/smbu/anaconda3/envs/rerender/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/smbu/mediadisk/Rerender_A_Video/video_blend.py", line 110, in process_sequences
process_one_sequence(i, video_sequence)
File "/home/smbu/mediadisk/Rerender_A_Video/video_blend.py", line 97, in process_one_sequence
cmd += ' ' + g.get_cmd(j, w)
File "/home/smbu/mediadisk/Rerender_A_Video/blender/guide.py", line 101, in get_cmd
warped_img = flow_calc.warp(prev_img, self.flows[i - 1],
File "/home/smbu/mediadisk/Rerender_A_Video/flow/flow_utils.py", line 188, in warp
if len(img.shape) == 2:
AttributeError: 'NoneType' object has no attribute 'shape'
videos/pexels-antoni-shkraba-8048492-540x960-25fps/out_61/0061.jpg
videos/pexels-antoni-shkraba-8048492-540x960-25fps/out_1/0002.jpg
Process Process-1:
Traceback (most recent call last):
File "/home/smbu/anaconda3/envs/rerender/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/smbu/anaconda3/envs/rerender/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/smbu/mediadisk/Rerender_A_Video/video_blend.py", line 110, in process_sequences
process_one_sequence(i, video_sequence)
File "/home/smbu/mediadisk/Rerender_A_Video/video_blend.py", line 97, in process_one_sequence
cmd += ' ' + g.get_cmd(j, w)
File "/home/smbu/mediadisk/Rerender_A_Video/blender/guide.py", line 101, in get_cmd
warped_img = flow_calc.warp(prev_img, self.flows[i - 1],
File "/home/smbu/mediadisk/Rerender_A_Video/flow/flow_utils.py", line 188, in warp
if len(img.shape) == 2:
AttributeError: 'NoneType' object has no attribute 'shape'
videos/pexels-antoni-shkraba-8048492-540x960-25fps/out_61/0062.jpg
Process Process-3:
Traceback (most recent call last):
File "/home/smbu/anaconda3/envs/rerender/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/smbu/anaconda3/envs/rerender/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/smbu/mediadisk/Rerender_A_Video/video_blend.py", line 110, in process_sequences
process_one_sequence(i, video_sequence)
File "/home/smbu/mediadisk/Rerender_A_Video/video_blend.py", line 97, in process_one_sequence
cmd += ' ' + g.get_cmd(j, w)
File "/home/smbu/mediadisk/Rerender_A_Video/blender/guide.py", line 101, in get_cmd
warped_img = flow_calc.warp(prev_img, self.flows[i - 1],
File "/home/smbu/mediadisk/Rerender_A_Video/flow/flow_utils.py", line 188, in warp
if len(img.shape) == 2:
AttributeError: 'NoneType' object has no attribute 'shape'
ebsynth: 3.1687724590301514
Traceback (most recent call last):
File "video_blend.py", line 319, in <module>
main(args)
File "video_blend.py", line 268, in main
process_seq(video_sequence, i, blend_histogram, blend_gradient)
File "video_blend.py", line 199, in process_seq
dist1s.append(load_error(bin_a, img_shape))
File "video_blend.py", line 160, in load_error
with open(bin_path, 'rb') as fp:
FileNotFoundError: [Errno 2] No such file or directory: 'videos/pexels-antoni-shkraba-8048492-540x960-25fps/out_1/0002.bin'
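Since the Ebsynth worker processes above all died on the NoneType error, the outputs they should have produced never appear, and the blending stage then fails on the missing files. A small diagnostic along these lines (directory layout inferred from the paths in the log; not project code) can confirm which per-sequence outputs are absent before blending:

```python
# List expected per-sequence outputs (out_<key>/NNNN.jpg) that are missing.
from pathlib import Path

def missing_outputs(root, key, frames):
    seq_dir = Path(root) / f"out_{key}"
    return [f"{n:04d}.jpg" for n in frames
            if not (seq_dir / f"{n:04d}.jpg").is_file()]

# Against an empty directory, every frame is reported missing.
import tempfile
with tempfile.TemporaryDirectory() as tmp:
    print(missing_outputs(tmp, 1, range(1, 4)))  # ['0001.jpg', '0002.jpg', '0003.jpg']
```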

A question about using video_blend.py on its own

When running video_blend.py standalone, an error occurs at line 105 of transformer.py, scores += attn_mask.repeat(b_new, 1, 1): the shapes of attn_mask and scores do not match.

RuntimeError: The size of tensor a (8) must match the size of tensor b (64) at non-singleton dimension 0
Process Process-4:
Traceback (most recent call last):

Any suggestions? I'd like to use the video-blend functionality on its own; looking forward to your follow-up, and thank you for your work. I tried modifying some parts of the code, but still could not avoid the error.

Support for Multi-Controlnet, and using separate images for controlnet

Thanks for the code, it's been working really well!

But I'd like further precision with multi-ControlNet. (I am using 3D software to create the video first.) I want to use both Canny and depth: Canny from the input video, and depth images that I rendered myself. How can I modify the code to do so?

x0_strength

What is "x0_strength"? Is it equivalent to denoising strength?

How to add other ControlNets like Lineart_anime?

Hi!
It's not so clear how to add Lineart_anime or Depth_leres++ to webUI.py.
This is the code that should be changed:
"elif control_type == 'canny':
canny_detector = CannyDetector()"

In your example, you add:
"from ControlNet.annotator.midas import MidasDetector
...
elif control_type == 'depth':
    midas = MidasDetector()

    def apply_midas(x):
        detected_map, _ = midas(x)
        return detected_map

    self.detector = apply_midas"
But how can we add new ControlNet annotators like Lineart?
I compared the annotator code in my Automatic1111 build.
There is no such obvious class creation for Leres++ and Lineart as "class MidasDetector:" for Midas
(I mean the file "sd-webui-controlnet\annotator\lineart_anime\__init__.py").

Or is there another, easier way?
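For what it's worth, the wiring pattern quoted above for depth can be mirrored for any annotator once you have a callable. Here is a runnable sketch with a stub in place of the real class (the actual lineart annotator's class name and import path are not verified here; only the branch structure mirrors the quoted code):

```python
# Stub standing in for whatever detector class the real annotator exposes.
class FakeLineartDetector:
    def __call__(self, x):
        return x  # a real detector would return a line-art map

def build_detector(control_type):
    # Mirrors the branch structure of the quoted depth example.
    if control_type == 'lineart':
        lineart = FakeLineartDetector()

        def apply_lineart(x):
            return lineart(x)

        return apply_lineart
    raise ValueError(f'unsupported control type: {control_type}')

detector = build_detector('lineart')
print(detector('frame'))  # the stub passes its input through unchanged
```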

After launching the web UI, it throws an error

H:\Rerender_A_Video>python webUI.py
Traceback (most recent call last):
File "H:\Rerender_A_Video\webUI.py", line 18, in <module>
import src.import_util  # noqa: F401
File "H:\Rerender_A_Video\src\import_util.py", line 10, in <module>
import deps.ControlNet.share  # noqa: F401 E402
ModuleNotFoundError: No module named 'deps.ControlNet.share'
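This particular ModuleNotFoundError usually means the deps/ControlNet code was never fetched: the repo ships it as a git submodule, so cloning without --recursive (or skipping git submodule update --init --recursive) leaves the directory empty. A quick preflight check, with the file path taken from the traceback above:

```python
# Check whether the ControlNet submodule contents are actually on disk.
from pathlib import Path

def controlnet_fetched(repo_root):
    return (Path(repo_root) / "deps" / "ControlNet" / "share.py").is_file()

print(controlnet_fetched("."))  # True only inside a fully fetched checkout
```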

module 'keras.backend' has no attribute 'is_tensor'

No module 'xformers'. Proceeding without it.
ControlLDM: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Loaded model config from [./deps/ControlNet/models/cldm_v15.yaml]
Loaded state_dict from [./models/control_sd15_canny.pth]
C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\safetensors\torch.py:98: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
with safe_open(filename, framework="pt", device=device) as f:
Traceback (most recent call last):
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\queueing.py", line 388, in call_prediction
output = await route_utils.call_process_api(
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\route_utils.py", line 219, in call_process_api
output = await app.get_blocks().process_api(
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\blocks.py", line 1437, in process_api
result = await self.call_function(
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\blocks.py", line 1109, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\utils.py", line 641, in wrapper
response = f(*args, **kwargs)
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "F:\renderer\Rerender_A_Video\webUI.py", line 286, in process1
img_ = numpy2tensor(img)
File "F:\renderer\Rerender_A_Video\src\img_util.py", line 23, in numpy2tensor
return einops.rearrange(x0, 'b h w c -> b c h w').clone()
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\einops\einops.py", line 425, in rearrange
return reduce(tensor, pattern, reduction='rearrange', **axes_lengths)
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\einops\einops.py", line 369, in reduce
return recipe.apply(tensor)
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\einops\einops.py", line 204, in apply
backend = get_backend(tensor)
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\einops\_backends.py", line 49, in get_backend
if backend.is_appropriate_type(tensor):
File "C:\Users\Pond\AppData\Local\Programs\Python\Python310\lib\site-packages\einops\_backends.py", line 513, in is_appropriate_type
return self.K.is_tensor(tensor) and self.K.is_keras_tensor(tensor)
AttributeError: module 'keras.backend' has no attribute 'is_tensor'
..........

I tried reinstalling keras and TensorFlow many times; it didn't help.
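The traceback shows einops probing every backend it can import at dispatch time: a stale keras/TensorFlow install gets probed even though this project never uses them, and the installed keras no longer exposes keras.backend.is_tensor. The commonly reported workaround is uninstalling (or upgrading) those packages rather than reinstalling them; a snippet to check whether they are present in the environment:

```python
# Report whether the packages that trigger the einops keras-backend probe
# are installed; removing or upgrading them is the usual workaround.
from importlib.util import find_spec

conflicts = [name for name in ("keras", "tensorflow") if find_spec(name) is not None]
print("potentially conflicting packages:", conflicts or "none")
```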

Example

Thank you for sharing. I have this up and running, but would like some help obtaining a desirable image. For example: if I want a cartoon character, what parameters do I need to set? Just some basics, please, and thank you.

AttributeError: 'NoneType' object has no attribute 'model'

Did a git pull for the latest updates.
Started webUI.py.
Clicked the topmost example in the Gradio interface.
Clicked Run 1st Keyframe.
Got this:

Traceback (most recent call last):
File "c:\python\lib\site-packages\gradio\queueing.py", line 388, in call_prediction
output = await route_utils.call_process_api(
File "c:\python\lib\site-packages\gradio\route_utils.py", line 219, in call_process_api
output = await app.get_blocks().process_api(
File "c:\python\lib\site-packages\gradio\blocks.py", line 1437, in process_api
result = await self.call_function(
File "c:\python\lib\site-packages\gradio\blocks.py", line 1109, in call_function
prediction = await anyio.to_thread.run_sync(
File "c:\python\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "c:\python\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "c:\python\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "c:\python\lib\site-packages\gradio\utils.py", line 641, in wrapper
response = f(*args, **kwargs)
File "c:\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\Oliver\Documents\Github\Rerender_A_Video\webUI.py", line 270, in process1
model = ddim_v_sampler.model
AttributeError: 'NoneType' object has no attribute 'model'

no attribute 'is_tensor'

Hi! When I run python rerender.py --cfg config/real2sculpture.json
I have this error:
File "...\Python\Python310\lib\site-packages\einops\_backends.py", line 513, in is_appropriate_type
return self.K.is_tensor(tensor) and self.K.is_keras_tensor(tensor)
AttributeError: module 'keras.backend' has no attribute 'is_tensor'

Audio

Thank you for providing Rerender_A_Video!

I've attempted to synchronize the audio and visuals in two different ways: first, by removing the audio, and second, by keeping it. However, in both cases, I've had to resync the audio as there are instances where the lip movements do not align with the audio. Should I manually adjust these synchronization issues within the software, or is there an alternative tool or feature available that can ensure the audio and body language are in sync?

Support for other Controlnets?!

That info is not complete enough to add the depth model. "Add model loading options like elif control_type == 'depth': following"
Sure, but add what, following what?

It would be amazing to try different depth models as well, this surely would improve quality again a bit! :)

TypeError: Can't convert object to 'str' for 'filename'

I was able to run it with Python 3.8.5, but generation was slow due to the absence of the 'xformers' package. For various reasons, I upgraded Python to version 3.10. With the new version, I was able to generate the first frame successfully; however, I encountered an error when generating the keyframes.

Traceback (most recent call last):
File "C:\Users\test\miniconda3\envs\rerender5\lib\site-packages\gradio\queueing.py", line 388, in call_prediction
output = await route_utils.call_process_api(
File "C:\Users\test\miniconda3\envs\rerender5\lib\site-packages\gradio\route_utils.py", line 219, in call_process_api
output = await app.get_blocks().process_api(
File "C:\Users\test\miniconda3\envs\rerender5\lib\site-packages\gradio\blocks.py", line 1437, in process_api
result = await self.call_function(
File "C:\Users\test\miniconda3\envs\rerender5\lib\site-packages\gradio\blocks.py", line 1109, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\test\miniconda3\envs\rerender5\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\test\miniconda3\envs\rerender5\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\Users\test\miniconda3\envs\rerender5\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\Users\test\miniconda3\envs\rerender5\lib\site-packages\gradio\utils.py", line 641, in wrapper
response = f(*args, **kwargs)
File "C:\Users\test\miniconda3\envs\rerender5\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "G:\AI\Rerender_A_Video\webui.py", line 404, in process2
frame = cv2.imread(cid)
TypeError: Can't convert object to 'str' for 'filename'

Dataclasses error when running on Windows

I'm trying to run the program on Windows 11, but it raises the following error:

Traceback (most recent call last):
  File "C:\Users\potra\Desktop\Work\AI\Rerender_A_Video\webUI.py", line 18, in <module>
    import src.import_util  # noqa: F401
    ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\potra\Desktop\Work\AI\Rerender_A_Video\src\import_util.py", line 10, in <module>
    import deps.ControlNet.share  # noqa: F401 E402
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\potra\Desktop\Work\AI\Rerender_A_Video\deps\ControlNet\share.py", line 2, in <module>
    from cldm.hack import disable_verbosity, enable_sliced_attention
  File "C:\Users\potra\Desktop\Work\AI\Rerender_A_Video\deps/ControlNet\cldm\hack.py", line 4, in <module>
    import ldm.modules.encoders.modules
  File "C:\Users\potra\Desktop\Work\AI\Rerender_A_Video\deps/ControlNet\ldm\modules\encoders\modules.py", line 7, in <module>
    import open_clip
  File "C:\Users\potra\AppData\Local\Programs\Python\Python311\Lib\site-packages\open_clip\__init__.py", line 2, in <module>
    from .factory import list_models, create_model, create_model_and_transforms, add_model_config
  File "C:\Users\potra\AppData\Local\Programs\Python\Python311\Lib\site-packages\open_clip\factory.py", line 13, in <module>
    from .model import CLIP, convert_weights_to_fp16, resize_pos_embed
  File "C:\Users\potra\AppData\Local\Programs\Python\Python311\Lib\site-packages\open_clip\model.py", line 17, in <module>
    from .timm_model import TimmModel
  File "C:\Users\potra\AppData\Local\Programs\Python\Python311\Lib\site-packages\open_clip\timm_model.py", line 10, in <module>
    import timm
  File "C:\Users\potra\AppData\Local\Programs\Python\Python311\Lib\site-packages\timm\__init__.py", line 2, in <module>
    from .models import create_model, list_models, is_model, list_modules, model_entrypoint, \
  File "C:\Users\potra\AppData\Local\Programs\Python\Python311\Lib\site-packages\timm\models\__init__.py", line 28, in <module>
    from .maxxvit import *
  File "C:\Users\potra\AppData\Local\Programs\Python\Python311\Lib\site-packages\timm\models\maxxvit.py", line 225, in <module>
    @dataclass
     ^^^^^^^^^
  File "C:\Users\potra\AppData\Local\Programs\Python\Python311\Lib\dataclasses.py", line 1230, in dataclass
    return wrap(cls)
           ^^^^^^^^^
  File "C:\Users\potra\AppData\Local\Programs\Python\Python311\Lib\dataclasses.py", line 1220, in wrap
    return _process_class(cls, init, repr, eq, order, unsafe_hash,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\potra\AppData\Local\Programs\Python\Python311\Lib\dataclasses.py", line 958, in _process_class
    cls_fields.append(_get_field(cls, name, type, kw_only))
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\potra\AppData\Local\Programs\Python\Python311\Lib\dataclasses.py", line 815, in _get_field
    raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'timm.models.maxxvit.MaxxVitConvCfg'> for field conv_cfg is not allowed: use default_factory

I made sure to follow every step, including the installation requirements for Windows (both installation steps ran smoothly).

Did I miss something?
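For reference, this `ValueError` comes from a Python 3.11 behavior change: `dataclasses` now rejects any unhashable default value, including instances of ordinary (non-frozen) dataclasses such as timm's `MaxxVitConvCfg`, and requires `default_factory` instead. Upgrading `timm` to a release that already applies this fix, or running under Python 3.10, are the usual workarounds. A minimal sketch of the accepted pattern (the names below are illustrative, not timm's actual code):

```python
from dataclasses import dataclass, field

@dataclass
class ConvCfg:              # illustrative stand-in for timm's MaxxVitConvCfg
    kernel_size: int = 3

@dataclass
class ModelCfg:
    # On Python 3.11+, `conv_cfg: ConvCfg = ConvCfg()` raises
    # "ValueError: mutable default ... use default_factory" because a
    # plain (non-frozen) dataclass instance is unhashable.
    # The accepted form wraps the default in a factory:
    conv_cfg: ConvCfg = field(default_factory=ConvCfg)

print(ModelCfg().conv_cfg.kernel_size)  # 3
```

Each `ModelCfg()` instance then gets its own fresh `ConvCfg`, which is exactly what the error message is asking for.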

2 major issues for Windows - too many errors, spent hours fixing them all - pip freeze attached

The current pinned Gradio version does not work, so I had to update it to the latest version; now it works.

The Ebsynth compile does not work. Please upload the compiled Ebsynth binary directly to the repo or Hugging Face and make it download automatically.

I tried CUDA 11.4 with Visual Studio 2019 and got nothing but errors.

Finally, without Ebsynth, I ran an example video with the default settings. I got the key frames, but the rest is all errors.

Below is the video processing error:

Global seed set to 2008357443
Data shape for DDIM sampling is (1, 4, 64, 72), eta 0.0
Running DDIM Sampling with 20 timesteps
DDIM Sampler: 100%|██████████| 20/20 [00:05<00:00,  3.92it/s]
python video_blend.py result\brad --beg 1 --end 311 --itv 10 --key keys  --output result\brad\blend.mp4 --fps 23.976023976023978 --n_proc 4 -ps
C:\Python3108\lib\site-packages\torch\functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ..\aten\src\ATen\native\TensorShape.cpp:3484.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
C:\Python3108\lib\site-packages\torch\functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ..\aten\src\ATen\native\TensorShape.cpp:3484.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
C:\Python3108\lib\site-packages\torch\functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ..\aten\src\ATen\native\TensorShape.cpp:3484.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
C:\Python3108\lib\site-packages\torch\functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ..\aten\src\ATen\native\TensorShape.cpp:3484.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_81\0082.jpg'): can't open/read file: check file path/integrity
Process Process-2:
Traceback (most recent call last):
  File "C:\Python3108\lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "C:\Python3108\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\video_blend.py", line 109, in process_sequences
    process_one_sequence(i, video_sequence)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\video_blend.py", line 96, in process_one_sequence
    cmd += ' ' + g.get_cmd(j, w)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\blender\guide.py", line 100, in get_cmd
    warped_img = flow_calc.warp(prev_img, self.flows[i - 1],
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\flow\flow_utils.py", line 188, in warp
    if len(img.shape) == 2:
AttributeError: 'NoneType' object has no attribute 'shape'
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_241\0242.jpg'): can't open/read file: check file path/integrity
Process Process-4:
Traceback (most recent call last):
  File "C:\Python3108\lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "C:\Python3108\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\video_blend.py", line 109, in process_sequences
    process_one_sequence(i, video_sequence)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\video_blend.py", line 96, in process_one_sequence
    cmd += ' ' + g.get_cmd(j, w)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\blender\guide.py", line 100, in get_cmd
    warped_img = flow_calc.warp(prev_img, self.flows[i - 1],
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\flow\flow_utils.py", line 188, in warp
    if len(img.shape) == 2:
AttributeError: 'NoneType' object has no attribute 'shape'
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_161\0162.jpg'): can't open/read file: check file path/integrity
Process Process-3:
Traceback (most recent call last):
  File "C:\Python3108\lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "C:\Python3108\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\video_blend.py", line 109, in process_sequences
    process_one_sequence(i, video_sequence)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\video_blend.py", line 96, in process_one_sequence
    cmd += ' ' + g.get_cmd(j, w)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\blender\guide.py", line 100, in get_cmd
    warped_img = flow_calc.warp(prev_img, self.flows[i - 1],
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\flow\flow_utils.py", line 188, in warp
    if len(img.shape) == 2:
AttributeError: 'NoneType' object has no attribute 'shape'
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_1\0002.jpg'): can't open/read file: check file path/integrity
Process Process-1:
Traceback (most recent call last):
  File "C:\Python3108\lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "C:\Python3108\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\video_blend.py", line 109, in process_sequences
    process_one_sequence(i, video_sequence)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\video_blend.py", line 96, in process_one_sequence
    cmd += ' ' + g.get_cmd(j, w)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\blender\guide.py", line 100, in get_cmd
    warped_img = flow_calc.warp(prev_img, self.flows[i - 1],
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\flow\flow_utils.py", line 188, in warp
    if len(img.shape) == 2:
AttributeError: 'NoneType' object has no attribute 'shape'
ebsynth: 6.116290330886841
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_1\0002.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_1\0003.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_1\0004.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_1\0005.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_1\0006.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_1\0007.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_1\0008.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_1\0009.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_1\0010.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_11\0011.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_11\0002.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_11\0003.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_11\0004.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_11\0005.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_11\0006.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_11\0007.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_11\0008.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_11\0009.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_11\0010.jpg'): can't open/read file: check file path/integrity
Traceback (most recent call last):
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\video_blend.py", line 318, in <module>
    main(args)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\video_blend.py", line 267, in main
    process_seq(video_sequence, i, blend_histogram, blend_gradient)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\video_blend.py", line 198, in process_seq
    dist1s.append(load_error(bin_a, img_shape))
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\video_blend.py", line 159, in load_error
    with open(bin_path, 'rb') as fp:
FileNotFoundError: [Errno 2] No such file or directory: 'result\\brad\\out_1\\0002.bin'
Traceback (most recent call last):
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\venv\lib\site-packages\gradio\queueing.py", line 388, in call_prediction
    output = await route_utils.call_process_api(
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\venv\lib\site-packages\gradio\route_utils.py", line 219, in call_process_api
    output = await app.get_blocks().process_api(
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\venv\lib\site-packages\gradio\blocks.py", line 1440, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\venv\lib\site-packages\gradio\blocks.py", line 1341, in postprocess_data
    prediction_value = block.postprocess(prediction_value)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\venv\lib\site-packages\gradio\components\video.py", line 281, in postprocess
    processed_files = (self._format_video(y), None)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\venv\lib\site-packages\gradio\components\video.py", line 355, in _format_video
    video = self.make_temp_copy_if_needed(video)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\venv\lib\site-packages\gradio\components\base.py", line 226, in make_temp_copy_if_needed
    temp_dir = self.hash_file(file_path)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\venv\lib\site-packages\gradio\components\base.py", line 190, in hash_file
    with open(file_path, "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'result\\brad\\blend.mp4'
python video_blend.py result\brad --beg 1 --end 311 --itv 10 --key keys  --output result\brad\blend.mp4 --fps 23.976023976023978 --n_proc 4 -ps
C:\Python3108\lib\site-packages\torch\functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ..\aten\src\ATen\native\TensorShape.cpp:3484.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
C:\Python3108\lib\site-packages\torch\functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ..\aten\src\ATen\native\TensorShape.cpp:3484.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
C:\Python3108\lib\site-packages\torch\functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ..\aten\src\ATen\native\TensorShape.cpp:3484.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
C:\Python3108\lib\site-packages\torch\functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ..\aten\src\ATen\native\TensorShape.cpp:3484.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_81\0082.jpg'): can't open/read file: check file path/integrity
Process Process-2:
Traceback (most recent call last):
  File "C:\Python3108\lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "C:\Python3108\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\video_blend.py", line 109, in process_sequences
    process_one_sequence(i, video_sequence)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\video_blend.py", line 96, in process_one_sequence
    cmd += ' ' + g.get_cmd(j, w)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\blender\guide.py", line 100, in get_cmd
    warped_img = flow_calc.warp(prev_img, self.flows[i - 1],
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\flow\flow_utils.py", line 188, in warp
    if len(img.shape) == 2:
AttributeError: 'NoneType' object has no attribute 'shape'
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_241\0242.jpg'): can't open/read file: check file path/integrity
Process Process-4:
Traceback (most recent call last):
  File "C:\Python3108\lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "C:\Python3108\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\video_blend.py", line 109, in process_sequences
    process_one_sequence(i, video_sequence)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\video_blend.py", line 96, in process_one_sequence
    cmd += ' ' + g.get_cmd(j, w)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\blender\guide.py", line 100, in get_cmd
    warped_img = flow_calc.warp(prev_img, self.flows[i - 1],
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\flow\flow_utils.py", line 188, in warp
    if len(img.shape) == 2:
AttributeError: 'NoneType' object has no attribute 'shape'
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_161\0162.jpg'): can't open/read file: check file path/integrity
Process Process-3:
Traceback (most recent call last):
  File "C:\Python3108\lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "C:\Python3108\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\video_blend.py", line 109, in process_sequences
    process_one_sequence(i, video_sequence)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\video_blend.py", line 96, in process_one_sequence
    cmd += ' ' + g.get_cmd(j, w)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\blender\guide.py", line 100, in get_cmd
    warped_img = flow_calc.warp(prev_img, self.flows[i - 1],
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\flow\flow_utils.py", line 188, in warp
    if len(img.shape) == 2:
AttributeError: 'NoneType' object has no attribute 'shape'
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_1\0002.jpg'): can't open/read file: check file path/integrity
Process Process-1:
Traceback (most recent call last):
  File "C:\Python3108\lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "C:\Python3108\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\video_blend.py", line 109, in process_sequences
    process_one_sequence(i, video_sequence)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\video_blend.py", line 96, in process_one_sequence
    cmd += ' ' + g.get_cmd(j, w)
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\blender\guide.py", line 100, in get_cmd
    warped_img = flow_calc.warp(prev_img, self.flows[i - 1],
  File "G:\Rerender_A_Video auto install\Rerender_A_Video\flow\flow_utils.py", line 188, in warp
    if len(img.shape) == 2:
AttributeError: 'NoneType' object has no attribute 'shape'
ebsynth: 4.1365814208984375
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('result\brad\out_1\
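For what it's worth, the repeated `AttributeError: 'NoneType' object has no attribute 'shape'` in the log above is a downstream symptom rather than the root cause: `cv2.imread` silently returns `None` because the `out_*` images were never produced (the Ebsynth stage failed), and the warp code then dereferences that `None`. A small defensive sketch (a hypothetical helper, not the project's code) that fails early with a clearer message:

```python
import os

def require_frames(paths):
    """Fail fast if any expected frame file is missing, instead of letting
    cv2.imread return None and crash later (e.g. in flow_utils.warp)."""
    missing = [p for p in paths if not os.path.isfile(p)]
    if missing:
        raise FileNotFoundError(
            f"{len(missing)} expected frame(s) missing, e.g. {missing[0]!r}. "
            "Did the previous stage (Ebsynth) run successfully?")
    return paths
```

With a check like this before `video_blend.py` spawns its worker processes, the missing-Ebsynth-output problem would surface as one clear error instead of four multiprocessing tracebacks.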


Below is the Ebsynth build error:

(venv) G:\Rerender_A_Video auto install\Rerender_A_Video>python install2.py
Build Ebsynth Windows 64 bit. If you want to build for 32 bit, please modify install.py.
.\build-win64-cpu+cuda.bat
"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvarsall.bat"
nvcc warning : The 'compute_35', 'compute_37', 'compute_50', 'sm_35', 'sm_37' and 'sm_50' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
ebsynth.cpp
The contents of <filesystem> are available only with C++17 or later.
ebsynth_cpu.cpp
ebsynth_cuda.cu
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include\vcruntime.h(197): error: invalid redeclaration of type name "size_t"

C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include\vcruntime_new.h(48): error: first parameter of allocation function must be of type "size_t"

C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include\vcruntime_new.h(53): error: first parameter of allocation function must be of type "size_t"

C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include\vcruntime_new.h(59): error: first parameter of allocation function must be of type "size_t"

C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include\vcruntime_new.h(64): error: first parameter of allocation function must be of type "size_t"

C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include\vcruntime_new.h(166): error: first parameter of allocation function must be of type "size_t"

Cannot clone repository recursively

PS C:\Users\Yuvraj\Desktop> git clone --recursive https://github.com/williamyang1991/Rerender_A_Video.git
Cloning into 'Rerender_A_Video'...
remote: Enumerating objects: 389, done.
remote: Counting objects: 100% (225/225), done.
remote: Compressing objects: 100% (106/106), done.
remote: Total 389 (delta 156), reused 173 (delta 119), pack-reused 164
Receiving objects: 100% (389/389), 9.00 MiB | 5.71 MiB/s, done.
Resolving deltas: 100% (222/222), done.
Submodule 'deps/ControlNet' (git@github.com:lllyasviel/ControlNet.git) registered for path 'deps/ControlNet'
Submodule 'deps/ebsynth' (git@github.com:SingleZombie/ebsynth.git) registered for path 'deps/ebsynth'
Submodule 'deps/gmflow' (git@github.com:haofeixu/gmflow.git) registered for path 'deps/gmflow'
Cloning into 'C:/Users/Yuvraj/Desktop/Rerender_A_Video/deps/ControlNet'...
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
fatal: clone of 'git@github.com:lllyasviel/ControlNet.git' into submodule path 'C:/Users/Yuvraj/Desktop/Rerender_A_Video/deps/ControlNet' failed
Failed to clone 'deps/ControlNet'. Retry scheduled
Cloning into 'C:/Users/Yuvraj/Desktop/Rerender_A_Video/deps/ebsynth'...
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
fatal: clone of 'git@github.com:SingleZombie/ebsynth.git' into submodule path 'C:/Users/Yuvraj/Desktop/Rerender_A_Video/deps/ebsynth' failed
Failed to clone 'deps/ebsynth'. Retry scheduled
Cloning into 'C:/Users/Yuvraj/Desktop/Rerender_A_Video/deps/gmflow'...
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
fatal: clone of 'git@github.com:haofeixu/gmflow.git' into submodule path 'C:/Users/Yuvraj/Desktop/Rerender_A_Video/deps/gmflow' failed
Failed to clone 'deps/gmflow'. Retry scheduled
Cloning into 'C:/Users/Yuvraj/Desktop/Rerender_A_Video/deps/ControlNet'...
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
fatal: clone of 'git@github.com:lllyasviel/ControlNet.git' into submodule path 'C:/Users/Yuvraj/Desktop/Rerender_A_Video/deps/ControlNet' failed
Failed to clone 'deps/ControlNet' a second time, aborting
PS C:\Users\Yuvraj\Desktop>
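"Host key verification failed" means git is fetching the submodules over SSH (`git@github.com:...`, as pinned in `.gitmodules`) without a configured SSH key. One fix is `git config --global url."https://github.com/".insteadOf "git@github.com:"` before cloning; alternatively, the URLs in `.gitmodules` can be rewritten to HTTPS, e.g. with a sketch like this (an illustrative helper, not project tooling):

```python
import re

def ssh_to_https(gitmodules_text):
    """Rewrite SSH GitHub URLs (git@github.com:owner/repo.git) to HTTPS
    (https://github.com/owner/repo.git) so submodules clone without an
    SSH key."""
    return re.sub(r"git@github\.com:", "https://github.com/", gitmodules_text)
```

After rewriting `.gitmodules`, run `git submodule sync` followed by `git submodule update --init --recursive`.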

IndexError: list index out of range

I get the same error as another user, but with different error lines. "Run 1st Key Frame" works well: I get my style.
Then running "Run Key Frames" gives:

Traceback (most recent call last):
  File "c:\python\lib\site-packages\gradio\queueing.py", line 388, in call_prediction
    output = await route_utils.call_process_api(
  File "c:\python\lib\site-packages\gradio\route_utils.py", line 219, in call_process_api
    output = await app.get_blocks().process_api(
  File "c:\python\lib\site-packages\gradio\blocks.py", line 1437, in process_api
    result = await self.call_function(
  File "c:\python\lib\site-packages\gradio\blocks.py", line 1109, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "c:\python\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "c:\python\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "c:\python\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "c:\python\lib\site-packages\gradio\utils.py", line 641, in wrapper
    response = f(*args, **kwargs)
  File "c:\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Oliver\Documents\Github\Rerender_A_Video\webUI.py", line 550, in process2
    samples, _ = ddim_v_sampler.sample(
  File "c:\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Oliver\Documents\Github\Rerender_A_Video\src\ddim_v_hacked.py", line 212, in sample
    samples, intermediates = self.ddim_sampling(
  File "c:\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Oliver\Documents\Github\Rerender_A_Video\src\ddim_v_hacked.py", line 312, in ddim_sampling
    weight = mask[i]
IndexError: list index out of range

I also read something about needing to do some frame math myself, but I don't know the formula/equation I should use for that.
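For anyone hitting the same `IndexError`: the "frame math" is about keeping the number of sampled key frames consistent with the length of the mask/weight list indexed in `ddim_v_hacked.py`. Assuming key frames are taken every `interval` frames from `start` to `end` inclusive (an assumption about the sampling scheme, not confirmed from the code), the expected count can be checked like this:

```python
def key_frame_indices(start, end, interval):
    """Key-frame indices sampled every `interval` frames in [start, end].

    If the mask/weight list built elsewhere is shorter than
    len(key_frame_indices(...)), indexing it per key frame raises
    'IndexError: list index out of range'.
    """
    return list(range(start, end + 1, interval))

# Matches the command seen in other reports:
# --beg 1 --end 101 --itv 10  ->  11 key frames
print(len(key_frame_indices(1, 101, 10)))  # 11
```

Picking `end` so that `end - start` is a multiple of `interval` keeps the last key frame aligned with the final sampled frame.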

Windows Error

C:\Users\xxxxxx\Desktop\xxxxder_A_Video>python webUI.py
logging improved.
Running on local URL: http://0.0.0.0:7860

To create a public link, set share=True in launch().
ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\uvicorn\protocols\http\httptools_impl.py", line 435, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\fastapi\applications.py", line 270, in __call__
    await super().__call__(scope, receive, send)
  File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\starlette\applications.py", line 124, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\starlette\middleware\cors.py", line 84, in __call__
    await self.app(scope, receive, send)
  File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\starlette\middleware\exceptions.py", line 75, in __call__
    raise exc
  File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\starlette\middleware\exceptions.py", line 64, in __call__
    await self.app(scope, receive, sender)
  File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\starlette\routing.py", line 680, in __call__
    await route.handle(scope, receive, send)
  File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\starlette\routing.py", line 275, in handle
    await self.app(scope, receive, send)
  File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\starlette\routing.py", line 65, in app
    response = await func(request)
  File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\fastapi\routing.py", line 231, in app
    raw_response = await run_endpoint_function(
  File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\fastapi\routing.py", line 162, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\routes.py", line 282, in api_info
    return gradio.blocks.get_api_info(config, serialize)  # type: ignore
  File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\blocks.py", line 504, in get_api_info
    serializer = serializing.COMPONENT_MAPPING[type]()
KeyError: 'dataset'
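The `KeyError: 'dataset'` at the bottom of this traceback is a known Gradio version mismatch: the installed Gradio does not know the `dataset` component type referenced by the app, so the safest fix is to reinstall the exact version pinned in the repo's `requirements.txt`. A small helper to see what is actually installed (a generic sketch, nothing project-specific):

```python
from importlib.metadata import PackageNotFoundError, version

def installed_version(pkg):
    """Return the installed version string of `pkg`, or None if it is
    not installed (or not visible in the current environment)."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None
```

`installed_version("gradio")` can then be compared against the pin in `requirements.txt` before filing a bug.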

Is SDXL supported?

I can't find any mention of SDXL. Is it supported, and if not, are you planning to add that compatibility?
Thanks!

RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

Hello, I started the web UI successfully and installed all the models, but I got a RuntimeError.
Maybe someone knows how to solve the problem.
: )

  File "C:\Users\PeanutArthur\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 282, in __init__
    super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

Screenshot 2023-09-25 174115
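"failed finding central directory" almost always means the checkpoint download was truncated or corrupted: checkpoints saved with `torch.save` (since PyTorch 1.6) are zip archives, and a partial file fails exactly this check. Re-downloading the model usually fixes it. A quick stdlib sanity check, as a sketch:

```python
import zipfile

def looks_like_valid_checkpoint(path):
    """PyTorch (>=1.6) checkpoints are zip archives; a truncated download
    fails the central-directory check, which is exactly the error above."""
    return zipfile.is_zipfile(path)
```

This only verifies the zip container, not the weights themselves, but it is enough to catch a truncated download before launching the UI.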

Openpose + face?

Hi,
Is there any way to use OpenPose or DWPose with face support + Canny?
I think the output results could be greatly improved with this.

Getting KeyError "dataset" when running the Gradio UI

Traceback (most recent call last):
  File "/mnt/1c418dd9-a3c3-4278-870a-fe3a7069416b/github/Rerender_A_Video/.venv/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/mnt/1c418dd9-a3c3-4278-870a-fe3a7069416b/github/Rerender_A_Video/.venv/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "/mnt/1c418dd9-a3c3-4278-870a-fe3a7069416b/github/Rerender_A_Video/.venv/lib/python3.10/site-packages/fastapi/applications.py", line 292, in __call__
    await super().__call__(scope, receive, send)
  File "/mnt/1c418dd9-a3c3-4278-870a-fe3a7069416b/github/Rerender_A_Video/.venv/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/mnt/1c418dd9-a3c3-4278-870a-fe3a7069416b/github/Rerender_A_Video/.venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/mnt/1c418dd9-a3c3-4278-870a-fe3a7069416b/github/Rerender_A_Video/.venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/mnt/1c418dd9-a3c3-4278-870a-fe3a7069416b/github/Rerender_A_Video/.venv/lib/python3.10/site-packages/starlette/middleware/cors.py", line 83, in __call__
    await self.app(scope, receive, send)
  File "/mnt/1c418dd9-a3c3-4278-870a-fe3a7069416b/github/Rerender_A_Video/.venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/mnt/1c418dd9-a3c3-4278-870a-fe3a7069416b/github/Rerender_A_Video/.venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/mnt/1c418dd9-a3c3-4278-870a-fe3a7069416b/github/Rerender_A_Video/.venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
    raise e
  File "/mnt/1c418dd9-a3c3-4278-870a-fe3a7069416b/github/Rerender_A_Video/.venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "/mnt/1c418dd9-a3c3-4278-870a-fe3a7069416b/github/Rerender_A_Video/.venv/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/mnt/1c418dd9-a3c3-4278-870a-fe3a7069416b/github/Rerender_A_Video/.venv/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/mnt/1c418dd9-a3c3-4278-870a-fe3a7069416b/github/Rerender_A_Video/.venv/lib/python3.10/site-packages/starlette/routing.py", line 66, in app
    response = await func(request)
  File "/mnt/1c418dd9-a3c3-4278-870a-fe3a7069416b/github/Rerender_A_Video/.venv/lib/python3.10/site-packages/fastapi/routing.py", line 273, in app
    raw_response = await run_endpoint_function(
  File "/mnt/1c418dd9-a3c3-4278-870a-fe3a7069416b/github/Rerender_A_Video/.venv/lib/python3.10/site-packages/fastapi/routing.py", line 192, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "/mnt/1c418dd9-a3c3-4278-870a-fe3a7069416b/github/Rerender_A_Video/.venv/lib/python3.10/site-packages/starlette/concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "/mnt/1c418dd9-a3c3-4278-870a-fe3a7069416b/github/Rerender_A_Video/.venv/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/mnt/1c418dd9-a3c3-4278-870a-fe3a7069416b/github/Rerender_A_Video/.venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/mnt/1c418dd9-a3c3-4278-870a-fe3a7069416b/github/Rerender_A_Video/.venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/mnt/1c418dd9-a3c3-4278-870a-fe3a7069416b/github/Rerender_A_Video/.venv/lib/python3.10/site-packages/gradio/routes.py", line 282, in api_info
    return gradio.blocks.get_api_info(config, serialize)  # type: ignore
  File "/mnt/1c418dd9-a3c3-4278-870a-fe3a7069416b/github/Rerender_A_Video/.venv/lib/python3.10/site-packages/gradio/blocks.py", line 504, in get_api_info
    serializer = serializing.COMPONENT_MAPPING[type]()
KeyError: 'dataset'

Unexpected artifacts from video_blend.py

Hi @williamyang1991 ,

Thanks for your impressive work! I am trying to use video_blend.py to interpolate my own video. The following is my command:

python video_blend.py '$dir' --itv 2 --key 'keys' --output '$folder_path/$subfolder_name.mp4' --fps 25.0 -ps --beg 0 --end 2

However, it sometimes generates very strange artifacts. In my example, I have three source images (0000.png, 0001.png, 0002.png) and two already-edited key frames (key_0000.png and key_0002.png). I use the command above to generate gen_0001.png, but there are unexpected artifacts around the man's shoulder. Have you encountered this kind of artifact during your experiments? If so, do you have a solution?

Experimental images can be downloaded here. Thank you!
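As a sanity check on commands like the one above, a tiny helper (hypothetical, not part of the repo; the real script's naming conventions may differ) can enumerate which key-frame indices a given --beg/--end/--itv combination is expected to cover, so you can confirm every required key exists before blending:

```python
def required_key_indices(beg: int, end: int, itv: int) -> list[int]:
    """Key frames sit every `itv` frames starting at `beg`; the final
    frame must also be bracketed by a key, so append `end` when the
    range is not an exact multiple of the interval."""
    keys = list(range(beg, end + 1, itv))
    if keys[-1] != end:
        keys.append(end)  # ensure the last frame has a covering key
    return keys
```

For the command above, `required_key_indices(0, 2, 2)` yields `[0, 2]`, matching key_0000.png and key_0002.png.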

Wont create the video

I followed the instructions to the letter; in fact, I copied and pasted the commands into a command prompt.
git clone [email protected]:williamyang1991/Rerender_A_Video.git --recursive
cd Rerender_A_Video
conda env create -f environment.yml
conda activate rerender
python install.py

Then
python rerender.py --cfg config/van_gogh_man.json -one
python rerender.py --cfg config/van_gogh_man.json -nb

Both work: I get a first frame with edge detection, plus the video frames and keyframes.

Then running
python rerender.py --cfg config/van_gogh_man.json -nr
Fails with error:

A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
logging improved.
python video_blend.py videos/pexels-antoni-shkraba-8048492-540x960-25fps --beg 1 --end 101 --itv 10 --key keys --output videos/pexels-antoni-shkraba-8048492-540x960-25fps/blend.mp4 --fps 25.0 --n_proc 4 -ps
C:\Program Files\Python310\lib\site-packages\torch\functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ..\aten\src\ATen\native\TensorShape.cpp:3484.)
return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
(the same warning is printed once per worker process; it appeared four times in the original log)
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_1\0002.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_31\0032.jpg'): can't open/read file: check file path/integrity
Process Process-1:
Process Process-2:
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_81\0082.jpg'): can't open/read file: check file path/integrity
Process Process-4:
Traceback (most recent call last):
  File "C:\Program Files\Python310\lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "C:\Program Files\Python310\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "D:\Ai art\Rerender_A_Video\video_blend.py", line 109, in process_sequences
    process_one_sequence(i, video_sequence)
  File "D:\Ai art\Rerender_A_Video\video_blend.py", line 96, in process_one_sequence
    cmd += ' ' + g.get_cmd(j, w)
  File "D:\Ai art\Rerender_A_Video\blender\guide.py", line 100, in get_cmd
    warped_img = flow_calc.warp(prev_img, self.flows[i - 1],
  File "D:\Ai art\Rerender_A_Video\flow\flow_utils.py", line 188, in warp
    if len(img.shape) == 2:
AttributeError: 'NoneType' object has no attribute 'shape'
(each worker process prints this same traceback; they were interleaved in the original log)
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_61\0062.jpg'): can't open/read file: check file path/integrity
Process Process-3:
Traceback (most recent call last):
  File "C:\Program Files\Python310\lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "C:\Program Files\Python310\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
File "D:\Ai art\Rerender_A_Video\video_blend.py", line 109, in process_sequences
process_one_sequence(i, video_sequence)
File "D:\Ai art\Rerender_A_Video\video_blend.py", line 96, in process_one_sequence
cmd += ' ' + g.get_cmd(j, w)
File "D:\Ai art\Rerender_A_Video\blender\guide.py", line 100, in get_cmd
warped_img = flow_calc.warp(prev_img, self.flows[i - 1],
File "D:\Ai art\Rerender_A_Video\flow\flow_utils.py", line 188, in warp
if len(img.shape) == 2:
AttributeError: 'NoneType' object has no attribute 'shape'
ebsynth: 10.451462984085083
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_1\0002.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_1\0003.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_1\0004.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_1\0005.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_1\0006.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_1\0007.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_1\0008.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_1\0009.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_1\0010.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_11\0011.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_11\0002.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_11\0003.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_11\0004.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_11\0005.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_11\0006.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_11\0007.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_11\0008.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_11\0009.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:244 cv::findDecoder imread_('videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_11\0010.jpg'): can't open/read file: check file path/integrity
Traceback (most recent call last):
File "D:\Ai art\Rerender_A_Video\video_blend.py", line 318, in <module>
main(args)
File "D:\Ai art\Rerender_A_Video\video_blend.py", line 267, in main
process_seq(video_sequence, i, blend_histogram, blend_gradient)
File "D:\Ai art\Rerender_A_Video\video_blend.py", line 198, in process_seq
dist1s.append(load_error(bin_a, img_shape))
File "D:\Ai art\Rerender_A_Video\video_blend.py", line 159, in load_error
with open(bin_path, 'rb') as fp:
FileNotFoundError: [Errno 2] No such file or directory: 'videos/pexels-antoni-shkraba-8048492-540x960-25fps\out_1\0002.bin'

Folder out_1 contains one file, 0001.jpg; folder out_11 is empty.
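The empty out_* folders indicate the ebsynth propagation step failed before blending even started, which is why the later .bin lookup crashes. A quick check along these lines (an illustrative sketch, not repo code) flags the directories that never received their propagated frames:

```python
import os

# video_blend.py expects each out_<key> directory to contain the frames
# ebsynth propagated from that key (plus matching .bin error files when
# run with -ps). A directory with fewer than 2 files means ebsynth
# produced nothing beyond (at most) the copied key frame.
def report_empty_outputs(video_dir: str) -> list[str]:
    """Return the out_* subdirectories that contain fewer than 2 files."""
    empty = []
    for name in sorted(os.listdir(video_dir)):
        path = os.path.join(video_dir, name)
        if name.startswith('out_') and os.path.isdir(path):
            if len(os.listdir(path)) < 2:
                empty.append(name)
    return empty
```

If this reports empty directories, re-run the ebsynth stage and look at its own error output before retrying the blend.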

Can't go past Installation step 4

After completing the first three installation steps, I get the following error when I try to run the demo or launch the UI (`python rerender.py --cfg config/real2sculpture.json` / `python webUI.py`):

PS E:\Programmi\Rerender_A_Video-main> python webUI.py
logging improved.
Traceback (most recent call last):
File "C:\Users\admin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ffmpy.py", line 93, in run
self.process = subprocess.Popen(
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\subprocess.py", line 1456, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] The system cannot find the file specified

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "E:\Programmi\Rerender_A_Video-main\webUI.py", line 856, in <module>
gr.Examples(
File "C:\Users\admin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\gradio\helpers.py", line 54, in create_examples
examples_obj = Examples(
File "C:\Users\admin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\gradio\helpers.py", line 201, in __init__
self.processed_examples = [
File "C:\Users\admin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\gradio\helpers.py", line 202, in <listcomp>
[
File "C:\Users\admin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\gradio\helpers.py", line 203, in <listcomp>
component.postprocess(sample)
File "C:\Users\admin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\gradio\components.py", line 2190, in postprocess
processed_files = (self._format_video(y), None)
File "C:\Users\admin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\gradio\components.py", line 2237, in _format_video
and not processing_utils.video_is_playable(video)
File "C:\Users\admin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\gradio\processing_utils.py", line 498, in video_is_playable
output = probe.run(stderr=subprocess.PIPE, stdout=subprocess.PIPE)
File "C:\Users\admin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\ffmpy.py", line 98, in run
raise FFExecutableNotFoundError(
ffmpy.FFExecutableNotFoundError: Executable 'ffprobe' not found
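The traceback shows gradio's example preprocessing shelling out to ffprobe via ffmpy, so webUI.py needs ffmpeg installed and on PATH. A quick pre-flight check (illustrative, not part of the repo):

```python
import shutil

# Report which of the ffmpeg command-line tools are not resolvable on
# PATH; an empty list means gradio's ffmpy calls should find them.
def missing_ffmpeg_tools() -> list[str]:
    return [tool for tool in ('ffmpeg', 'ffprobe')
            if shutil.which(tool) is None]
```

If it reports `['ffprobe']`, install ffmpeg and add its bin directory to PATH (on Windows, restart the terminal afterwards so the new PATH takes effect).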

Issue with Model Loading in Rerender_A_Video Script

I hope this message finds you well. I wanted to bring to your attention an issue I've been experiencing while trying to run the Rerender_A_Video script as per the provided instructions. Below, I've detailed the problem and the steps I've taken so far.

When I attempt to run the Rerender_A_Video script using the provided configuration file real2sculpture.json without specifying a custom model file, I encounter an error related to model loading. The error message I receive is as follows:

No module 'xformers'. Proceeding without it.
ControlLDM: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Loaded model config from [./deps/ControlNet/models/cldm_v15.yaml]
Loaded state_dict from [./models/control_sd15_canny.pth]
Traceback (most recent call last):
  File "C:\Users\bryan\Rerender_A_Video\rerender.py", line 462, in <module>
    rerender(cfg, args.one, args.key_video_path)
  File "C:\Users\bryan\Rerender_A_Video\rerender.py", line 89, in rerender
    model.load_state_dict(load_file(cfg.sd_model), strict=False)
                          ^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\bryan\AppData\Local\Programs\Python\Python311\Lib\site-packages\safetensors\torch.py", line 98, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: The system cannot find the file specified. (os error 2)

Steps Taken So Far:

  1. I have followed the provided instructions for setting up the environment, including running the installation script to download the required models and dependencies.

  2. I ensured that CUDA 11.8 is installed and checked for GPU availability using torch.cuda.is_available().

  3. I have made sure that the required libraries and packages are installed, as confirmed using pip list.

  4. I have attempted to run the script both with and without specifying a custom model file. The issue persists in both cases.

  5. I have verified the existence of the model file control_sd15_canny.pth in the specified directory.

Despite following the provided steps and confirming that the model file exists, the script encounters issues during model loading.

I kindly request your assistance in resolving this issue. If you require any additional information or logs to diagnose the problem, please let me know, and I will be happy to provide them.
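For reference, the crash appears to happen because `load_file(cfg.sd_model)` runs even when no valid custom checkpoint path is configured, so a missing file raises before the base weights can be used. A defensive sketch (my assumption about a workaround, not the maintainers' fix; names mirror the traceback above):

```python
import os

# Return a custom SD checkpoint path only when it actually exists;
# otherwise return None so the caller keeps the base weights it
# already loaded from control_sd15_canny.pth.
def resolve_custom_model(sd_model_path):
    if sd_model_path and os.path.isfile(sd_model_path):
        return sd_model_path
    return None

# In rerender.py one might then guard the failing call like:
# path = resolve_custom_model(cfg.sd_model)
# if path is not None:
#     model.load_state_dict(load_file(path), strict=False)
```

This at least turns the hard crash into a fall-back to the base model while the configured path is investigated.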

Exploring Solutions for Improved Image Fluidity in Video Creation

I want to thank you for moving this forward; I love where this is going.

I've read about a few issues regarding the ability to use different ControlNets to help achieve a better final image. I've created this video with your software and acknowledged your work as indicated on GitHub. However, if you watch the video, you'll see that I would prefer a more fluid portrayal of the image, without so many artifacts. There's too much movement, and it distorts the image. Is this due to the need for two ControlNets, or is it related to my configuration of the options?

https://x.com/BryanRebooted/status/1708317182436729145?s=20

I believe everyone (Midjourney, Pika Labs etc..) is looking to solve the consistent character/model issue so that we can film ourselves and tell our stories in a unique way — and this software is the closest I've seen to solving this problem.

Notebook!

Can you create a notebook version, please?

the glibc version

When I run rerender.py to execute the postprocess function, I get the error ./deps/ebsynth/bin/ebsynth: /lib/x86_64-linux-gnu/libm.so.6: version 'GLIBC_2.29' not found (required by ./deps/ebsynth/bin/ebsynth).
The glibc version on my machine is 2.27, so do I need to upgrade glibc to 2.29? Is there any other way besides upgrading? The machine is an A100.
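Per the error message, the prebuilt ebsynth binary requires GLIBC_2.29, so glibc 2.27 cannot load it; besides upgrading the system glibc, rebuilding ebsynth from source on the target machine is the usual alternative. A pure-Python version comparison (illustrative only) for checking the reported glibc against the requirement:

```python
# Compare dotted version strings numerically (so '2.9' < '2.29' is
# handled correctly, unlike plain string comparison).
def version_at_least(have: str, need: str) -> bool:
    to_tuple = lambda v: tuple(int(x) for x in v.split('.'))
    return to_tuple(have) >= to_tuple(need)
```

On Linux, `platform.libc_ver()` reports the running glibc version, so `version_at_least(platform.libc_ver()[1], '2.29')` tells you whether the bundled binary can run.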

Can't find 0002.bin; how can I get it?

Hello, I have encountered the same problem and cannot find the relevant .bin file. Ebsynth itself runs normally: I can find the output images in the out_1 folder, and there are also .npy files in tmp. How can I fix this?
python video_blend.py result/pexels-koolshooters-7322716 --beg 1 --end 101 --itv 10 --key keys --output result/pexels-koolshooters-7322716/blend.mp4 --fps 25.0 --n_proc 4 -ps
/home/ai-tets/miniconda3/envs/lmp_1/lib/python3.10/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
(the same warning is printed once per worker process; it appeared four times in the original log)
ebsynth: 113.05732083320618
Traceback (most recent call last):
File "/home/ai-tets/wangzijian/image_generator/Rerender_A_Video-main/video_blend.py", line 330, in <module>
main(args)
File "/home/ai-tets/wangzijian/image_generator/Rerender_A_Video-main/video_blend.py", line 268, in main
process_seq(video_sequence, i, blend_histogram, blend_gradient)
File "/home/ai-tets/wangzijian/image_generator/Rerender_A_Video-main/video_blend.py", line 199, in process_seq
dist1s.append(load_error(bin_a, img_shape))
File "/home/ai-tets/wangzijian/image_generator/Rerender_A_Video-main/video_blend.py", line 159, in load_error
with open(bin_path, 'rb') as fp:
FileNotFoundError: [Errno 2] No such file or directory: 'result/pexels-koolshooters-7322716/out_1/0002.bin'
Traceback (most recent call last):
File "/home/ai-tets/miniconda3/envs/lmp_1/lib/python3.10/site-packages/gradio/routes.py", line 437, in run_predict
output = await app.get_blocks().process_api(
File "/home/ai-tets/miniconda3/envs/lmp_1/lib/python3.10/site-packages/gradio/blocks.py", line 1355, in process_api
data = self.postprocess_data(fn_index, result["prediction"], state)
File "/home/ai-tets/miniconda3/envs/lmp_1/lib/python3.10/site-packages/gradio/blocks.py", line 1289, in postprocess_data
prediction_value = block.postprocess(prediction_value)
File "/home/ai-tets/miniconda3/envs/lmp_1/lib/python3.10/site-packages/gradio/components/video.py", line 249, in postprocess
processed_files = (self._format_video(y), None)
File "/home/ai-tets/miniconda3/envs/lmp_1/lib/python3.10/site-packages/gradio/components/video.py", line 313, in _format_video
video = self.make_temp_copy_if_needed(video)
File "/home/ai-tets/miniconda3/envs/lmp_1/lib/python3.10/site-packages/gradio/components/base.py", line 223, in make_temp_copy_if_needed
temp_dir = self.hash_file(file_path)
File "/home/ai-tets/miniconda3/envs/lmp_1/lib/python3.10/site-packages/gradio/components/base.py", line 187, in hash_file
with open(file_path, "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'result/pexels-koolshooters-7322716/blend.mp4'

Install failure

This looks amazing, looking forward to trying it!
I'm getting a few issues while installing:

1. No submodules

When I run python install.py, it tells me that it failed to install Ebsynth, but that's because the /deps/ebsynth folder is empty.

I tried running git clone on that branch, but that doesn't work, so I had to extract the ZIP file into the folder and run install.py again.

Same for ControlNet and gmflow: these folders were empty. I wonder if there's a way to automatically populate them during the install, or if it's just me having trouble.

  2. Triton missing

C:\Users\sanss\ReRender\Rerender_A_Video>python webUI.py A matching Triton is not available, some optimizations will not be enabled. Error caught was: No module named 'triton'

  3. Local URL doesn't work

Despite it telling me to go to http://0.0.0.0:7861, I get ERR_ADDRESS_INVALID in my browser if I try.
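Two notes on the report above, with an illustrative sketch. First, empty submodule folders usually mean the clone skipped them; `git clone --recursive` (or `git submodule update --init --recursive` in an existing checkout) should populate deps/ before install.py runs. A small sanity check (hypothetical helper, not part of install.py):

```python
import os

# The three git submodules the installer depends on (paths per the
# report above; adjust if the repo layout changes).
SUBMODULES = ('deps/ebsynth', 'deps/ControlNet', 'deps/gmflow')

def unpopulated_submodules(repo_root: str, names=SUBMODULES) -> list[str]:
    """Return submodule paths that are missing or empty under repo_root."""
    bad = []
    for rel in names:
        path = os.path.join(repo_root, rel)
        if not os.path.isdir(path) or not os.listdir(path):
            bad.append(rel)
    return bad
```

Second, http://0.0.0.0:7861 is a bind address, not a browsable URL; try http://127.0.0.1:7861 (or http://localhost:7861) instead.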

Cannot fetch 0.0.0.0:7860 even though installation completed

I finished all the installation steps and get the following:

Running on local URL:  http://0.0.0.0:7860 

To create a public link, set `share=True` in `launch()`.

but when I try to visit the URL, I get HTTP ERROR 502.

Can somebody help me with this issue?
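0.0.0.0 means "listen on all interfaces"; it is the bind address gradio prints, not an address to open in a browser. Visiting http://127.0.0.1:7860 (or http://localhost:7860) on the same machine usually works; if a 502 still appears there, something between the browser and gradio (a corporate proxy, for instance) is likely intercepting the request. A minimal illustration of the substitution:

```python
# Map the listen address gradio prints to a URL a local browser can
# actually open: 0.0.0.0 binds all interfaces, so browse via loopback.
def browsable_url(listen_host: str, port: int) -> str:
    host = '127.0.0.1' if listen_host == '0.0.0.0' else listen_host
    return f'http://{host}:{port}'
```

For example, `browsable_url('0.0.0.0', 7860)` gives `http://127.0.0.1:7860`.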
