ciarastrawberry / TemporalKit
An all-in-one solution for adding temporal stability to a Stable Diffusion render via an AUTOMATIC1111 extension.
License: GNU General Public License v3.0
Hello,
I have been trying to use TemporalKit in Google Colab with the fast_stable_diffusion_AUTOMATIC1111.ipynb notebook, but I get an error when I run it. I have installed FFmpeg 4.2.7 and checked that the version is up to date. However, when running the program, I get the following error:
OSError: MoviePy error: failed to read the first frame of video file /tmp/tmpcf8i8e_y.mp4. That might mean that the file is corrupted. That may also mean that you are using a deprecated version of FFMPEG. On Ubuntu/Debian for instance the version in the repos is deprecated. Please update to a recent version from the website.
I have verified that the video file path is correct and that the video file is not corrupted. I also haven't found any bugs in the code that might be causing problems rendering the video.
The failure is in crossfade_folder_of_folders:
current_dir = os.path.join(root_folder, dirs[0])
IndexError: list index out of range
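For anyone hitting the same IndexError: the crash means `dirs` is empty, i.e. the root folder handed to crossfade_folder_of_folders contains no subfolders at all. A minimal guard, sketched with a hypothetical helper name (not part of the extension):

```python
import os

def first_subdir(root_folder: str) -> str:
    # crossfade_folder_of_folders indexes dirs[0] without checking that the
    # folder actually contains subdirectories (e.g. the ebsynth "out_..."
    # folders). Guard first so the failure message says what is missing.
    dirs = sorted(
        d for d in os.listdir(root_folder)
        if os.path.isdir(os.path.join(root_folder, d))
    )
    if not dirs:
        raise FileNotFoundError(f"no subfolders found in {root_folder!r}")
    return os.path.join(root_folder, dirs[0])
```

If this raises, double-check that you pointed the recombine step at the folder that contains the EbSynth output subfolders rather than at one of the subfolders itself.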
Hi, I was able to use TemporalKit in the past, but something has since changed and I can't seem to figure out what the issue is. Sorry if this is simple; I'm a bit of a novice with things like this. This is the error that comes up when I try to run TemporalKit:
File "C:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\stable-diffusion-webui\extensions\TemporalKit\scripts\sd-TemporalKit-UI.py", line 117, in preprocess_video
image = General_SD.generate_squares_to_folder(video,fps=fps,batch_size=batch_size, resolution=resolution,size_size=per_side,max_frames=max_frames, output_folder=output_path,border=border_frames, ebsynth_mode=True,max_frames_to_save=max_frames)
File "C:\stable-diffusion-webui\extensions\TemporalKit\scripts\Berry_Method.py", line 202, in generate_squares_to_folder
os.makedirs(output_folder)
File "C:\Users\Dawson\AppData\Local\Programs\Python\Python310\lib\os.py", line 225, in makedirs
mkdir(name, mode)
FileNotFoundError: [WinError 3] The system cannot find the path specified: ''
I can see that it's a path issue, but I don't know what path it's trying to reference, nor how I would set it. I know I have FFmpeg installed, with the path set in my environment variables, but it still doesn't work. Any help would be greatly appreciated.
Thanks!
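The empty string in the error (`[WinError 3] ... : ''`) indicates that os.makedirs was called with an empty output_folder, which usually means the output path field in the UI was left blank. A hedged sketch of the kind of guard that would surface this earlier (helper name is hypothetical):

```python
import os

def ensure_output_folder(output_folder: str) -> str:
    # os.makedirs('') raises exactly the reported WinError 3; an empty value
    # usually means the output path field in the UI was left blank.
    if not output_folder:
        raise ValueError("output folder is empty; set a target path in the UI")
    os.makedirs(output_folder, exist_ok=True)  # idempotent on re-runs
    return output_folder
```

So the first thing to check is the "output path" box on the preprocessing tab, not the FFmpeg install.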
Hi. Sorry, I don't understand what this script does. Can you explain a bit? Is there a tutorial?
Thanks
Hi! Amazing and extremely convenient way of using EbSynth with SD for video processing!
My issue is the interpolation between the img2img output images and the keyframes for EbSynth: even though my output frames are huge and high-res, the keyframes apparently undergo a few resizing steps that hurt their quality. For example, my output image is 3860x5600 px and my target video resolution is 700x1066. The keyframes get chopped out of the output image and scaled to the final resolution, but they lose a lot of definition and become pixelated.
Images for reference
I'm trying out TemporalKit on a 1:47 video. Its resolution is 1280x720 and its fps is 24. I'm getting this error when trying to recombine:
Traceback (most recent call last):
File "C:\Users\Downloads\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 399, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\Downloads\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1299, in process_api
result = await self.call_function(
File "C:\Users\Downloads\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1022, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\Downloads\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\Downloads\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\Users\Downloads\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\Users\Downloads\StableDiffusion\stable-diffusion-webui\extensions\TemporalKit\scripts\sd-TemporalKit-UI.py", line 211, in recombine_ebsynth
output_video = sd_utility.crossfade_videos(video_paths=generated_videos,fps=fps,overlap_indexes= overlap_indicies,num_overlap_frames= border_frames,output_path=os.path.join(input_folder,"output.mp4"))
File "C:\Users\Downloads\StableDiffusion\stable-diffusion-webui\extensions\TemporalKit\scripts\berry_utility.py", line 605, in crossfade_videos
return pil_images_to_video(output_array, output_path, fps)
File "C:\Users\Downloads\StableDiffusion\stable-diffusion-webui\extensions\TemporalKit\scripts\berry_utility.py", line 506, in pil_images_to_video
np_images = [np.array(img) for img in pil_images]
File "C:\Users\Downloads\StableDiffusion\stable-diffusion-webui\extensions\TemporalKit\scripts\berry_utility.py", line 506, in <listcomp>
np_images = [np.array(img) for img in pil_images]
numpy.core._exceptions._ArrayMemoryError: Unable to allocate 5.33 MiB for an array with shape (1024, 1820, 3) and data type uint8
Am I doing something wrong or is this an issue?
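The traceback shows pil_images_to_video materialising every frame as a NumPy array in one list, so peak memory grows with clip length; here the allocator gave up on one more 5.33 MiB frame. One mitigation, sketched below with a hypothetical generator (not the extension's actual code), is to convert frames lazily so the consumer can write them incrementally:

```python
import numpy as np

def iter_np_frames(images):
    # Instead of np_images = [np.array(img) for img in pil_images], which
    # holds every frame in RAM at once, yield one converted frame at a time.
    # np.asarray accepts PIL images as well as plain array-likes.
    for img in images:
        yield np.asarray(img)
```

Whether this helps depends on the writer accepting an iterator; otherwise, closing other applications or processing the video in shorter segments are the practical workarounds.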
stable-diffusion-webui-master\extensions\TemporalKit\scripts\sd-TemporalKit-UI.py", line 122, in preprocess_video
processed = image[0]
IndexError: list index out of range
Hey there,
thanks a lot for this cool extension!
I watched your quick tutorial, but it's not super clear what happens after preprocessing: when you send the frames to img2img, how do you use ControlNet to transfer the style?
thanks @CiaraStrawberry
I keep getting this error on load, using Vladmandic's fork; it's also not working in Automatic1111. I install it, but no tab ever shows up.
Error loading script: Berry_Method.py
Traceback (most recent call last):
File "D:\SD\A1111 Web UI Autoinstaller\stable-diffusion-webui\modules\scripts.py", line 248, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\SD\A1111 Web UI Autoinstaller\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\SD\A1111 Web UI Autoinstaller\stable-diffusion-webui\extensions\TemporalKit\scripts\Berry_Method.py", line 7, in <module>
import scripts.stable_diffusion_processing as sdprocess
File "D:\SD\A1111 Web UI Autoinstaller\stable-diffusion-webui\extensions\TemporalKit\scripts\stable_diffusion_processing.py", line 15, in <module>
import scripts.optical_flow_raft as raft
File "D:\SD\A1111 Web UI Autoinstaller\stable-diffusion-webui\extensions\TemporalKit\scripts\optical_flow_raft.py", line 23, in <module>
model = raft_large(weights=Raft_Large_Weights.DEFAULT, progress=False).to(device)
File "D:\SD\A1111 Web UI Autoinstaller\stable-diffusion-webui\venv\lib\site-packages\torchvision\models\_utils.py", line 142, in wrapper
return fn(*args, **kwargs)
File "D:\SD\A1111 Web UI Autoinstaller\stable-diffusion-webui\venv\lib\site-packages\torchvision\models\_utils.py", line 228, in inner_wrapper
return builder(*args, **kwargs)
File "D:\SD\A1111 Web UI Autoinstaller\stable-diffusion-webui\venv\lib\site-packages\torchvision\models\optical_flow\raft.py", line 836, in raft_large
return _raft(
File "D:\SD\A1111 Web UI Autoinstaller\stable-diffusion-webui\venv\lib\site-packages\torchvision\models\optical_flow\raft.py", line 805, in _raft
model.load_state_dict(weights.get_state_dict(progress=progress))
File "D:\SD\A1111 Web UI Autoinstaller\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1624, in load_state_dict
raise TypeError("Expected state_dict to be dict-like, got {}.".format(type(state_dict)))
TypeError: Expected state_dict to be dict-like, got <class 'NoneType'>.
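That TypeError usually means torchvision's RAFT weight download failed (no network, a blocked connection, or an interrupted download), so get_state_dict() ended up with None. A rough way to check whether the checkpoint actually landed in torch hub's default cache; note the filename below is an assumption and may differ across torchvision versions, so confirm it against the files in your cache folder:

```python
import os

def raft_weights_cached(filename: str = "raft_large_C_T_SKHT_V2-ff5fadd5.pth") -> bool:
    # torchvision downloads pretrained RAFT weights into torch hub's cache
    # (~/.cache/torch/hub/checkpoints by default). If the download was
    # interrupted, the extension fails with the "Expected state_dict to be
    # dict-like" TypeError on the next load.
    cache = os.path.join(
        os.path.expanduser("~"), ".cache", "torch", "hub", "checkpoints"
    )
    return os.path.isfile(os.path.join(cache, filename))
```

If a partially downloaded .pth file is present, deleting it and restarting the webui forces a fresh download.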
Hi, I have successfully installed the latest version, and I want to know: what is the best method for generating a long video with EbSynth? Also, thanks for this amazing extension!
Bug report to keep track of this issue: I noticed that SD very slightly crops large images. I always knew it resized them a little to a different resolution on its own, but this resizing doesn't follow your resize settings in img2img; it always crops. Writing this down to keep me focused and remind me, because it needs a workaround ASAP.
Hey, wasn't sure where to reach you, so here goes. Is it possible to somehow set TemporalKit's keyframes per "split" when working with longer videos with over 20 keyframes? I'm using 2x2 grids, but TemporalKit always creates 21 keyframes, so there is one grid with only one image at the end of each split. The other grids are similar to each other, but the grid with only 1 keyframe at the end of each split looks clearly different, causing some unnecessary flickering issues.
Is this already possible, or is it something you would be willing to add in the future?
Hi! Have you worked on a standalone script for this that does not use Auto1111?
Tried both regular and Ebsynth method of generating frames. Seems to work up to the point where it's time to render the clip, then the error occurs.
running TemporalKit Version d07074c (Fri Apr 28 09:26:36 2023)
with Temporal-Warp, I hit run and it eventually gives me the error
with Ebsynth method I generate frames and key frames, but when I hit 'recombine ebsynth' with the following settings
I get the error
Note: the video is only 2 seconds long, so it's under the 20-keyframe EbSynth limit; it doesn't seem to produce output for short clips.
I've looked through it several times but couldn't find the installation method...
I made the first steps and stylized the input files in img2img, then brought them back to the EbSynth process, setting the path like "/workspace/temporalkitfiles/dans". For the first two videos I had no problem, but then the problems started. I sometimes worked around them by moving the folder to another path, but those are unreliable fixes, and today nothing works. I use runpod.io and am just burning credits trying to solve this problem :(
Everything appears to be working just fine up to the "recombine ebsynth" step in the process. I have frame folders from ebsynth, a working input video, keyframes all golden. I then run "recombine ebsynth" and it completes without errors, but the final crossfade video is blank.
Here is my command prompt info:
batch settings exists = True
reading path at C:\ffmpeg\input\TemporalKit\batch_settings.txt
[2, 7, 12, 17, 22, 27, 32, 37, 42, 47, 52, 57, 62, 67, 72, 77, 82, 87, 92, 97]
recombining directory out_00002 and out_00007, len 2
recombining directory out_00007 and out_00012, len 5
recombining directory out_00012 and out_00017, len 5
recombining directory out_00017 and out_00022, len 5
recombining directory out_00022 and out_00027, len 5
recombining directory out_00027 and out_00032, len 5
recombining directory out_00032 and out_00037, len 5
recombining directory out_00037 and out_00042, len 5
recombining directory out_00042 and out_00047, len 5
recombining directory out_00047 and out_00052, len 5
recombining directory out_00052 and out_00057, len 5
recombining directory out_00057 and out_00062, len 5
recombining directory out_00062 and out_00067, len 5
recombining directory out_00067 and out_00072, len 5
recombining directory out_00072 and out_00077, len 5
recombining directory out_00077 and out_00082, len 5
recombining directory out_00082 and out_00087, len 5
recombining directory out_00087 and out_00092, len 5
recombining directory out_00092 and out_00097, len 5
going from dir 97 to end at 8
outputting 100 images
[MoviePy] >>>> Building video C:\ffmpeg\input\TemporalKit\crossfade.mp4
[MoviePy] Writing video C:\ffmpeg\input\TemporalKit\crossfade.mp4
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 101/101 [00:02<00:00, 48.41it/s]
[MoviePy] Done.
[MoviePy] >>>> Video ready: C:\ffmpeg\input\TemporalKit\crossfade.mp4
What could be going wrong? TemporalKit doesn't seem to be generating frames properly. [It apparently only takes a single frame out of the entire 3 second video]
I've made a little video so you can see what's happening: https://www.youtube.com/watch?v=Odutcc8to5o
When I try different videos, the first run through TemporalKit kind of works but doesn't generate keyframes. When I try again, the same thing happens [it doesn't even generate frames, just multiple copies of the first frame].
EDIT: Managed to make it work somehow, but it still isn't generating frames properly, generating some strange results: https://youtu.be/u2Evi9ggUzI
EDIT2: Broke again, it's just generating copies of video's first frame.
UPDATE: If you're having problems with "faulty" results [like the ones in the first edit], it's probably the resolution. [I was at 1024 when I should have picked 1080.]
On the other hand, if TemporalKit is not generating frames properly, that's probably because you picked the wrong FPS value.
Enjoying the repo. I got it working in a Colab, so I'm not sure if that's where the problems are arising.
Number of batches: 4
generating square of height 3038 and width 1708
saved to /content/gdrive/MyDrive/Video/batch 1/input/input22.png
Number of batches: 4
generating square of height 3038 and width 1708
saved to /content/gdrive/MyDrive/Video/batch 1/input/input23.png
Number of batches: 4
Otherwise, having fun experimenting with this thing. Thanks a ton!
TemporalKit is awesome, but I have a bug when I generate the final video. Here is an example: https://www.veed.io/view/9690d304-7b0c-4a31-bf7a-aca114b8dc3c?panel=share
Hi Ciara,
I installed your extension, but when I try to run it I get a lot of errors! Any idea what they are about and how to solve them? Thanks!
reading video file... C:\Users\Ryzen_Reaper\AppData\Local\Temp\879953c216bc574e66d9db11f04de5ccd3c2b905\ORIGINAL SOURCE.mp4████████████████████████████████████| 20/20 [00:02<00:00, 8.69it/s]
Traceback (most recent call last):
File "F:\SD\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "F:\SD\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1075, in process_api
result = await self.call_function(
File "F:\SD\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 884, in call_function
prediction = await anyio.to_thread.run_sync(
File "F:\SD\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "F:\SD\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "F:\SD\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "F:\SD\stable-diffusion-webui\extensions\TemporalKit\scripts\sd-TemporalKit-UI.py", line 65, in preprocess_video
image = berry.generate_square_from_video(video,fps=fps,batch_size=batch_size, resolution=resolution,size_size=per_side )
File "F:\SD\stable-diffusion-webui\extensions\TemporalKit\scripts\Berry_Method.py", line 168, in generate_square_from_video
frames = extract_frames_movpie(video_data, fps, frames_limit)
File "F:\SD\stable-diffusion-webui\extensions\TemporalKit\scripts\Berry_Method.py", line 581, in extract_frames_movpie
video_info = get_video_info(video_path)
File "F:\SD\stable-diffusion-webui\extensions\TemporalKit\scripts\Berry_Method.py", line 568, in get_video_info
result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
File "C:\Users\Ryzen_Reaper\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 501, in run
with Popen(*popenargs, **kwargs) as process:
File "C:\Users\Ryzen_Reaper\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 969, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Users\Ryzen_Reaper\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 1438, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] O sistema não conseguiu localizar o ficheiro especificado (Portuguese: "The system cannot find the file specified")
When i pressed read last settings, i got errors everywhere on the screen and this:
batch settings exists = True
reading path at G:\AI\stable-diffusion-webui\outputs\temporalkit\batch_settings.txt
Traceback (most recent call last):
File "G:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "G:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1075, in process_api
result = await self.call_function(
File "G:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 884, in call_function
prediction = await anyio.to_thread.run_sync(
File "G:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "G:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "G:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "G:\AI\stable-diffusion-webui\extensions\TemporalKit\scripts\sd-TemporalKit-UI.py", line 341, in update_settings_from_file
fps = int(f.readline().strip())
ValueError: invalid literal for int() with base 10: '30.0'
Does anyone know what's going on here? Thanks in advance!
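The traceback says it directly: batch_settings.txt now stores the fps as a float string ("30.0"), while the reader still calls int() on it, and int("30.0") raises ValueError. A tolerant parse, as a sketch (helper name is hypothetical):

```python
def read_fps(raw: str) -> int:
    # batch_settings.txt may contain "30" or "30.0" depending on which
    # version wrote it; int("30.0") raises ValueError, so go through float.
    return int(float(raw.strip()))
```

As a manual workaround, editing the fps line in batch_settings.txt from "30.0" to "30" should also let "read last settings" succeed.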
Hi, in EbSynth mode, even though I have images in the output folder, the only thing that happens is that "frames" are created from the video; the keyframes are not split into individual frames.
Interpolating extra frames
Traceback (most recent call last):
File "C:\Users\Holly\Documents\StableDifusion\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 337, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\Holly\Documents\StableDifusion\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1015, in process_api
result = await self.call_function(
File "C:\Users\Holly\Documents\StableDifusion\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 833, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\Holly\Documents\StableDifusion\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\Holly\Documents\StableDifusion\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\Users\Holly\Documents\StableDifusion\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\Users\Holly\Documents\StableDifusion\stable-diffusion-webui\extensions\TemporalKit\scripts\sd-TemporalKit-UI.py", line 118, in preprocess_video
image = General_SD.generate_square_from_video(video,fps=fps,batch_size=batch_size, resolution=resolution,size_size=per_side )
File "C:\Users\Holly\Documents\StableDifusion\stable-diffusion-webui\extensions\TemporalKit\scripts\Berry_Method.py", line 170, in generate_square_from_video
frames = utilityb.extract_frames_movpie(video_data, fps, frames_limit)
File "C:\Users\Holly\Documents\StableDifusion\stable-diffusion-webui\extensions\TemporalKit\scripts\berry_utility.py", line 633, in extract_frames_movpie
video_info = get_video_info(video_path)
File "C:\Users\Holly\Documents\StableDifusion\stable-diffusion-webui\extensions\TemporalKit\scripts\berry_utility.py", line 622, in get_video_info
result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
File "C:\Users\Holly\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 501, in run
with Popen(*popenargs, **kwargs) as process:
File "C:\Users\Holly\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 969, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Users\Holly\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 1438, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] The system cannot find the file specified
I get this error and have tried the obvious solutions, but I can't debug the issue. Do you have any suggestions?
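[WinError 2] at the subprocess.run(cmd, ...) call in get_video_info almost always means the ffprobe/ffmpeg binary isn't on the PATH seen by the webui process (note that changing Environment Variables only takes effect after restarting the terminal or the webui launcher). A pre-flight check, sketched with a hypothetical helper name:

```python
import shutil

def require_ffmpeg() -> str:
    # subprocess raises the opaque [WinError 2] when the binary is not on
    # PATH; checking up front gives an actionable message instead.
    path = shutil.which("ffmpeg")
    if path is None:
        raise FileNotFoundError(
            "ffmpeg not found on PATH; install it or add its folder to PATH, "
            "then restart the webui so the new PATH is picked up"
        )
    return path
```

The same check with "ffprobe" is worth running too, since get_video_info invokes ffprobe rather than ffmpeg itself.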
Hi CiaraStrawberry! thanks for this amazing extension! I loved it.
Not an issue, but one question and a suggestion.
Question: is the "out_XXX" folder supposed to go inside "result"? If not, may I suggest that? I'm not sure what the result folder is for, since it is empty.
suggestion:
I keep re-running the "prepare ebsynth" process because I update the input looks, but the frames keep being regenerated even when I don't need them to be. What do you think about adding a toggle in the UI to turn frame generation on or off? Thanks!
I've installed the extension and seem to have confirmed that everything looks good from all the normal avenues. Confirmed its installed in extensions tab, checked for updates, updated ffmpeg, updated automatic1111, etc. But I still get this error in the terminal...
Temporal-Kit/scripts/Ebsynth_Processing.py", line 10, in <module>
import extensions.TemporalKit.scripts.berry_utility
ModuleNotFoundError: No module named 'extensions.TemporalKit'
And I don't have any extensions.TemporalKit or the like in my modules folder either (checked manually), and a global search doesn't find anything. The tab never shows up, so despite everything seeming to check out, it won't load into the UI.
(Macbook Pro with M1, OS Ventura)
Any ideas why? Thanks
Also tried:
pip install typing-extensions
This was listed as a fix for a few different programs with the same sort of error so I'm leaving it here in case it helps anyone.
I saw there was a commit involving fps = int(fps)
I'm getting
[MoviePy] >>>> Building video /home/kaos/Documents/AI/alfred/preproces/blended.mp4
[MoviePy] Writing video /home/kaos/Documents/AI/alfred/preproces/blended.mp4
Traceback (most recent call last):
File "/home/kaos/.pyenv/versions/3.10.10/lib/python3.10/site-packages/gradio/routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "/home/kaos/.pyenv/versions/3.10.10/lib/python3.10/site-packages/gradio/blocks.py", line 1075, in process_api
result = await self.call_function(
File "/home/kaos/.pyenv/versions/3.10.10/lib/python3.10/site-packages/gradio/blocks.py", line 884, in call_function
prediction = await anyio.to_thread.run_sync(
File "/home/kaos/.pyenv/versions/3.10.10/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/home/kaos/.pyenv/versions/3.10.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/home/kaos/.pyenv/versions/3.10.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
File "/home/kaos/stable-diffusion-webui/extensions/TemporalKit/scripts/sd-TemporalKit-UI.py", line 137, in apply_image_to_vide_batch
return General_SD.process_video_batch(video_path_old=video,fps=fps,per_side=per_side,batch_size=batch_size,fillindenoise=0,edgedenoise=0,_smol_resolution=output_resolution,square_textures=images,max_frames=max_frames,output_folder=input_folder,border=border_frames)
File "/home/kaos/stable-diffusion-webui/extensions/TemporalKit/scripts/Berry_Method.py", line 319, in process_video_batch
generated_vid = utilityb.pil_images_to_video(combined,save_loc, fps)
File "/home/kaos/stable-diffusion-webui/extensions/TemporalKit/scripts/berry_utility.py", line 512, in pil_images_to_video
clip.write_videofile(output_file,fps,codec='libx264')
File "/home/kaos/.local/lib/python3.10/site-packages/decorator.py", line 232, in fun
return caller(func, *(extras + args), **kw)
File "/home/kaos/.pyenv/versions/3.10.10/lib/python3.10/site-packages/moviepy/decorators.py", line 54, in requires_duration
return f(clip, *a, **k)
File "/home/kaos/.local/lib/python3.10/site-packages/decorator.py", line 232, in fun
return caller(func, *(extras + args), **kw)
File "/home/kaos/.pyenv/versions/3.10.10/lib/python3.10/site-packages/moviepy/decorators.py", line 137, in use_clip_fps_by_default
return f(clip, *new_a, **new_kw)
File "/home/kaos/.local/lib/python3.10/site-packages/decorator.py", line 232, in fun
return caller(func, *(extras + args), **kw)
File "/home/kaos/.pyenv/versions/3.10.10/lib/python3.10/site-packages/moviepy/decorators.py", line 22, in convert_masks_to_RGB
return f(clip, *a, **k)
File "/home/kaos/.pyenv/versions/3.10.10/lib/python3.10/site-packages/moviepy/video/VideoClip.py", line 342, in write_videofile
ffmpeg_write_video(self, filename, fps, codec,
File "/home/kaos/.pyenv/versions/3.10.10/lib/python3.10/site-packages/moviepy/video/io/ffmpeg_writer.py", line 201, in ffmpeg_write_video
writer = FFMPEG_VideoWriter(filename, clip.size, fps, codec = codec,
File "/home/kaos/.pyenv/versions/3.10.10/lib/python3.10/site-packages/moviepy/video/io/ffmpeg_writer.py", line 86, in __init__
'-r', '%.02f' % fps,
TypeError: must be real number, not NoneType
But I'm not sure if it is related?
I am processing a video with 29.97 FPS 1080x1920 (portrait mode)
When I press run, it generates the right number of keyframes, but they are all identical.
Any idea what the issue could be?
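On the TypeError above: the final frame ('-r', '%.02f' % fps) shows fps reached moviepy's FFMPEG_VideoWriter as None, which would fit the fps = int(fps) commit mentioned earlier if the value was never set upstream. A defensive sketch (the helper and its fallback value are assumptions, not the extension's code):

```python
def resolve_fps(fps, fallback=30.0):
    # moviepy's FFMPEG_VideoWriter formats fps with '%.02f', so a None fps
    # crashes with "must be real number, not NoneType". The fallback here is
    # an arbitrary assumption; prefer the source clip's fps when known.
    return float(fps) if fps is not None else float(fallback)
```

For a 29.97 fps source like the one above, passing the probed source fps rather than a rounded integer also avoids the duplicated/identical keyframe symptom.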
Hello,
so I'm just playing around trying the various modes, but I came across this issue. My base vid is 720p, so I made the plates at 2560x1440 so that a single key would be 720p again.
But something seems to go wrong when generating the keys (prepare ebsynth) for each EbSynth folder. The keys have the desired 720p resolution, but the quality is really bad; they're very pixelated:
This might be a nice effect if I'm going for pixelation on purpose, but that is not the case at the moment :D
First of all thanks a lot for your work, I haven't yet managed to make it work but it seems promising.
I have a lot to say about the UI though. This is just my opinion, but as a UX builder, the whole process is very confusing (even after 3 video tutorials), and you could probably get away with keeping most of the settings hidden and set automatically depending on the video. The less the users can fiddle around, the fewer errors there will be, and the easier it will be to use, update, and maintain.
Also, we usually say that you should build your UI around what the end user wants to do, in the most basic natural language possible, rather than directly exposing full control of every parameter, especially using technical terms. You should only have to watch tutorials to finetune the settings, not to make the whole thing work. A good rule of thumb: if you can't explain a setting clearly in a 15-word hint without requiring math skills, it should be set automatically (based on the input video or a preset dropdown) and tucked away in some advanced settings.
Some details:
IndexError: list index out of range (although my keys folder is filled with images). Again, this is just my opinion, so it's not worth much, but I've been running these tools for a while and have built some UXs around them. I'm not too dumb, yet 75% of the TemporalKit UX makes me think I'm stupid, lmao.
Interpolating extra frames with max frames 450000 and interpolating = False
Error: [WinError 2] The system cannot find the file specified. Please ensure that ffmpeg is installed and available in your system's PATH.
Traceback (most recent call last):
File "D:\Stable_diffusion SD 1.5\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 399, in run_predict
output = await app.get_blocks().process_api(
File "D:\Stable_diffusion SD 1.5\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1299, in process_api
result = await self.call_function(
File "D:\Stable_diffusion SD 1.5\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1022, in call_function
prediction = await anyio.to_thread.run_sync(
File "D:\Stable_diffusion SD 1.5\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\Stable_diffusion SD 1.5\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "D:\Stable_diffusion SD 1.5\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "D:\Stable_diffusion SD 1.5\stable-diffusion-webui\extensions\TemporalKit\scripts\sd-TemporalKit-UI.py", line 93, in preprocess_video
existing_frames = [sd_utility.extract_frames_movpie(data, fps,max_frames=max_total_frames,perform_interpolation=False)]
File "D:\Stable_diffusion SD 1.5\stable-diffusion-webui\extensions\TemporalKit\scripts\berry_utility.py", line 641, in extract_frames_movpie
video_stream = next((stream for stream in video_info['streams'] if stream['codec_type'] == 'video'), None)
TypeError: 'NoneType' object is not subscriptable
I had to modify the source to get the correct framerate and generate the input images correctly.
The video was recorded with Bandicam.
ffprobe result:
{
"streams": [
{
"index": 0,
"codec_name": "h264",
"codec_long_name": "H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10",
"profile": "High",
"codec_type": "video",
"codec_tag_string": "avc1",
"codec_tag": "0x31637661",
"width": 1024,
"height": 1024,
"coded_width": 1024,
"coded_height": 1024,
"closed_captions": 0,
"film_grain": 0,
"has_b_frames": 0,
"sample_aspect_ratio": "1:1",
"display_aspect_ratio": "1:1",
"pix_fmt": "yuv420p",
"level": 40,
"chroma_location": "left",
"field_order": "progressive",
"refs": 1,
"is_avc": "true",
"nal_length_size": "4",
"id": "0x1",
"r_frame_rate": "60/1",
"avg_frame_rate": "11640000/785999",
"time_base": "1/60000",
"start_pts": 0,
"start_time": "0.000000",
"duration_ts": 785999,
"duration": "13.099983",
"bit_rate": "1849552",
"bits_per_raw_sample": "8",
"nb_frames": "194",
"extradata_size": 36,
"disposition": {
"default": 1,
"dub": 0,
"original": 0,
"comment": 0,
"lyrics": 0,
"karaoke": 0,
"forced": 0,
"hearing_impaired": 0,
"visual_impaired": 0,
"clean_effects": 0,
"attached_pic": 0,
"timed_thumbnails": 0,
"captions": 0,
"descriptions": 0,
"metadata": 0,
"dependent": 0,
"still_image": 0
},
"tags": {
"creation_time": "2023-04-29T07:35:17.000000Z",
"language": "eng",
"handler_name": "VideoHandler",
"vendor_id": "[0][0][0][0]"
}
}
]
}
In berry_utility.py, in the function extract_frames_movpie, I replaced the line
original_fps = float(video_stream['avg_frame_rate'].split('/')[0])
with
original_fps = float(video_stream['avg_frame_rate'].split('/')[0])/float(video_stream['avg_frame_rate'].split('/')[1])
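The underlying problem: ffprobe reports avg_frame_rate as a rational string ("11640000/785999" here, roughly 14.81 fps, or "60/1"), and the original code took only the numerator. The patch above divides by the denominator; a slightly more defensive variant using the standard library, as a sketch rather than the shipped code:

```python
from fractions import Fraction

def parse_avg_frame_rate(rate: str) -> float:
    # ffprobe's avg_frame_rate is a rational like "11640000/785999"
    # (~14.81 fps for this Bandicam clip) or "60/1". Taking only the
    # numerator, as the original line did, wildly overestimates the fps.
    num, _, den = rate.partition("/")
    if den in ("", "0"):  # plain number, or the "0/0" ffprobe can emit
        return float(num)
    return float(Fraction(int(num), int(den)))
```

The "0/0" guard matters because ffprobe emits that value for some streams, and a bare division would raise ZeroDivisionError.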
I'm having issues when running the script. Initially I saw these errors and wasn't able to get it to work until I restarted about 3 times; then I was able to run the script once and it did generate keyframes. When I adjusted the keyframes to "-1" and pressed run again, the errors came back. When these errors occur, I need to shut down the web UI at that point (it hangs).
I have git pull enabled, so I'm running the latest build of the A1111 web UI.
Traceback (most recent call last):
File "E:\StableDiffusion\venv\lib\site-packages\gradio\routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "E:\StableDiffusion\venv\lib\site-packages\gradio\blocks.py", line 1075, in process_api
result = await self.call_function(
File "E:\StableDiffusion\venv\lib\site-packages\gradio\blocks.py", line 884, in call_function
prediction = await anyio.to_thread.run_sync(
File "E:\StableDiffusion\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "E:\StableDiffusion\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "E:\StableDiffusion\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "E:\StableDiffusion\extensions\TemporalKit\scripts\sd-TemporalKit-UI.py", line 115, in preprocess_video
return image[0]
IndexError: list index out of range
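The IndexError means the generation step returned an empty list, so image[0] has nothing to index. A hypothetical guard (first_or_error is my name, not the extension's) would at least surface a readable message instead of hanging the UI:

```python
def first_or_error(images, what="keyframe squares"):
    """Return the first generated image, or fail loudly if generation produced none."""
    if not images:
        raise RuntimeError(
            f"generation produced no {what}; "
            "check the video path and the extraction step that ran before this"
        )
    return images[0]
```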
Hello dev, thanks a lot for this awesome tool.
Is ffmpeg used?
Is it possible to specify the bitrate? The image quality is poor.
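On the bitrate question: the tracebacks elsewhere in this thread show the extension writes video through MoviePy/FFmpeg, and FFmpeg's -b:v flag controls the video bitrate (MoviePy's write_videofile also accepts a bitrate argument). A small sketch for re-encoding an output at a higher bitrate; the paths and default value here are placeholders:

```python
def ffmpeg_encode_cmd(src, dst, bitrate="8000k", codec="libx264"):
    """Build an ffmpeg command that re-encodes src at the given target video bitrate."""
    return ["ffmpeg", "-y", "-i", src, "-c:v", codec, "-b:v", bitrate, dst]

# e.g. subprocess.run(ffmpeg_encode_cmd("output.mp4", "output_hq.mp4"), check=True)
```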
Thank you for such a great tool.
When running Temporal-Warp I got this error
I'm using moviepy==2.0.0.dev2 because the latest stable version gave me error #24, so this might be related to the issue.
Traceback (most recent call last):
File "/home/coder/automatic/venv/lib/python3.10/site-packages/gradio/routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "/home/coder/automatic/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1075, in process_api
result = await self.call_function(
File "/home/coder/automatic/venv/lib/python3.10/site-packages/gradio/blocks.py", line 884, in call_function
prediction = await anyio.to_thread.run_sync(
File "/home/coder/automatic/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/home/coder/automatic/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/home/coder/automatic/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
File "/home/coder/automatic/extensions/TemporalKit/scripts/sd-TemporalKit-UI.py", line 128, in apply_image_to_video
return General_SD.process_video_single(video_path=video,fps=fps,per_side=per_side,batch_size=batch_size,fillindenoise=0,edgedenoise=0,_smol_resolution=output_resolution,square_texture=image)
File "/home/coder/automatic/extensions/TemporalKit/scripts/Berry_Method.py", line 340, in process_video_single
generated_video = utilityb.pil_images_to_video(processed_frames, output_video_path, fps)
File "/home/coder/automatic/extensions/TemporalKit/scripts/berry_utility.py", line 509, in pil_images_to_video
clip = ImageSequenceClip(np_images, fps=fps)
File "/home/coder/automatic/venv/lib/python3.10/site-packages/moviepy/video/io/ImageSequenceClip.py", line 95, in __init__
raise Exception(
Exception: Moviepy: ImageSequenceClip requires all images to be the same size
Edit: I tried with the stable moviepy version and got the same error.
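ImageSequenceClip raises this whenever the frames differ in size even by one pixel, so the moviepy version is probably not the culprit. One hedged workaround is to conform every frame to the first frame's size before building the clip (conform_frames is my helper, not part of TemporalKit):

```python
from PIL import Image

def conform_frames(frames, size=None):
    """Resize every PIL frame to one common size so ImageSequenceClip accepts them."""
    size = size or frames[0].size
    return [f if f.size == size else f.resize(size, Image.LANCZOS) for f in frames]
```

In pil_images_to_video the conversion would go just before the frames are turned into numpy arrays for ImageSequenceClip.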
import tensorflow as tf
TemporalKit/scripts/optical_flow_raft.py, lines 186 to 201 in 3dbcb0d:
this function, raft_flow_to_apply_v2, is not called anywhere. I am confused 😵
First of all, thanks for your contributions.
When I set per side = 3 and 5 frames per keyframe, I cannot get a good result, because I cannot use a very high resolution without the GPU running out of memory. When I set per side = 2 and 3 frames per keyframe, I get a much better result. If we set per side = 1 and 1 frame per keyframe, it reduces to the algorithms that draw each frame with ControlNet. Is my understanding correct? Either way, the results you show are much better than those algorithms, which cannot control flicker the way you do.
Hi!
I tried the installation through the a1111 extensions tab. The script does show in the scripts section, but there is nothing in it; nothing happens if I click Save, and if I click Move it takes me to a random tab depending on what's open.
I also notice this error in the cmd window when I click Save:
RuntimeError: Event loop is closed
Startup time: 6.7s (load scripts: 1.7s, create ui: 4.3s, gradio launch: 0.1s, scripts app_started_callback: 0.5s).
Traceback (most recent call last):
File "E:\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "E:\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1075, in process_api
result = await self.call_function(
File "E:\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 884, in call_function
prediction = await anyio.to_thread.run_sync(
File "E:\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "E:\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "E:\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "E:\Automatic1111\stable-diffusion-webui\extensions\TemporalKit\scripts\TemporalKitImg2ImgTab.py", line 31, in on_button_click
save_image(lastimage, 'last.png', extension_folder)
File "E:\Automatic1111\stable-diffusion-webui\extensions\TemporalKit\scripts\TemporalKitImg2ImgTab.py", line 22, in save_image
image.save(file_path)
AttributeError: 'NoneType' object has no attribute 'save'
Hi, I have downloaded the branch scenes_and_multi_runs and I get an error message after installing it.
Here is the bug (I have installed the scenedetect module):
To create a public link, set share=True in launch().
Startup time: 4.1s (load scripts: 0.6s, create ui: 3.0s, gradio launch: 0.4s).
Closing server running on port: 7860
Restarting UI...
Error loading script: Ebsynth_Processing.py
Traceback (most recent call last):
File "C:\AUTOMATIC1111\modules\scripts.py", line 256, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\AUTOMATIC1111\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\AUTOMATIC1111\extensions\TemporalKit\scripts\Ebsynth_Processing.py", line 10, in <module>
import extensions.TemporalKit.scripts.berry_utility
File "C:\AUTOMATIC1111\extensions\TemporalKit\scripts\berry_utility.py", line 20, in <module>
import scenedetect
ModuleNotFoundError: No module named 'scenedetect'
Error loading script: berry_utility.py
Traceback (most recent call last):
File "C:\AUTOMATIC1111\modules\scripts.py", line 256, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\AUTOMATIC1111\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\AUTOMATIC1111\extensions\TemporalKit\scripts\berry_utility.py", line 20, in <module>
import scenedetect
ModuleNotFoundError: No module named 'scenedetect'
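Because a failed import aborts script loading entirely, a defensive import in berry_utility.py would let the extension load and fail only when scene detection is actually used. This is a sketch of that pattern, not the branch's actual code:

```python
# Defensive import: keep the extension loadable when PySceneDetect is absent.
try:
    import scenedetect
except ImportError:
    scenedetect = None

def require_scenedetect():
    """Return the scenedetect module, or explain how to get it."""
    if scenedetect is None:
        raise ImportError(
            "PySceneDetect is missing; run `pip install scenedetect` inside the webui venv"
        )
    return scenedetect
```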
Running Stable Diffusion (5ab7f21) on an M2 Mac (Ventura 13.3.1).
Getting the following error when hitting the recombine button after successfully preparing ebsynth in the previous step:
saving at frame 172
saved to /Users/me/Documents/promo video/TemporalKit/keys/keys00172.png at size (1820, 1024)
[2, 7, 12, 17, 22, 27, 32, 37, 42, 47, 52, 57, 62, 67, 72, 77, 82, 87, 92, 97, 102, 107, 112, 117, 122, 127, 132, 137, 142, 147, 152, 157, 162, 167, 172]
Traceback (most recent call last):
File "/Users/me/Documents/stable-diffusion/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 399, in run_predict
output = await app.get_blocks().process_api(
File "/Users/me/Documents/stable-diffusion/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1299, in process_api
result = await self.call_function(
File "/Users/me/Documents/stable-diffusion/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1022, in call_function
prediction = await anyio.to_thread.run_sync(
File "/Users/me/Documents/stable-diffusion/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/Users/me/Documents/stable-diffusion/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/Users/me/Documents/stable-diffusion/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
File "/Users/me/Documents/stable-diffusion/stable-diffusion-webui/extensions/TemporalKit/scripts/sd-TemporalKit-UI.py", line 182, in recombine_ebsynth
return ebsynth.crossfade_folder_of_folders(input_folder,fps=fps,return_generated_video_path=True)
File "/Users/me/Documents/stable-diffusion/stable-diffusion-webui/extensions/TemporalKit/scripts/Ebsynth_Processing.py", line 134, in crossfade_folder_of_folders
current_dir = os.path.join(root_folder, dirs[0])
IndexError: list index out of range
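dirs[0] fails because crossfade_folder_of_folders found no subfolders under the output directory; usually that means EbSynth has not yet been run on the generated keys, or an earlier step wrote its output somewhere else. A guarded version of the lookup might look like this (the helper name is mine):

```python
import os

def list_run_dirs(root_folder):
    """Return the sorted EbSynth output subfolders, or fail with a clear message."""
    dirs = sorted(
        d for d in os.listdir(root_folder)
        if os.path.isdir(os.path.join(root_folder, d))
    )
    if not dirs:
        raise FileNotFoundError(
            f"no EbSynth output folders found under {root_folder!r}; "
            "run EbSynth on the generated keys before pressing recombine"
        )
    return dirs
```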
Maybe I'm missing something, but is it possible to control the height and width of the grid separately? Square is fine sometimes, but a 512x1024 grid could be useful for full-body videos, or 1024x512 for wide ones. Thanks
I had no trouble playing with the extension on my first use, and then I ran into this error message while trying to relaunch auto1111:
Error loading script: Berry_Method.py
Traceback (most recent call last):
File "D:\StableDiffusion\Auto1111\stable-diffusion-webui\modules\scripts.py", line 256, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\StableDiffusion\Auto1111\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\StableDiffusion\Auto1111\stable-diffusion-webui\extensions\TemporalKit\scripts\Berry_Method.py", line 8, in <module>
from moviepy.editor import *
File "D:\StableDiffusion\Auto1111\stable-diffusion-webui\venv\lib\site-packages\moviepy\editor.py", line 22, in <module>
from .video.io.VideoFileClip import VideoFileClip
File "D:\StableDiffusion\Auto1111\stable-diffusion-webui\venv\lib\site-packages\moviepy\video\io\VideoFileClip.py", line 3, in <module>
from moviepy.video.VideoClip import VideoClip
File "D:\StableDiffusion\Auto1111\stable-diffusion-webui\venv\lib\site-packages\moviepy\video\VideoClip.py", line 21, in <module>
from .io.ffmpeg_writer import ffmpeg_write_image, ffmpeg_write_video
File "D:\StableDiffusion\Auto1111\stable-diffusion-webui\venv\lib\site-packages\moviepy\video\io\ffmpeg_writer.py", line 11, in <module>
from moviepy.config import get_setting
File "D:\StableDiffusion\Auto1111\stable-diffusion-webui\venv\lib\site-packages\moviepy\config.py", line 34, in <module>
from imageio.plugins.ffmpeg import get_exe
File "D:\StableDiffusion\Auto1111\stable-diffusion-webui\venv\lib\site-packages\imageio\plugins\ffmpeg.py", line 143, in <module>
import imageio_ffmpeg
ModuleNotFoundError: No module named 'imageio_ffmpeg'
Error loading script: sd-TemporalKit-UI.py
Traceback (most recent call last):
File "D:\StableDiffusion\Auto1111\stable-diffusion-webui\modules\scripts.py", line 256, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\StableDiffusion\Auto1111\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\StableDiffusion\Auto1111\stable-diffusion-webui\extensions\TemporalKit\scripts\sd-TemporalKit-UI.py", line 40, in <module>
import scripts.Berry_Method as berry
File "D:\StableDiffusion\Auto1111\stable-diffusion-webui\extensions\TemporalKit\scripts\Berry_Method.py", line 8, in <module>
from moviepy.editor import *
File "D:\StableDiffusion\Auto1111\stable-diffusion-webui\venv\lib\site-packages\moviepy\editor.py", line 22, in <module>
from .video.io.VideoFileClip import VideoFileClip
File "D:\StableDiffusion\Auto1111\stable-diffusion-webui\venv\lib\site-packages\moviepy\video\io\VideoFileClip.py", line 3, in <module>
from moviepy.video.VideoClip import VideoClip
File "D:\StableDiffusion\Auto1111\stable-diffusion-webui\venv\lib\site-packages\moviepy\video\VideoClip.py", line 21, in <module>
from .io.ffmpeg_writer import ffmpeg_write_image, ffmpeg_write_video
File "D:\StableDiffusion\Auto1111\stable-diffusion-webui\venv\lib\site-packages\moviepy\video\io\ffmpeg_writer.py", line 11, in <module>
from moviepy.config import get_setting
File "D:\StableDiffusion\Auto1111\stable-diffusion-webui\venv\lib\site-packages\moviepy\config.py", line 34, in <module>
from imageio.plugins.ffmpeg import get_exe
File "D:\StableDiffusion\Auto1111\stable-diffusion-webui\venv\lib\site-packages\imageio\plugins\ffmpeg.py", line 143, in <module>
import imageio_ffmpeg
ModuleNotFoundError: No module named 'imageio_ffmpeg'
I was able to get it to run again with a fresh install of auto1111, then ran into the exact same issue when relaunching.
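moviepy pulls in imageio, which needs the separate imageio-ffmpeg wheel; something during the relaunch (often an auto-update of another component) evidently removes it from the venv. A hedged sketch that reinstalls missing modules into whichever interpreter webui is running under, so it targets the right venv automatically:

```python
import importlib.util
import subprocess
import sys

def ensure(packages):
    """Install any missing modules into the current interpreter's environment."""
    missing = [pip_name for module, pip_name in packages
               if importlib.util.find_spec(module) is None]
    if missing:
        subprocess.check_call([sys.executable, "-m", "pip", "install", *missing])

# ensure([("imageio_ffmpeg", "imageio-ffmpeg")])  # import name vs. PyPI name differ
```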
I have installed the requirements, but I get an error after I add the video and click Run:
reading video file... C:\Users\Alex\AppData\Local\Temp\b20b8b368a0dab613dda4605c6b78ea96b90de0c\ghostingtest.mp4
Traceback (most recent call last):
File "C:\AUTOMATIC1111\venv\lib\site-packages\gradio\routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "C:\AUTOMATIC1111\venv\lib\site-packages\gradio\blocks.py", line 1075, in process_api
result = await self.call_function(
File "C:\AUTOMATIC1111\venv\lib\site-packages\gradio\blocks.py", line 884, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\AUTOMATIC1111\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\AUTOMATIC1111\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\AUTOMATIC1111\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\AUTOMATIC1111\extensions\TemporalKit\scripts\sd-TemporalKit-UI.py", line 54, in preprocess_video
image = berry.generate_square_from_video(video,fps=fps,batch_size=batch_size, resolution=resolution,size_size=per_side )
File "C:\AUTOMATIC1111\extensions\TemporalKit\scripts\Berry_Method.py", line 128, in generate_square_from_video
frames = extract_frames_movpie(video_data, fps, frames_limit)
File "C:\AUTOMATIC1111\extensions\TemporalKit\scripts\Berry_Method.py", line 491, in extract_frames_movpie
video_info = get_video_info(video_path)
File "C:\AUTOMATIC1111\extensions\TemporalKit\scripts\Berry_Method.py", line 483, in get_video_info
result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 503, in run
with Popen(*popenargs, **kwargs) as process:
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 1440, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] Le fichier spécifié est introuvable ("The specified file cannot be found")
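WinError 2 here means Windows could not find the executable that subprocess.run was asked to start: get_video_info shells out to ffprobe, which is not on PATH. A quick check (require_ffprobe is an illustrative helper, not the extension's code) makes the failure self-explanatory:

```python
import shutil

def require_ffprobe():
    """Return the full path to ffprobe, or fail with an actionable message."""
    exe = shutil.which("ffprobe")
    if exe is None:
        raise FileNotFoundError(
            "ffprobe was not found on PATH; install FFmpeg, add its bin folder "
            "to PATH, then restart the webui"
        )
    return exe
```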
I have just done a fresh install into automatic1111.
I'm getting multiple errors on launch, all seemingly around model downloads from PyTorch.
Downloading: "https://download.pytorch.org/models/raft_large_C_T_SKHT_V2-ff5fadd5.pth" to C:\Users\Gabriel/.cache\torch\hub\checkpoints\raft_large_C_T_SKHT_V2-ff5fadd5.pth
Error loading script: Berry_Method.py
Traceback (most recent call last):
File "C:\Users\Gabriel\AppData\Local\Programs\Python\Python310\lib\urllib\request.py", line 1348, in do_open
[...]
File "C:\Users\Gabriel\AppData\Local\Programs\Python\Python310\lib\socket.py", line 955, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11001] getaddrinfo failed
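getaddrinfo failed is a DNS failure while torch.hub tries to fetch the RAFT weights. One workaround is to download the .pth on a machine with network access and drop it where torch.hub looks. This sketch only computes that default location, which matches the path in the log above; treat it as an assumption, since TORCH_HOME or torch.hub.set_dir can move it:

```python
import os

RAFT_URL = "https://download.pytorch.org/models/raft_large_C_T_SKHT_V2-ff5fadd5.pth"

def checkpoint_path(hub_dir=None):
    """Default location torch.hub checks before attempting a download."""
    hub_dir = hub_dir or os.path.join(os.path.expanduser("~"), ".cache", "torch", "hub")
    return os.path.join(hub_dir, "checkpoints", os.path.basename(RAFT_URL))
```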
Ciara, thank you very much for the excellent system you created and all the work done.
I have the following problem when using Temporal-Kit on RunPod; search engines turn up a million comments about it, but none of them gives the solution:
Many of the solutions talk about editing the ImageFile.py file inside
"/workspace/venv/lib/python3.10/site-packages/PIL/ImageFile.py"
but those modifications have not been effective for me, and when the pod restarts the modifications also disappear. It may be down to my poor knowledge of the interface and the language =(.
I am sure you geniuses and experts can give me the light I need.
Blessings to all.
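The error text is not quoted, but the usual reason people edit PIL's ImageFile.py is the "image file is truncated" error, and Pillow's supported fix is to set the flag at runtime rather than patch site-packages, so it survives pod restarts. If that is indeed the error here, adding these lines near the top of the extension (or a startup script) should be equivalent:

```python
from PIL import ImageFile

# Equivalent to the ImageFile.py edit people suggest, but done at runtime:
# let Pillow load images whose files are cut short instead of raising.
ImageFile.LOAD_TRUNCATED_IMAGES = True
```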
Hi 👋 thanks for the project ❤ I am trying with Colab but I got this error. I tried to fix it with main...camenduru:TemporalKit:dev but I broke it 😭
python 3.9.16
torch 2.0.0+cu118
numpy 1.23.3
opencv-python 4.7.0.72
reading video file... /tmp/afca6aee67473a05bb869e8ff86637b542acb258/Untitled.mp4
90
Number of batches: 9
90
Downloading: "https://download.pytorch.org/models/raft_large_C_T_SKHT_V2-ff5fadd5.pth" to /content/stable-diffusion-webui/extensions/SadTalker/checkpoints/hub/checkpoints/raft_large_C_T_SKHT_V2-ff5fadd5.pth
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/PIL/Image.py", line 3080, in fromarray
mode, rawmode = _fromarray_typemap[typekey]
KeyError: ((1, 1), '<i8')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/gradio/routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 1075, in process_api
result = await self.call_function(
File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 884, in call_function
prediction = await anyio.to_thread.run_sync(
File "/usr/local/lib/python3.9/dist-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/usr/local/lib/python3.9/dist-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.9/dist-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
File "/content/stable-diffusion-webui/extensions/TemporalKit/scripts/sd-TemporalKit-UI.py", line 64, in apply_image_to_video
return berry.process_video_single(video_path=video,fps=fps,per_side=per_side,batch_size=batch_size,fillindenoise=0,edgedenoise=0,_smol_resolution=output_resolution,square_texture=image)
File "/content/stable-diffusion-webui/extensions/TemporalKit/scripts/Berry_Method.py", line 269, in process_video_single
processed_frames = process_video(frames,fps,batch_size,fillindenoise,edgedenoise,_smol_resolution,square_texture)
File "/content/stable-diffusion-webui/extensions/TemporalKit/scripts/Berry_Method.py", line 377, in process_video
processed_batch,all_flow_after = sdprocess.batch_sd_run(encoded_batch, encoded_new_frame, frame_count, seed, False, fillindenoise, edgedenoise, _smol_resolution,True,encoded_new_frame,False)
File "/content/stable-diffusion-webui/extensions/TemporalKit/scripts/stable_diffusion_processing.py", line 268, in batch_sd_run
result,mask,flow = prepare_request(allpaths,i,output_images[i-1],smol_resolution,seed,last_mask,last_last_mask,fillindenoise,edgedenoise,target,diffuse,Forwards)
File "/content/stable-diffusion-webui/extensions/TemporalKit/scripts/stable_diffusion_processing.py", line 93, in prepare_request
warped_path,flow,unused_mask,whitepixels,flow_img = raft.apply_flow_based_on_images(allpaths[index - 1],allpaths[index],last_stylized,resolution,index,temp_folder)
File "/content/stable-diffusion-webui/extensions/TemporalKit/scripts/optical_flow_raft.py", line 134, in apply_flow_based_on_images
warped_image,unused_mask,white_pixels = apply_flow_to_image_with_unused_mask(provided_image,predicted_flow)
File "/content/stable-diffusion-webui/extensions/TemporalKit/scripts/optical_flow_raft.py", line 276, in apply_flow_to_image_with_unused_mask
mask = utilityb.create_hole_mask(flow)
File "/content/stable-diffusion-webui/extensions/TemporalKit/scripts/berry_utility.py", line 267, in create_hole_mask
toblur = Image.fromarray(expanded).convert('L')
File "/usr/local/lib/python3.9/dist-packages/PIL/Image.py", line 3083, in fromarray
raise TypeError(msg) from e
TypeError: Cannot handle this data type: (1, 1), <i8
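Image.fromarray has no mapping for int64 ('<i8') arrays, which is apparently what create_hole_mask's expanded mask is under this numpy version. Casting to uint8 first sidesteps the error; a sketch of the conversion (to_pil_gray is my name, not the extension's):

```python
import numpy as np
from PIL import Image

def to_pil_gray(mask):
    """Convert an arbitrary-dtype 2-D mask to an 8-bit grayscale PIL image."""
    arr = np.asarray(mask)
    if arr.dtype != np.uint8:
        arr = np.clip(arr, 0, 255).astype(np.uint8)  # fromarray cannot handle int64
    return Image.fromarray(arr).convert("L")
```

In berry_utility.create_hole_mask this would stand in for Image.fromarray(expanded).convert('L').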