Comments (12)
@kaitosea916 Your problem is different.
Your ebsynth didn't work, so there is no 0001.bin
Please refer to https://github.com/williamyang1991/Rerender_A_Video#issues
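A missing `0001.bin` can be checked for up front. As a minimal sketch (a hypothetical helper, not part of the repo), you can verify that ebsynth actually produced the per-frame `.bin` error files before running the blending step:

```python
import os

# Hypothetical pre-flight check: video_blend.py expects ebsynth to have
# written one error file per frame (0001.bin, 0002.bin, ...) into each
# out_* directory. If the ebsynth step itself failed, these never appear.
def missing_bins(out_dir, n_frames):
    expected = [os.path.join(out_dir, f"{i:04d}.bin") for i in range(1, n_frames + 1)]
    return [p for p in expected if not os.path.exists(p)]

# Any paths returned here point at frames ebsynth never processed.
```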
from fresco.
I went into the EBsynth folder and enabled read and execute permissions, and I set the log option to true as per another closed comment, but it didn't seem to fix it... maybe I did it wrong?
woo, it works, thx
from fresco.
assert h % num_splits == 0 and w % num_splits == 0
You can print h, num_splits, and w to see what is wrong. Maybe your video resolution is not correct.
It's best if both dimensions are divisible by 64.
For example, 512x512, 640x640, etc.
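The divisible-by-64 advice can be applied mechanically. A minimal sketch (a hypothetical helper, not part of FRESCO) that snaps each dimension down to the nearest multiple of 64:

```python
# Hypothetical helper: snap a dimension down to the nearest multiple of 64
# before running the pipeline, so downstream splitting never hits an
# odd-sized feature map.
def snap_down(x, base=64):
    return max(base, (x // base) * base)

print(snap_down(1920), snap_down(1080))  # 1920 1024
```

For a 1920x1080 source this yields 1920x1024, which both divide evenly by 64.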
from fresco.
> assert h % num_splits == 0 and w % num_splits == 0
> you can print h, num_splits, w to see what is wrong. Maybe because your video resolution is not correct. It's best divisible by 64. For example, 512x512, 640x640, etc
It's 1920 x 1080 res
from fresco.
> assert h % num_splits == 0 and w % num_splits == 0
> you can print h, num_splits, w to see what is wrong. Maybe because your video resolution is not correct. It's best divisible by 64. For example, 512x512, 640x640, etc
h: 135, num_splits: 2, w: 240, which seems to be fine
(diffusers) C:\Users\infer\OneDrive\Documents\Fresco\FRESCO>python video_blend.py ./output/alien/ --key keys --key_ind 0 11 23 33 49 60 72 82 93 106 --output ./output/alien/blend.mp4 --fps 24 --n_proc 4 -ps
Base directory: ./output/alien/
Key indices: [0, 11, 23, 33, 49, 60, 72, 82, 93, 106]
Key directory: keys
Number of sequences: 9
h: 135, num_splits: 2, w: 240
Process Process-3:
Traceback (most recent call last):
  File "C:\Users\infer\.conda\envs\diffusers\lib\multiprocessing\process.py", line 315, in _bootstrap
    self.run()
  File "C:\Users\infer\.conda\envs\diffusers\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\video_blend.py", line 115, in process_sequences
    process_one_sequence(i, video_sequence)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\video_blend.py", line 82, in process_one_sequence
    flow_calc.get_flow(i1, i2, flow_seq[j])
  File "C:\Users\infer\.conda\envs\diffusers\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "./src/ebsynth\flow\flow_utils.py", line 174, in get_flow
    results_dict = self.model(image1,
  File "C:\Users\infer\.conda\envs\diffusers\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\gmflow.py", line 133, in forward
    feature0, feature1 = feature_add_position(feature0, feature1, attn_splits, self.feature_channels)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\utils.py", line 72, in feature_add_position
    feature0_splits = split_feature(feature0, num_splits=attn_splits)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\utils.py", line 23, in split_feature
    assert h % num_splits == 0 and w % num_splits == 0
AssertionError
h: 135, num_splits: 2, w: 240
(the same printout and traceback repeat for Process-4, Process-1, and Process-2)
ebsynth: 2.9724221229553223
[ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_0\0000.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_0\0001.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_0\0002.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_0\0003.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_0\0004.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_0\0005.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_0\0006.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_0\0007.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_0\0008.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_0\0009.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_0\0010.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_11\0011.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_11\0001.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_11\0002.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_11\0003.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_11\0004.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_11\0005.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_11\0006.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_11\0007.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_11\0008.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_11\0009.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_11\0010.jpg'): can't open/read file: check file path/integrity
Traceback (most recent call last):
  File "video_blend.py", line 312, in <module>
    main(args)
  File "video_blend.py", line 272, in main
    process_seq(video_sequence, i, blend_histogram, blend_gradient)
  File "video_blend.py", line 204, in process_seq
    dist1s.append(load_error(bin_a, img_shape))
  File "video_blend.py", line 165, in load_error
    with open(bin_path, 'rb') as fp:
FileNotFoundError: [Errno 2] No such file or directory: './output/alien/out_0\0001.bin'
(diffusers) C:\Users\infer\OneDrive\Documents\Fresco\FRESCO>
from fresco.
h: 135, num_splits: 2, w: 240 is not fine...
h % num_splits == 1, which triggers the AssertionError in assert h % num_splits == 0 and w % num_splits == 0
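To see concretely why 1920x1080 trips this assert: judging by the logged values (h: 135 = 1080/8, w: 240 = 1920/8), GMFlow runs attention on a 1/8-scale feature map, which must split evenly into num_splits parts. A minimal sketch, with the downsample factor inferred from the log:

```python
# h: 135 and w: 240 in the log correspond to 1080/8 and 1920/8, i.e. the
# 1/8-scale feature map that GMFlow splits into num_splits windows.
def feature_ok(height, width, downsample=8, num_splits=2):
    h, w = height // downsample, width // downsample
    return h % num_splits == 0 and w % num_splits == 0

print(feature_ok(1080, 1920))  # False: 135 % 2 == 1
print(feature_ok(1024, 1920))  # True:  128 and 240 both split evenly
```

This also explains why 1280x704 runs fine further down the thread: 704/8 = 88 and 1280/8 = 160, both even.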
from fresco.
> h: 135, num_splits: 2, w: 240 is not fine...
> h % num_splits == 1 which will trigger AssertionError for
> assert h % num_splits == 0 and w % num_splits == 0

Ah haha, I'm admittedly using Claude to troubleshoot as I have no clue what that means 😂
I'll change my res to 1920 x 1024. Both numbers are divisible by 64.
from fresco.
> h: 135, num_splits: 2, w: 240 is not fine...
> h % num_splits == 1 which will trigger AssertionError for
> assert h % num_splits == 0 and w % num_splits == 0

1920 x 1024 worked but blue-screened me (4090).
1280 x 704 is working, but it's been over an hour and it's still running at literal max GPU usage (I can't even open a File Explorer window, it's that stressed).
Why is it so slow? I do this in regular EB in less than 10 mins, but I wanted to try your PS command.
from fresco.
You should ask ebsynth's authors why the released ebsynth code is slower than the ebsynth software.
I just use that code. I'm not responsible for optimizing it to match the performance of the mature, black-box software.
from fresco.
I have a similar error.
The input video is the boxer from the example.
The resolution is set to 256 because of a low-spec GPU.
Please let me know if you find the cause and how to resolve it.
python video_blend.py output/boxer-punching-towards-camera --key keys --key_ind 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 --output output/boxer-punching-towards-camera/blend.mp4 --fps 10 --n_proc 4 -ps
/root/anaconda3/envs/diffusers/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
(the same warning repeats for each of the four worker processes)
ebsynth: 35.95934724807739
[ WARN:[email protected]] global loadsave.cpp:248 findDecoder imread_('output/boxer-punching-towards-camera/out_0/0001.jpg'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:248 findDecoder imread_('output/boxer-punching-towards-camera/out_2/0001.jpg'): can't open/read file: check file path/integrity
Traceback (most recent call last):
  File "video_blend.py", line 308, in <module>
    main(args)
  File "video_blend.py", line 268, in main
    process_seq(video_sequence, i, blend_histogram, blend_gradient)
  File "video_blend.py", line 200, in process_seq
    dist1s.append(load_error(bin_a, img_shape))
  File "video_blend.py", line 161, in load_error
    with open(bin_path, 'rb') as fp:
FileNotFoundError: [Errno 2] No such file or directory: 'output/boxer-punching-towards-camera/out_0/0001.bin'
Traceback (most recent call last):
  File "/root/anaconda3/envs/diffusers/lib/python3.8/site-packages/gradio/queueing.py", line 388, in call_prediction
    output = await route_utils.call_process_api(
  File "/root/anaconda3/envs/diffusers/lib/python3.8/site-packages/gradio/route_utils.py", line 219, in call_process_api
    output = await app.get_blocks().process_api(
  File "/root/anaconda3/envs/diffusers/lib/python3.8/site-packages/gradio/blocks.py", line 1440, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "/root/anaconda3/envs/diffusers/lib/python3.8/site-packages/gradio/blocks.py", line 1341, in postprocess_data
    prediction_value = block.postprocess(prediction_value)
  File "/root/anaconda3/envs/diffusers/lib/python3.8/site-packages/gradio/components/video.py", line 281, in postprocess
    processed_files = (self._format_video(y), None)
  File "/root/anaconda3/envs/diffusers/lib/python3.8/site-packages/gradio/components/video.py", line 355, in _format_video
    video = self.make_temp_copy_if_needed(video)
  File "/root/anaconda3/envs/diffusers/lib/python3.8/site-packages/gradio/components/base.py", line 226, in make_temp_copy_if_needed
    temp_dir = self.hash_file(file_path)
  File "/root/anaconda3/envs/diffusers/lib/python3.8/site-packages/gradio/components/base.py", line 190, in hash_file
    with open(file_path, "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'output/boxer-punching-towards-camera/blend.mp4'
from fresco.
@williamyang1991 Thank you for your quick response.
As per the linked issue, I gave execute permission to deps/ebsynth/bin/ebsynth and it worked.
It took 7 minutes to run, and the output video was a bit inaccurate, probably because of the low-spec GPU...
I look forward to your future work.
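For reference, the permission fix can also be applied from Python. A minimal sketch, equivalent to `chmod +x` (the binary path follows the comment above and may differ in your checkout):

```python
import os
import stat

# Grant execute permission to the bundled ebsynth binary (path per the
# comment above; adjust to your checkout). Equivalent to `chmod +x`.
def make_executable(path):
    mode = os.stat(path).st_mode
    os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

# make_executable("deps/ebsynth/bin/ebsynth")
```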
from fresco.
Related Issues (20)
- Will a video larger than 1M blow up a 24G GPU? Or is it just me? HOT 4
- TypeError: Video.__init__() got an unexpected keyword argument 'source' HOT 3
- Enable support to use lightning model
- ImportError: cannot import name 'logger' from 'diffusers.models.unet_2d_condition' (/usr/local/lib/python3.10/dist-packages/diffusers/models/unet_2d_condition.py) HOT 1
- notebook doesn't run HOT 3
- ebsynth generation fails HOT 7
- run on Colab with A100 GPU and 40G GPU RAM still got Error HOT 6
- run webUI.py with a 3 seconds video of 1280x720, got Error HOT 1
- [Run Key Frames] sucess, but [Run Propagation (Ebsynth)] failed: FileNotFoundError: [Errno 2] No such file or directory: 'output/480x480/blend.mp4' HOT 1
- Got RuntimeError:: element 0 of tensors does not require grad and does not have a grad_fn HOT 1
- python video_blend.py Cannot run HOT 3
- Does it support LCM models? HOT 6
- Diffusers HOT 2
- Video blend too slow: 18 vCPU / RTX 3090(24GB) HOT 1
- can not find target file while executing process_seq(video_sequence, i, blend_histogram, blend_gradient) in video_blend.py HOT 21
- blender not found
- Meaning of unet_chunk_size in FRESCOAttnProcessor2_0 codes? HOT 2
- Standalone video_blend usage instructions unclear HOT 1
- bad result of white cat