
qiuyu96 / codef

[CVPR 2024 Highlight] Official PyTorch implementation of CoDeF: Content Deformation Fields for Temporally Consistent Video Processing

Home Page: https://qiuyu96.github.io/CoDeF/

License: Other

Python 98.13% Shell 1.87%

codef's People

Contributors

ezioby, henry123-boy, ken-ouyang, qiuyu96, shenyujun, zkcys001


codef's Issues

About tinycudann

I want to train on my own new video. I prepared the environment and installed the required libraries, but after running the command sh scripts/train_multi.sh I get the error No module named 'tinycudann'. I tried installing it with pip/pip3/conda, but it cannot be found. How should I deal with this?

My canonical image generation is very bad. May I ask where I made a mistake?

Hello author, your project presentation was impressive. Redrawing a video with your example training set worked great, but training on my own video gave poor results. Could you please help me see what the reason is?
My training parameter settings are as follows:
[screenshot of training parameters]
My input frame looks like this:
[frame 0001]
The content of the video is a character jumping and moving.
I used the last checkpoint after training, but the canonical image I got looks like this:
[canonical_0]
This also led to poor video results in the subsequent rendering.

How to process two canonical images

Thank you for doing such a great job!

I want to use ControlNet for video style transfer. For your example beauty_1, the reconstruction step generates two canonical images. If I simply use ControlNet to transfer these two images, the generated video is bad even with the same ControlNet seed. May I ask which step I got wrong?

Thank you!

Question about Canonical Step

Thanks for the great work!
I notice that all images in the transformed video are warped from the first transformed image, a.k.a. the canonical image, which confuses me a little.
code link
I'm wondering whether cases like a person turning around can be handled well by this deformation field.
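For context, a minimal, hypothetical sketch (not the repository's code) of the behaviour being asked about: a single canonical image is warped into each frame by a per-frame deformation of the sampling grid, so every output frame is pulled from the same canonical content.

import torch
import torch.nn.functional as F

canonical = torch.rand(1, 3, 256, 256)  # stand-in for the learned canonical image

# Base sampling grid in [-1, 1] with shape (1, H, W, 2), as grid_sample expects.
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 256),
                        torch.linspace(-1, 1, 256), indexing="ij")
base_grid = torch.stack((xs, ys), dim=-1).unsqueeze(0)

frames = []
for t in range(4):  # pretend the video has 4 frames
    # In CoDeF this offset would come from the trained deformation field;
    # here it is just random noise for illustration.
    offset = 0.01 * torch.randn_like(base_grid)
    frames.append(F.grid_sample(canonical, base_grid + offset, align_corners=True))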

Windows support?

Hello, tinycudann is really hard to install on Windows; at least I failed with CUDA 11.8 on Windows.
Would you consider adding a vanilla PyTorch implementation of the model so that users can run it without tinycudann?

A minor question about code

Hi,
I don't understand the meaning of this line of code. Can you help explain it? Thanks!

CoDeF/train.py

Line 202 in 2407bfa

pe_deformed_grid = (deformed_grid + 0.3) / 1.6

By the way, is there any way to print the model network of tinycudann? E.g. I want to see what happens inside implicit_video.encoder (tcnn.Network).
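For what it's worth, a hypothetical illustration (not from the repository, and the intended range is my assumption) of what an affine remap like this does: coordinates that drift into roughly [-0.3, 1.3] after deformation are mapped back into [0, 1], the range positional/hash encoders typically expect. Printing a tcnn.Network works like printing any nn.Module, but since its layers are fused CUDA kernels you mostly see its configuration rather than per-layer weights.

import torch

# (x + 0.3) / 1.6 maps the interval [-0.3, 1.3] onto [0, 1].
deformed_grid = torch.tensor([-0.3, 0.0, 0.5, 1.0, 1.3])
pe_deformed_grid = (deformed_grid + 0.3) / 1.6
print(pe_deformed_grid)  # tensor([0.0000, 0.1875, 0.5000, 0.8125, 1.0000])

# To inspect the tinycudann encoder, printing the module shows its config, e.g.
# print(system.implicit_video.encoder)  # attribute path taken from the question above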

With masks, test_canonical.sh reports error: expected grid and input to have same batch size

File "D:\workspace\CoDeF\train.py", line 214, in forward
results = torch.nn.functional.grid_sample(
File "C:\Users\ASUS\anaconda3\lib\site-packages\torch\nn\functional.py", line 4244, in grid_sample
return torch.grid_sampler(input, grid, mode_enum, padding_mode_enum, align_corners)
RuntimeError: grid_sampler(): expected grid and input to have same batch size, but got input with sizes [0, 3, 540, 540] and grid with sizes [1, 291600, 1, 2]

But if I set mask_dir to null, the code runs fine.
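For reference, a small standalone reproduction of the error above (shapes copied from the message; everything else is illustrative): grid_sample requires the input and the grid to share the same batch size, and an input with batch size 0 usually means no mask images were actually loaded from mask_dir, which would be consistent with the run working when mask_dir is null.

import torch
import torch.nn.functional as F

empty_masks = torch.rand(0, 3, 540, 540)    # batch 0: nothing was loaded
grid = torch.rand(1, 291600, 1, 2) * 2 - 1  # batch 1: one frame's sample locations

try:
    F.grid_sample(empty_masks, grid, align_corners=True)
except RuntimeError as err:
    print(err)  # expected grid and input to have same batch size ...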

How about Ebsynth?

Hello, this project is very good; I am very grateful to the authors for open-sourcing it.
I am not a professional in artificial intelligence, but I am very interested in it.
I have a question: may I simply understand this method as an upgraded version of the earlier Ebsynth approach? Maybe my understanding is superficial, please forgive me.

Ebsynth's site is https://ebsynth.com/

Asking for advice on tuning the results

Hello,
I tried some videos locally and the results are not very stable. I'd like to ask where the problem might be.

The first video, 46 frames, in which the scene barely changes: the result is quite stable.
https://github.com/qiuyu96/CoDeF/assets/12007227/4cceed26-5d16-4770-88b2-f35ee1bf8b9a

The second video, 64 frames: even with flow and masks the result is not good, and the extracted canonical image is also poor. Below are the extracted canonical image, the reconstruction based on the canonical image, and the translated result (these are without flow and masks; using them gives only a small improvement; parameters are the defaults).
[canonical_0 (3)]
https://github.com/qiuyu96/CoDeF/assets/12007227/dd128b1b-bee9-4888-8a1f-e927d5eadcf3
https://github.com/qiuyu96/CoDeF/assets/12007227/6bba232b-2f36-4d31-bd22-e5eb97ec2521

The third video, 300 frames, is a longer version of the second one. This time the extracted canonical image is even messier, the reconstructed video has severe ghosting, and the translated video no longer moves (trained for 50000 steps).
[canonical_0 (1)]
[image]
https://github.com/qiuyu96/CoDeF/assets/12007227/2d77d41d-41b4-43b8-ae30-8404f52a27eb

Overall, adding flow improves the result, and adding masks helps less than flow. Once the motion in the video is a bit faster, the extracted canonical image becomes clearly worse. Could you advise how to keep the results relatively stable? Thanks.

The indexing check on line 151 of video_dataset.py, I was wrong, I'm sorry

'flows': self.all_flows[idx] if (idx<len(self.all_images)-2)&(self.all_flows is not None) else -1e5,
The index check here may be incorrect, causing the index to exceed the list range. I suggest making the following modifications:
'flows': self.all_flows[idx] if (idx < len(self.all_images) - 2) and (self.all_flows is not None) and (idx < len(self.all_flows)) else -1e5,
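A standalone version of the suggested guard (the -1e5 sentinel and conditions are taken from the snippet above; the function name is mine for illustration), using and so the extra bounds check short-circuits before the list is indexed:

def get_flow(all_flows, all_images, idx):
    # Only index the flow list when the index is valid; otherwise return the sentinel.
    if (all_flows is not None
            and idx < len(all_images) - 2
            and idx < len(all_flows)):
        return all_flows[idx]
    return -1e5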

GPU memory question

Does it have to be 48GB? Can my V100-32GB be used? What is the minimum GPU memory?

Why does this happen?

(codef) [root@prod-emr-gpu01 torch]# pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
Looking in indexes: http://de.mirrors.cloud.aliyuncs.com/pypi/simple/
Collecting git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
Cloning https://github.com/NVlabs/tiny-cuda-nn/ to /tmp/pip-req-build-lyo_m2r9
Running command git clone --filter=blob:none --quiet https://github.com/NVlabs/tiny-cuda-nn/ /tmp/pip-req-build-lyo_m2r9
Resolved https://github.com/NVlabs/tiny-cuda-nn/ to commit NVlabs/tiny-cuda-nn@6f018a9
Running command git submodule update --init --recursive -q
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [22 lines of output]
/tmp/pip-req-build-lyo_m2r9/bindings/torch/setup.py:5: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
from pkg_resources import parse_version
Traceback (most recent call last):
  File "<string>", line 2, in <module>
  File "<pip-setuptools-caller>", line 34, in <module>
  File "/tmp/pip-req-build-lyo_m2r9/bindings/torch/setup.py", line 11, in <module>
    from torch.utils.cpp_extension import BuildExtension, CUDAExtension
  File "/data/miniconda3/envs/codef/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 19, in <module>
    from .hipify import hipify_python
  File "/data/miniconda3/envs/codef/lib/python3.9/site-packages/torch/utils/hipify/hipify_python.py", line 34, in <module>
    from .cuda_to_hip_mappings import CUDA_TO_HIP_MAPPINGS
  File "/data/miniconda3/envs/codef/lib/python3.9/site-packages/torch/utils/hipify/cuda_to_hip_mappings.py", line 34, in <module>
    rocm_path = subprocess.check_output(["hipconfig", "--rocmpath"]).decode("utf-8")
  File "/data/miniconda3/envs/codef/lib/python3.9/subprocess.py", line 424, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/data/miniconda3/envs/codef/lib/python3.9/subprocess.py", line 505, in run
    with Popen(*popenargs, **kwargs) as process:
  File "/data/miniconda3/envs/codef/lib/python3.9/subprocess.py", line 951, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/data/miniconda3/envs/codef/lib/python3.9/subprocess.py", line 1837, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
NotADirectoryError: [Errno 20] Not a directory: 'hipconfig'
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

(codef) [root@prod-emr-gpu01 torch]# python
Python 3.9.17 | packaged by conda-forge | (main, Aug 10 2023, 07:02:31)
[GCC 12.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

import torch
print(torch.__version__)
2.0.1+cu117
print(torch.cuda.is_available())
True

My tests don't work very well, am I missing something?

My base_control image:
[00017-2710618556]
My transformed video:
beauty_0_base_transformed_dual.mp4

  1. If I train my own model (on my own video), do I only need one image sequence? How do I train on my own videos?

  2. About base_control: which ControlNet v1.1 mode was used for it: depth, OpenPose, SoftEdge, or another?

How do I do style transfer on a new video?

Hello! I have already set up this project on a server with an NVIDIA GPU, and I have run ./scripts/test_multi.sh and ./scripts/test_canonical.sh; the results match what is expected. But I don't know how to do this style transfer with my own video. What steps are needed to do style transfer on my own video? Thanks.

For videos that change rapidly

It's really very good work, but I have some problems. I want to do video translation on a dancing video, but the canonical image is very bad, as follows:
[canonical_0 (1)]
I wonder if my hyperparameters are set incorrectly, or is the method somewhat less effective for videos with faster motion?

bad canonical image?

Hi, I tested reconstruction using this video. The reconstruction is OK, but I get a strange canonical image, as follows:

walf_base_dual.mp4

[canonical image]

speedup the training

How to adjust the config to make the training faster if large GPU memory is available? Thanks!

No reconstructed videos, after running “./scripts/test_multi.sh”

After running “./scripts/test_multi.sh”, I didn't find the reconstructed videos in “results/all_sequences/{NAME}/{EXP_NAME}”.

I need help.

./scripts/test_multi.sh

ModelCheckpoint(save_last=True, save_top_k=-1, monitor=None) will duplicate the last checkpoint saved.
/root/miniconda3/lib/python3.10/site-packages/lightning_fabric/connector.py:562: UserWarning: 16 is supported for historical reasons but its usage is discouraged. Please set your precision to 16-mixed instead!
rank_zero_warn(
Using 16bit Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/1

distributed_backend=nccl
All distributed processes registered. Starting with 1 processes

LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
/root/miniconda3/lib/python3.10/site-packages/torch/utils/tensorboard/__init__.py:4: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
  if not hasattr(tensorboard, "__version__") or LooseVersion(
/root/miniconda3/lib/python3.10/site-packages/torch/utils/tensorboard/__init__.py:6: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
  ) < LooseVersion("1.15"):
/root/miniconda3/lib/python3.10/site-packages/pytorch_lightning/utilities/data.py:103: UserWarning: Total length of DataLoader across ranks is zero. Please make sure this was your intention.
rank_zero_warn(
TEST Profiler Report


| Action | Mean duration (s) | Num calls | Total time (s) | Percentage % |

| Total | - | 11 | 0.74908 | 100 % |

| [Callback]ModelCheckpoint{'monitor': None, 'mode': 'max', 'every_n_train_steps': 2000, 'every_n_epochs': 0, 'train_time_interval': None}.setup | 0.00045171 | 1 | 0.00045171 | 0.060302 |
| [Callback]TQDMProgressBar.setup | 1.1806e-05 | 1 | 1.1806e-05 | 0.0015761 |
| [LightningModule]ImplicitVideoSystem.setup | 1.043e-05 | 1 | 1.043e-05 | 0.0013924 |
| [Callback]TQDMProgressBar.teardown | 8.535e-06 | 1 | 8.535e-06 | 0.0011394 |
| [LightningModule]ImplicitVideoSystem.teardown | 6.5661e-06 | 1 | 6.5661e-06 | 0.00087654 |
| [LightningModule]ImplicitVideoSystem.configure_callbacks | 2.96e-06 | 1 | 2.96e-06 | 0.00039515 |
| [Callback]ModelSummary.setup | 2.274e-06 | 1 | 2.274e-06 | 0.00030357 |
| [LightningModule]ImplicitVideoSystem.configure_sharded_model | 2.252e-06 | 1 | 2.252e-06 | 0.00030063 |
| [Callback]ModelSummary.teardown | 1.909e-06 | 1 | 1.909e-06 | 0.00025485 |
| [LightningModule]ImplicitVideoSystem.prepare_data | 1.554e-06 | 1 | 1.554e-06 | 0.00020745 |
| [Callback]ModelCheckpoint{'monitor': None, 'mode': 'max', 'every_n_train_steps': 2000, 'every_n_epochs': 0, 'train_time_interval': None}.teardown | 1.544e-06 | 1 | 1.544e-06 | 0.00020612 |

A little question about the formula of flow loss in paper.

In the paper, the formula of the flow loss is
[image of the flow loss formula]
and the paper introduces $F$ as the optical flow.
However, the paper also mentions

Corresponding points identified by flows with high confidence should be the same points in the canonical field

which fits my intuition, so I don't understand why the optical flow term is subtracted here.
In the code I also don't find this optical flow term:
# Optical flow loss.
if ret.flow_loss[0] != 0:
    mk_flow_t = torch.logical_and(mk_t, flows[0].sum(dim=-1) < 3)
    loss = loss + torch.nn.functional.l1_loss(
        ret.flow_loss[i][0][mk_flow_t],
        ret.flow_loss[i][1][mk_flow_t]) * self.hparams.flow_loss

Can you please explain why this optical flow term is in the formula? Thanks.

Hello

Hello world!!!

Equation 5

In equation (5) you mention w as a function of j, k, but only j appears in the equation:

[image of equation (5)]

So how does k play a role in it?

OSError: [Errno 32] Broken pipe

Hello! when I ran ./scripts/test_multi.sh using the provided data, the following error occurred. How can I solve this problem? Thanks~

You are using a CUDA device ('NVIDIA GeForce RTX 4080') that has Tensor Cores. To properly utilize them, you should set `torch.set_float32_matmul_precision('medium' | 'high')` which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
/home/ft/.conda/envs/VIST/lib/python3.8/site-packages/torch/utils/tensorboard/__init__.py:4: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
  if not hasattr(tensorboard, "__version__") or LooseVersion(
/home/ft/.conda/envs/VIST/lib/python3.8/site-packages/torch/utils/tensorboard/__init__.py:6: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
  ) < LooseVersion("1.15"):
Testing DataLoader 0:   1%|▏                   | 1/99 [00:00<00:11,  8.50it/s]/home/ft/.conda/envs/VIST/lib/python3.8/site-packages/skvideo/io/ffmpeg.py:466: DeprecationWarning: tostring() is deprecated. Use tobytes() instead.
  self._proc.stdin.write(vid.tostring())
Traceback (most recent call last):
  File "/home/ft/.conda/envs/VIST/lib/python3.8/site-packages/skvideo/io/ffmpeg.py", line 466, in writeFrame
    self._proc.stdin.write(vid.tostring())
BrokenPipeError: [Errno 32] Broken pipe

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 557, in <module>
    main(hparams)
  File "train.py", line 550, in main
    trainer.test(system, dataloaders=system.test_dataloader())
  File "/home/ft/.conda/envs/VIST/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 706, in test
    return call._call_and_handle_interrupt(
  File "/home/ft/.conda/envs/VIST/lib/python3.8/site-packages/pytorch_lightning/trainer/call.py", line 42, in _call_and_handle_interrupt
    return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
  File "/home/ft/.conda/envs/VIST/lib/python3.8/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 92, in launch
    return function(*args, **kwargs)
  File "/home/ft/.conda/envs/VIST/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 749, in _test_impl
    results = self._run(model, ckpt_path=ckpt_path)
  File "/home/ft/.conda/envs/VIST/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 935, in _run
    results = self._run_stage()
  File "/home/ft/.conda/envs/VIST/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 971, in _run_stage
    return self._evaluation_loop.run()
  File "/home/ft/.conda/envs/VIST/lib/python3.8/site-packages/pytorch_lightning/loops/utilities.py", line 177, in _decorator
    return loop_run(self, *args, **kwargs)
  File "/home/ft/.conda/envs/VIST/lib/python3.8/site-packages/pytorch_lightning/loops/evaluation_loop.py", line 115, in run
    self._evaluation_step(batch, batch_idx, dataloader_idx)
  File "/home/ft/.conda/envs/VIST/lib/python3.8/site-packages/pytorch_lightning/loops/evaluation_loop.py", line 375, in _evaluation_step
    output = call._call_strategy_hook(trainer, hook_name, *step_kwargs.values())
  File "/home/ft/.conda/envs/VIST/lib/python3.8/site-packages/pytorch_lightning/trainer/call.py", line 288, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/home/ft/.conda/envs/VIST/lib/python3.8/site-packages/pytorch_lightning/strategies/ddp.py", line 348, in test_step
    return self.model.test_step(*args, **kwargs)
  File "train.py", line 489, in test_step
    self.video_visualizer.add(img)
  File "/home/ft/Project/CoDeF/utils/video_visualizer.py", line 95, in add
    self.video.writeFrame(frame)
  File "/home/ft/.conda/envs/VIST/lib/python3.8/site-packages/skvideo/io/ffmpeg.py", line 471, in writeFrame
    raise IOError(msg)
OSError: [Errno 32] Broken pipe

FFMPEG COMMAND:
/home/ft/.conda/envs/VIST/bin/ffmpeg -y -f rawvideo -pix_fmt rgb24 -s 540x960 -i - -r 30.00 -s 540x960 -vcodec libx264 -crf 1 -pix_fmt yuv420p /home/ft/Project/CoDeF/results/all_sequences/scene_0/base/scene_0_base.mp4

FFMPEG STDERR OUTPUT:

Testing DataLoader 0:   1%|          | 1/99 [00:00<00:47,  2.07it/s] 
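As a way to narrow this down, here is a small standalone check (file name and frame count are arbitrary assumptions) that writes a few random frames through scikit-video's FFmpegWriter at the same 540x960 resolution; if this also raises a broken pipe, the installed ffmpeg (or its libx264 codec) is the problem rather than CoDeF itself.

import numpy as np
import skvideo.io

writer = skvideo.io.FFmpegWriter(
    "ffmpeg_smoke_test.mp4",
    outputdict={"-vcodec": "libx264", "-pix_fmt": "yuv420p", "-r": "30"})
for _ in range(10):
    # Frames are height x width x channels, matching the -s 540x960 command above.
    writer.writeFrame(np.random.randint(0, 256, (960, 540, 3), dtype=np.uint8))
writer.close()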

how to change batchsize?

RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 2916000 but got size 29160000 for tensor number 1 in the list.

I just changed the batch size to 10 in the YAML, and this happened.

Few questions about the paper

Hi, really thanks for the great work! I am new to video processing and am confused about several places in the paper; I really hope you can give me some hints, thanks in advance :)

  1. What is the advantage of using a hash table?
  2. Are the values in the 3D hash table trainable? How are their initial values determined?
  3. What are $\Delta x$, $\Delta y$ in Fig. 2? They don't seem to be mentioned in the main text. Do they represent residuals that will be added to the input coordinates (x, y)?
  4. Upon the optimization of the content deformation field, the canonical image $I_c$ is retrieved by setting the deformation of all points to zero. What does "setting the deformation of all points to zero" mean? Can I understand it as simply traversing all possible (x', y') coordinates and feeding them into the canonical MLP to obtain the canonical image?

Really appreciate your time, looking forward to your reply.
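For what it's worth, a rough sketch of the interpretation described in question 4 (the tiny MLP is a placeholder of my own, not the repository's canonical field): with the deformation set to zero, every pixel coordinate is fed directly to the canonical model and the outputs are reshaped into the canonical image.

import torch

H, W = 128, 128
canonical_mlp = torch.nn.Sequential(      # placeholder for the trained canonical field
    torch.nn.Linear(2, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 3), torch.nn.Sigmoid())

ys, xs = torch.meshgrid(torch.linspace(0, 1, H),
                        torch.linspace(0, 1, W), indexing="ij")
coords = torch.stack((xs, ys), dim=-1).reshape(-1, 2)  # every (x', y') location
deformation = torch.zeros_like(coords)                 # deformation of all points set to zero

with torch.no_grad():
    rgb = canonical_mlp(coords + deformation)          # query the canonical field directly
canonical_image = rgb.reshape(H, W, 3)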

[Errno 32] Broken pipe - ffmpeg

Hi, I have a question when using a custom dataset. Training completed fine and I got a checkpoint, but the following Broken pipe error occurred during testing. How can I solve it?
[Screenshot from 2023-08-18 15-39-41]

video length

thanks for your great work!
May I ask how long a video this method can support?

About training model hyperparameter settings?

May I ask whether you will publish a document about training hyperparameter settings? I found that many people are asking about training tips for long videos and fast motion. I saw your replies, but we still don't know how to set the parameters.

[screenshots of the related replies]

Also, I found that the link to that question is broken, so here it is:
#39

ffmpeg error??? and OSError: [Errno 32] Broken pipe?????

/data/anaconda3/envs/codef/lib/python3.10/site-packages/lightning_utilities/core/imports.py:13: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
  import pkg_resources
/data/anaconda3/envs/codef/lib/python3.10/site-packages/lightning_fabric/__init__.py:36: DeprecationWarning: Deprecated call to pkg_resources.declare_namespace('lightning_fabric').
Implementing implicit namespace packages (as specified in PEP 420) is preferred to pkg_resources.declare_namespace. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
  __import__("pkg_resources").declare_namespace(__name__)
/data/anaconda3/envs/codef/lib/python3.10/site-packages/torchmetrics/utilities/imports.py:24: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
  _PYTHON_LOWER_3_8 = LooseVersion(_PYTHON_VERSION) < LooseVersion("3.8")
/data/anaconda3/envs/codef/lib/python3.10/site-packages/torchmetrics/utilities/imports.py:24: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
  _PYTHON_LOWER_3_8 = LooseVersion(_PYTHON_VERSION) < LooseVersion("3.8")
/data/anaconda3/envs/codef/lib/python3.10/site-packages/pytorch_lightning/__init__.py:36: DeprecationWarning: Deprecated call to pkg_resources.declare_namespace('pytorch_lightning').
Implementing implicit namespace packages (as specified in PEP 420) is preferred to pkg_resources.declare_namespace. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
  __import__("pkg_resources").declare_namespace(__name__)
ModelCheckpoint(save_last=True, save_top_k=-1, monitor=None) will duplicate the last checkpoint saved.
/data/anaconda3/envs/codef/lib/python3.10/site-packages/lightning_fabric/connector.py:562: UserWarning: 16 is supported for historical reasons but its usage is discouraged. Please set your precision to 16-mixed instead!
rank_zero_warn(
Using 16bit Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
all_sequences/scene_0/scene_0/00000.png
all_sequences/scene_0/scene_0/00001.png
all_sequences/scene_0/scene_0/00002.png
all_sequences/scene_0/scene_0/00003.png
all_sequences/scene_0/scene_0/00004.png
all_sequences/scene_0/scene_0/00005.png
all_sequences/scene_0/scene_0/00006.png
all_sequences/scene_0/scene_0/00007.png
all_sequences/scene_0/scene_0/00008.png
all_sequences/scene_0/scene_0/00009.png
all_sequences/scene_0/scene_0/00010.png
all_sequences/scene_0/scene_0/00011.png
all_sequences/scene_0/scene_0/00012.png
all_sequences/scene_0/scene_0/00013.png
all_sequences/scene_0/scene_0/00014.png
all_sequences/scene_0/scene_0/00015.png
all_sequences/scene_0/scene_0/00016.png
all_sequences/scene_0/scene_0/00017.png
all_sequences/scene_0/scene_0/00018.png
all_sequences/scene_0/scene_0/00019.png
all_sequences/scene_0/scene_0/00020.png
all_sequences/scene_0/scene_0/00021.png
all_sequences/scene_0/scene_0/00022.png
all_sequences/scene_0/scene_0/00023.png
all_sequences/scene_0/scene_0/00024.png
all_sequences/scene_0/scene_0/00025.png
all_sequences/scene_0/scene_0/00026.png
all_sequences/scene_0/scene_0/00027.png
all_sequences/scene_0/scene_0/00028.png
all_sequences/scene_0/scene_0/00029.png
all_sequences/scene_0/scene_0/00030.png
all_sequences/scene_0/scene_0/00031.png
all_sequences/scene_0/scene_0/00032.png
all_sequences/scene_0/scene_0/00033.png
all_sequences/scene_0/scene_0/00034.png
all_sequences/scene_0/scene_0/00035.png
all_sequences/scene_0/scene_0/00036.png
all_sequences/scene_0/scene_0/00037.png
all_sequences/scene_0/scene_0/00038.png
all_sequences/scene_0/scene_0/00039.png
all_sequences/scene_0/scene_0/00040.png
all_sequences/scene_0/scene_0/00041.png
all_sequences/scene_0/scene_0/00042.png
all_sequences/scene_0/scene_0/00043.png
all_sequences/scene_0/scene_0/00044.png
all_sequences/scene_0/scene_0/00045.png
all_sequences/scene_0/scene_0/00046.png
all_sequences/scene_0/scene_0/00047.png
all_sequences/scene_0/scene_0/00048.png
all_sequences/scene_0/scene_0/00049.png
all_sequences/scene_0/scene_0/00050.png
all_sequences/scene_0/scene_0/00051.png
all_sequences/scene_0/scene_0/00052.png
all_sequences/scene_0/scene_0/00053.png
all_sequences/scene_0/scene_0/00054.png
all_sequences/scene_0/scene_0/00055.png
all_sequences/scene_0/scene_0/00056.png
all_sequences/scene_0/scene_0/00057.png
all_sequences/scene_0/scene_0/00058.png
all_sequences/scene_0/scene_0/00059.png
all_sequences/scene_0/scene_0/00060.png
all_sequences/scene_0/scene_0/00061.png
all_sequences/scene_0/scene_0/00062.png
all_sequences/scene_0/scene_0/00063.png
all_sequences/scene_0/scene_0/00064.png
all_sequences/scene_0/scene_0/00065.png
all_sequences/scene_0/scene_0/00066.png
all_sequences/scene_0/scene_0/00067.png
all_sequences/scene_0/scene_0/00068.png
all_sequences/scene_0/scene_0/00069.png
all_sequences/scene_0/scene_0/00070.png
all_sequences/scene_0/scene_0/00071.png
all_sequences/scene_0/scene_0/00072.png
all_sequences/scene_0/scene_0/00073.png
all_sequences/scene_0/scene_0/00074.png
all_sequences/scene_0/scene_0/00075.png
all_sequences/scene_0/scene_0/00076.png
all_sequences/scene_0/scene_0/00077.png
all_sequences/scene_0/scene_0/00078.png
all_sequences/scene_0/scene_0/00079.png
all_sequences/scene_0/scene_0/00080.png
all_sequences/scene_0/scene_0/00081.png
all_sequences/scene_0/scene_0/00082.png
all_sequences/scene_0/scene_0/00083.png
all_sequences/scene_0/scene_0/00084.png
all_sequences/scene_0/scene_0/00085.png
all_sequences/scene_0/scene_0/00086.png
all_sequences/scene_0/scene_0/00087.png
all_sequences/scene_0/scene_0/00088.png
all_sequences/scene_0/scene_0/00089.png
all_sequences/scene_0/scene_0/00090.png
all_sequences/scene_0/scene_0/00091.png
all_sequences/scene_0/scene_0/00092.png
all_sequences/scene_0/scene_0/00093.png
all_sequences/scene_0/scene_0/00094.png
all_sequences/scene_0/scene_0/00095.png
all_sequences/scene_0/scene_0/00096.png
all_sequences/scene_0/scene_0/00097.png
all_sequences/scene_0/scene_0/00098.png
Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/1

distributed_backend=nccl
All distributed processes registered. Starting with 1 processes

You are using a CUDA device ('NVIDIA A10') that has Tensor Cores. To properly utilize them, you should set torch.set_float32_matmul_precision('medium' | 'high') which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
/data/anaconda3/envs/codef/lib/python3.10/site-packages/torch/utils/tensorboard/__init__.py:4: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
  if not hasattr(tensorboard, "__version__") or LooseVersion(
/data/anaconda3/envs/codef/lib/python3.10/site-packages/torch/utils/tensorboard/__init__.py:6: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
  ) < LooseVersion("1.15"):
Testing DataLoader 0: 1%|▏ | 1/99 [00:00<00:16, 5.84it/s]/data/anaconda3/envs/codef/lib/python3.10/site-packages/skvideo/io/ffmpeg.py:466: DeprecationWarning: tostring() is deprecated. Use tobytes() instead.
  self._proc.stdin.write(vid.tostring())
Traceback (most recent call last):
  File "/data/anaconda3/envs/codef/lib/python3.10/site-packages/skvideo/io/ffmpeg.py", line 466, in writeFrame
    self._proc.stdin.write(vid.tostring())
BrokenPipeError: [Errno 32] Broken pipe

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/data/CoDeF/train.py", line 557, in <module>
    main(hparams)
  File "/data/CoDeF/train.py", line 550, in main
    trainer.test(system, dataloaders=system.test_dataloader())
  File "/data/anaconda3/envs/codef/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 706, in test
    return call._call_and_handle_interrupt(
  File "/data/anaconda3/envs/codef/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 42, in _call_and_handle_interrupt
    return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
  File "/data/anaconda3/envs/codef/lib/python3.10/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 92, in launch
    return function(*args, **kwargs)
  File "/data/anaconda3/envs/codef/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 749, in _test_impl
    results = self._run(model, ckpt_path=ckpt_path)
  File "/data/anaconda3/envs/codef/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 935, in _run
    results = self._run_stage()
  File "/data/anaconda3/envs/codef/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 971, in _run_stage
    return self._evaluation_loop.run()
  File "/data/anaconda3/envs/codef/lib/python3.10/site-packages/pytorch_lightning/loops/utilities.py", line 177, in _decorator
    return loop_run(self, *args, **kwargs)
  File "/data/anaconda3/envs/codef/lib/python3.10/site-packages/pytorch_lightning/loops/evaluation_loop.py", line 115, in run
    self._evaluation_step(batch, batch_idx, dataloader_idx)
  File "/data/anaconda3/envs/codef/lib/python3.10/site-packages/pytorch_lightning/loops/evaluation_loop.py", line 375, in _evaluation_step
    output = call._call_strategy_hook(trainer, hook_name, *step_kwargs.values())
  File "/data/anaconda3/envs/codef/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 288, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/data/anaconda3/envs/codef/lib/python3.10/site-packages/pytorch_lightning/strategies/ddp.py", line 348, in test_step
    return self.model.test_step(*args, **kwargs)
  File "/data/CoDeF/train.py", line 489, in test_step
    self.video_visualizer.add(img)
  File "/data/CoDeF/utils/video_visualizer.py", line 95, in add
    self.video.writeFrame(frame)
  File "/data/anaconda3/envs/codef/lib/python3.10/site-packages/skvideo/io/ffmpeg.py", line 471, in writeFrame
    raise IOError(msg)
OSError: [Errno 32] Broken pipe

FFMPEG COMMAND:
/usr/local/bin/ffmpeg -y -f rawvideo -pix_fmt rgb24 -s 540x960 -i - -r 30.00 -s 540x960 -vcodec libx264 -crf 1 -pix_fmt yuv420p /data/CoDeF/results/all_sequences/scene_0/base/scene_0_base.mp4

FFMPEG STDERR OUTPUT:

Testing DataLoader 0: 1%| | 1/99 [00:00<00:42, 2.31it/s]

This is so hard, nothing but errors; I've been trying to install it for a week and still haven't gotten it working.
