yanx27 / EverybodyDanceNow_reproduce_pytorch
602 stars · 23 watchers · 174 forks · 347.76 MB

Everybody Dance Now reproduced in PyTorch

License: MIT License

Python 100.00%
pytorch pix2pixhd openpose everybody-dance-now pose2pose

Contributors: irone-g-g, seaweiqing, yanx27

everybodydancenow_reproduce_pytorch's Issues

pix2pixHD raise('Generator must exist!') TypeError: exceptions must derive from BaseException

Traceback (most recent call last):
File "train_pose2vid.py", line 127, in
main()
File "train_pose2vid.py", line 41, in main
model = create_model(opt)
File "/XXX/EverybodyDanceNow_reproduce_pytorch/src/pix2pixHD/models/models.py", line 15, in create_model
model.initialize(opt)
File "/XXX/EverybodyDanceNow_reproduce_pytorch/src/pix2pixHD/models/pix2pixHD_model.py", line 64, in initialize
self.load_network(self.netG, 'G', opt.which_epoch, pretrained_path)
File "/XXX/EverybodyDanceNow_reproduce_pytorch/src/pix2pixHD/models/base_model.py", line 60, in load_network
raise('Generator must exist!')
TypeError: exceptions must derive from BaseException

Any help is appreciated!

ValueError: Invalid file object: <_io.BufferedReader name=18>

Traceback (most recent call last):
File "make_gif.py", line 54, in
anim.save("output.gif", writer="imagemagick")
File "/XXX/matplotlib/animation.py", line 1174, in save
writer.grab_frame(**savefig_kwargs)
File "/XXX/contextlib.py", line 99, in exit
self.gen.throw(type, value, traceback)
File "/XXX/matplotlib/animation.py", line 232, in saving
self.finish()
File "/XXX/matplotlib/animation.py", line 358, in finish
self.cleanup()
File "/XXX/matplotlib/animation.py", line 395, in cleanup
out, err = self._proc.communicate()
File "/XXX/python3.6/subprocess.py", line 843, in communicate
stdout, stderr = self._communicate(input, endtime, timeout)
File "/XXX/python3.6/subprocess.py", line 1505, in _communicate
selector.register(self.stdout, selectors.EVENT_READ)
File "/XXX/python3.6/selectors.py", line 351, in register
key = super().register(fileobj, events, data)
File "/XXX/python3.6/selectors.py", line 237, in register
key = SelectorKey(fileobj, self._fileobj_lookup(fileobj), events, data)
File "/XXX/python3.6/selectors.py", line 224, in _fileobj_lookup
return _fileobj_to_fd(fileobj)
File "/hXXX/python3.6/selectors.py", line 39, in _fileobj_to_fd
"{!r}".format(fileobj)) from None
ValueError: Invalid file object: <_io.BufferedReader name=18>

Any help is appreciated!

multiprocessing error in transfer.py

If you are using a Windows environment, you need to set nThreads=0 in both train_opt.py and test_opt.py under EverybodyDanceNow_reproduce_pytorch/src/config/, or you will get a multiprocessing error.
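For reference, a minimal sketch of why this matters, with illustrative names (not the repo's code):

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # On Windows, DataLoader workers are spawned rather than forked, so each
    # worker must re-pickle the dataset; any unpicklable object reachable from
    # it fails with pickling / "Ran out of input" errors. num_workers=0
    # (nThreads=0 in this repo's opt files) loads data in the main process.
    dataset = TensorDataset(torch.zeros(4, 3))  # stand-in dataset
    loader = DataLoader(dataset, batch_size=1, num_workers=0)
    for batch in loader:
        pass  # training loop goes here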

Requirements

@yanx27
Kindly add a requirements file so that all the modules can be installed with pip:
pip install -r requirements.txt

Regards,
Abdul Bari Khan

Face enhancement process doesn't work

I found the parameters in this part of the code a little confusing. I can't get it to run by following the README, since the data paths are not consistent across the different files.

target video

In the training phase, is the target video the same as the source video, or a different .mp4?
Could you please share the target video as well?
From the README, under "Make target pictures":
Put target video mv.mp4 in ./data/target/ and run make_target.py; pose.npy will be saved in ./data/target/, which contains the face coordinates (used in step 6).

invalid index of a 0-dim tensor. Use tensor.item() to convert a 0-dim tensor to a Python number

Traceback (most recent call last):
File "train_pose2vid.py", line 147, in
main()
File "train_pose2vid.py", line 104, in main
errors = {k: v.data[0] if not isinstance(v, int) else v for k, v in loss_dict.items()}
File "train_pose2vid.py", line 104, in
errors = {k: v.data[0] if not isinstance(v, int) else v for k, v in loss_dict.items()}
IndexError: invalid index of a 0-dim tensor. Use tensor.item() to convert a 0-dim tensor to a Python number

Has anybody met the same problem?
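A minimal sketch of the fix the error message itself suggests, with stand-in values (not the repo's code):

    import torch

    loss_dict = {'G_GAN': torch.tensor(0.7), 'D_real': 0}  # stand-in values
    # On PyTorch >= 0.4, v.data[0] can no longer index a 0-dim tensor;
    # v.item() extracts the Python number instead.
    errors = {k: v.item() if not isinstance(v, int) else v
              for k, v in loss_dict.items()}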

GPU selection

In train_target_images.py and make_source_images.py, there are calls to
torch.cuda.set_device(0)
However, from the documentation (https://pytorch.org/docs/0.4.1/cuda.html?highlight=set_device#torch.cuda.set_device) we can see that:

Usage of this function is discouraged in favor of device. In most cases it's better to use the CUDA_VISIBLE_DEVICES environment variable.

So I believe it is better to just use os.environ['CUDA_VISIBLE_DEVICES'] = '0'.
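A minimal sketch of that approach (the variable must be set before torch initializes CUDA):

    import os

    # After this, device 0 inside the process maps to the chosen physical GPU.
    os.environ['CUDA_VISIBLE_DEVICES'] = '0'

    import torch
    print(torch.cuda.current_device())  # prints 0: the only visible device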

By the way, train_opt.pkl also contains a GPU selection option, which I believe is unnecessary.

Errors in train_opt.pkl

There are some errors in this option file.
checkpoints_dir='../checkpoints/' should be checkpoints_dir='./checkpoints/'
and
dataroot='../data/target/train' should be dataroot='./data/target/train'
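A minimal sketch of patching the file in place, assuming train_opt.pkl is a pickled options object with the attribute names quoted above:

    import pickle

    with open('train_opt.pkl', 'rb') as f:
        opt = pickle.load(f)
    opt.checkpoints_dir = './checkpoints/'  # was '../checkpoints/'
    opt.dataroot = './data/target/train'    # was '../data/target/train'
    with open('train_opt.pkl', 'wb') as f:
        pickle.dump(opt, f)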

Temporal smoothing

Hi,

It's great work reproducing the paper. After reading your code, it seems that temporal smoothing has not been implemented. Am I right? Thanks.

About pose_model.pth!

Please tell me where I can download pose_model.pth from a working link! I can't access Google Drive! Thanks!

make_gif.py ==> 7gGRSy

make_gif.py

WARNING **: XXX : Error querying file info: Error when getting information for file "/XXX/EverybodyDanceNow_reproduce_pytorch/7gGRSy": No such file or directory

Any help is appreciated!

ValueError: zero-size array to reduction operation minimum which has no identity

Hello,

I met the error in the title in step 2 (make_target.py), at image 1918. I think the reason is the PyTorch version; I use PyTorch 1.0.0. In lines 98 & 99 of make_target.py,

img[int(head_cord[1] - crop_size): int(head_cord[1] + crop_size), int(head_cord[0] - crop_size): int(head_cord[0] + crop_size), :]

the start index can be negative when the person is at the edge of the image, which makes the slice empty. So I added a constraint:

head = img[max(0, int(head_cord[1] - crop_size)): int(head_cord[1] + crop_size), max(0, int(head_cord[0] - crop_size)): int(head_cord[0] + crop_size), :]

This guarantees the start index is at least 0.

No module named openpose_utils

Traceback (most recent call last):
  File "make_source.py", line 1, in <module>
    from openpose_utils import remove_noise, get_pose
ImportError: No module named openpose_utils

IndexError and ValueError in the last step make_gif

Hello! When I ran the last step I ran into the following problem (screenshot omitted), with the message:
ValueError: Cannot save animation: no writers are available. Please install ffmpeg to save animations.
But even after installing ffmpeg and ImageMagick I still get the same error. Is there a good solution? Many thanks!

About the face enhancer

Glad to see your work, especially the global pose normalization module. I've already added your project link on my repo home page. Please check here: Lotayou/everybody_dance_now_pytorch.

By the way, the face enhancer module looks familiar. I remember seeing this code in another repo before, but couldn't find a link in README.md. Do you mind adding a reference? Thanks :D

A method to improve performance of make_target & make_source

Hi, when I use make_target.py and make_source.py to generate training labels, I found that the speed goes down over time. Recently I reviewed the code and noticed an issue in both scripts.

Taking make_target.py as an example: at lines 99 & 100 it uses matplotlib to draw a figure, but it never clears the figure, so the speed slows to 20 s/it by around picture 10,000 and the script crashes at around 15,000 pictures.

There should be a plt.clf() after plt.savefig(); this keeps the speed from dropping :)
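A hypothetical per-frame loop illustrating the fix (not the repo's code): without plt.clf(), each frame's artists accumulate in the same figure, so every savefig redraws all previous frames and rendering keeps slowing down.

    import numpy as np
    import matplotlib
    matplotlib.use('Agg')  # headless backend for batch label generation
    import matplotlib.pyplot as plt

    for idx in range(3):  # stands in for thousands of training frames
        pose = np.random.rand(18, 2)  # stand-in pose keypoints
        plt.scatter(pose[:, 0], pose[:, 1])
        plt.savefig('label_%05d.png' % idx)
        plt.clf()  # clear the figure so the next frame starts empty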

EOFError: Ran out of input

Traceback (most recent call last):
File "D:/python/files/EverybodyDanceNow_reproduce_pytorch/train_pose2vid.py", line 127, in <module>
main()
File "D:/python/files/EverybodyDanceNow_reproduce_pytorch/train_pose2vid.py", line 49, in main
for i, data in enumerate(dataset, start=epoch_iter):
File "C:\Users\chenchao\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 501, in __iter__
return _DataLoaderIter(self)
File "C:\Users\chenchao\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 289, in __init__
w.start()
File "C:\Users\chenchao\Anaconda3\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Users\chenchao\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\chenchao\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\chenchao\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "C:\Users\chenchao\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle module objects
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\chenchao\Anaconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\chenchao\Anaconda3\lib\multiprocessing\spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

main.py ==> IsADirectoryError: [Errno 21] Is a directory: './data/face/test_real/ '

main.py

Traceback (most recent call last):
File "./face_enhancer/main.py", line 69, in
main(is_debug)
File "./face_enhancer/main.py", line 52, in main
image_folder = dataset.ImageFolderDataset(dataset_dir, cache=os.path.join(dataset_dir, 'local.db'))
File "/XXX/EverybodyDanceNow_reproduce_pytorch/face_enhancer/dataset.py", line 18, in init
tmp = imread(os.path.join(self.root, 'test_real', self.images[0]))
File "/XXX/skimage/io/_io.py", line 62, in imread
img = call_plugin('imread', fname, plugin=plugin, **plugin_args)
File "/XXX/skimage/io/manage_plugins.py", line 214, in call_plugin
return func(*args, **kwargs)
File "/XXX/skimage/io/_plugins/pil_plugin.py", line 35, in imread
with open(fname, 'rb') as f:
IsADirectoryError: [Errno 21] Is a directory: './data/face/test_real/ '

What caused this?

python3 prepare.py
Prepare test_real....
100%|█████████▉| 617/618 [01:11<00:00, 7.49it/s]
libpng warning: Image width is zero in IHDR
libpng warning: Image height is zero in IHDR
libpng error: Invalid IHDR data
libpng warning: Image width is zero in IHDR
libpng warning: Image height is zero in IHDR
libpng error: Invalid IHDR data
libpng warning: Image width is zero in IHDR
libpng warning: Image height is zero in IHDR
libpng error: Invalid IHDR data
100%|██████████| 618/618 [01:11<00:00, 8.65it/s]
Prepare test_sync....
CustomDatasetDataLoader
dataset [AlignedDataset] was created
GlobalGenerator(
(model): Sequential(
(0): ReflectionPad2d((3, 3, 3, 3))
(1): Conv2d(18, 64, kernel_size=(7, 7), stride=(1, 1))
(2): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace)
(4): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(5): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(6): ReLU(inplace)
(7): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(8): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(9): ReLU(inplace)
(10): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(11): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(12): ReLU(inplace)
(13): Conv2d(512, 1024, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(14): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(15): ReLU(inplace)
(16): ResnetBlock(
(conv_block): Sequential(
(0): ReflectionPad2d((1, 1, 1, 1))
(1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace)
(4): ReflectionPad2d((1, 1, 1, 1))
(5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
)
(17): ResnetBlock(
(conv_block): Sequential(
(0): ReflectionPad2d((1, 1, 1, 1))
(1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace)
(4): ReflectionPad2d((1, 1, 1, 1))
(5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
)
(18): ResnetBlock(
(conv_block): Sequential(
(0): ReflectionPad2d((1, 1, 1, 1))
(1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace)
(4): ReflectionPad2d((1, 1, 1, 1))
(5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
)
(19): ResnetBlock(
(conv_block): Sequential(
(0): ReflectionPad2d((1, 1, 1, 1))
(1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace)
(4): ReflectionPad2d((1, 1, 1, 1))
(5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
)
(20): ResnetBlock(
(conv_block): Sequential(
(0): ReflectionPad2d((1, 1, 1, 1))
(1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace)
(4): ReflectionPad2d((1, 1, 1, 1))
(5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
)
(21): ResnetBlock(
(conv_block): Sequential(
(0): ReflectionPad2d((1, 1, 1, 1))
(1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace)
(4): ReflectionPad2d((1, 1, 1, 1))
(5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
)
(22): ResnetBlock(
(conv_block): Sequential(
(0): ReflectionPad2d((1, 1, 1, 1))
(1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace)
(4): ReflectionPad2d((1, 1, 1, 1))
(5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
)
(23): ResnetBlock(
(conv_block): Sequential(
(0): ReflectionPad2d((1, 1, 1, 1))
(1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace)
(4): ReflectionPad2d((1, 1, 1, 1))
(5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
)
(24): ResnetBlock(
(conv_block): Sequential(
(0): ReflectionPad2d((1, 1, 1, 1))
(1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace)
(4): ReflectionPad2d((1, 1, 1, 1))
(5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
)
(25): ConvTranspose2d(1024, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), output_padding=(1, 1))
(26): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(27): ReLU(inplace)
(28): ConvTranspose2d(512, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), output_padding=(1, 1))
(29): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(30): ReLU(inplace)
(31): ConvTranspose2d(256, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), output_padding=(1, 1))
(32): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(33): ReLU(inplace)
(34): ConvTranspose2d(128, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), output_padding=(1, 1))
(35): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(36): ReLU(inplace)
(37): ReflectionPad2d((3, 3, 3, 3))
(38): Conv2d(64, 3, kernel_size=(7, 7), stride=(1, 1))
(39): Tanh()
)
)

0%| | 0/1235 [00:00<?, ?it/s]
../src/pix2pixHD/models/pix2pixHD_model.py:134: UserWarning: volatile was removed and now has no effect. Use with torch.no_grad(): instead.
input_label = Variable(input_label, volatile=infer)
100%|█████████▉| 1234/1235 [02:54<00:00, 7.15it/s]
Traceback (most recent call last):
File "prepare.py", line 64, in <module>
for data in tqdm(dataset):
File "/usr/local/lib/python3.5/dist-packages/tqdm/_tqdm.py", line 937, in __iter__
for obj in iterable:
File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 615, in __next__
batch = self.collate_fn([self.dataset[i] for i in indices])
File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 615, in <listcomp>
batch = self.collate_fn([self.dataset[i] for i in indices])
File "../src/pix2pixHD/data/aligned_dataset.py", line 40, in __getitem__
A = Image.open(A_path)
File "/usr/local/lib/python3.5/dist-packages/PIL/Image.py", line 2622, in open
% (filename if filename else fp))
OSError: cannot identify image file '../data/target/test_label/00617.png'

Can we also give back-side human images in training?

Hi. The code and explanation are very nice. I am going through this repository. In the source video you have taken only the front view of the person. If we also give the back side of the person, can it produce the result? That is, if I have one person rotating 360 degrees, can it give results?

Thanks in advance.

Regards,
SandhyaLaxmi Kanna

GPU under-utilisation

I have tried executing the code on a 1060 6GB variant and on a Tesla K80 12GB. The GPU is heavily under-utilized on both. Can you suggest a workaround?

How to extract multiple stick figures?

I want to extract multiple stick figures and generate something, but this code extracts just one skeleton (one instance of each body part).

I read through the whole PoseEstimation package but couldn't find where to change this.

What should I do?

error in transfer.py

Hi, when I use transfer.py, I get the error below.
0%| | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last):
File "D:/GAN/everybody dance/pytorch/EverybodyDanceNow_reproduce_pytorch-1.0/transfer.py", line 35, in <module>
for data in tqdm(dataset): # <torch.utils.data.dataloader.DataLoader object at 0x0000026E4176FFD0>
File "C:\Users\Jimi-iot\AppData\Local\Programs\Python\Python35\lib\site-packages\tqdm\_tqdm.py", line 979, in __iter__
for obj in iterable:
File "C:\Users\Jimi-iot\AppData\Local\Programs\Python\Python35\lib\site-packages\torch\utils\data\dataloader.py", line 819, in __iter__
return _DataLoaderIter(self)
File "C:\Users\Jimi-iot\AppData\Local\Programs\Python\Python35\lib\site-packages\torch\utils\data\dataloader.py", line 560, in __init__
w.start()
File "C:\Users\Jimi-iot\AppData\Local\Programs\Python\Python35\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Users\Jimi-iot\AppData\Local\Programs\Python\Python35\lib\multiprocessing\context.py", line 212, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\Jimi-iot\AppData\Local\Programs\Python\Python35\lib\multiprocessing\context.py", line 313, in _Popen
return Popen(process_obj)
File "C:\Users\Jimi-iot\AppData\Local\Programs\Python\Python35\lib\multiprocessing\popen_spawn_win32.py", line 66, in __init__
reduction.dump(process_obj, to_child)
File "C:\Users\Jimi-iot\AppData\Local\Programs\Python\Python35\lib\multiprocessing\reduction.py", line 59, in dump
ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class 'module'>: attribute lookup module on builtins failed

Can you give me any suggestions? Thanks.

cuDNN error: CUDNN_STATUS_INTERNAL_ERROR

Hello,

I tried to use your code but I got this error message at the very first step (make_source.py).

  0%|                                                 | 0/1000 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "make_source.py", line 80, in <module>
    paf, heatmap = get_outputs(multiplier, img, model, 'rtpose')
  File "src\PoseEstimation\evaluate\coco_eval.py", line 137, in get_outputs
    predicted_outputs, _ = model(batch_var)
  File "C:\Users\Cedric\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\Cedric\Anaconda3\lib\site-packages\torch\nn\parallel\data_parallel.py", line 141, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "C:\Users\Cedric\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "src\PoseEstimation\network\rtpose_vgg.py", line 161, in forward
    out1 = self.model0(x)
  File "C:\Users\Cedric\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\Cedric\Anaconda3\lib\site-packages\torch\nn\modules\container.py", line 92, in forward
    input = module(input)
  File "C:\Users\Cedric\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\Cedric\Anaconda3\lib\site-packages\torch\nn\modules\conv.py", line 320, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR

I'm on Windows 8, with CUDA 10, PyTorch 1.0, and cuDNN v7.4.2. I tried PyTorch 0.4.1 as well, but got the same error. I changed nThreads to 0 as you specified.

Do you have any idea?

IndexError: invalid index of a 0-dim tensor. Use tensor.item() to convert a 0-dim tensor to a Python number (train_pose2vid.py, line 84)

create web directory ./checkpoints/target/web...
End of epoch 1 / 40 Time Taken: 898 sec
Traceback (most recent call last):
File "train_pose2vid.py", line 127, in <module>
main()
File "train_pose2vid.py", line 84, in main
errors = {k: v.data[0] if not isinstance(v, int) else v for k, v in loss_dict.items()}
File "train_pose2vid.py", line 84, in <dictcomp>
errors = {k: v.data[0] if not isinstance(v, int) else v for k, v in loss_dict.items()}
IndexError: invalid index of a 0-dim tensor. Use tensor.item() to convert a 0-dim tensor to a Python number

Is this a typo in normalization.py?

Lines 14-17:

 target_img = cv2.imread('./data/target/train/train_label/00001.png')[:,:,0]
 target_img_rgb = cv2.imread('./data/target/train/train_img/00001.png')
 source_img = cv2.imread('./data/target/train/train_label/00001.png')[:,:,0]
 source_img_rgb = cv2.imread('./data/target/train/train_img/00001.png')

I think the source images and target images are not supposed to be the same.
Am I right?
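If so, a guess at the intended lines; the source paths below are hypothetical, so adjust them to wherever the source labels and images actually live:

    import cv2

    target_img = cv2.imread('./data/target/train/train_label/00001.png')[:, :, 0]
    target_img_rgb = cv2.imread('./data/target/train/train_img/00001.png')
    source_img = cv2.imread('./data/source/test_label/00001.png')[:, :, 0]  # hypothetical path
    source_img_rgb = cv2.imread('./data/source/test_img/00001.png')  # hypothetical path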

Too many path-setting errors in the face_enhancer part; nothing can run

There are too many path-setting errors in the face_enhancer part, so nothing can be run...

The image locations and path names are also very chaotic...

like: prepare.py

face_sync_dir = Path('../data/face/ ')
face_sync_dir.mkdir(exist_ok=True)
test_sync_dir = Path('../data/face/test_sync/ ')
test_sync_dir.mkdir(exist_ok=True)
test_real_dir = Path('../data/face/test_real/ ')
test_real_dir.mkdir(exist_ok=True)
test_img = Path('../data/target/images/ ')
test_img.mkdir(exist_ok=True)
test_label = Path('../data/target/test_label/ ')
test_label.mkdir(exist_ok=True)

import src.config.test_opt as opt

and main.py

image_folder = dataset.ImageFolderDataset(dataset_dir, cache=os.path.join(dataset_dir, 'local.db'))

and

self.images = sorted(os.listdir(os.path.join(root, 'test_real')))

and

self.root, self.images, self.size = pickle.load(f)

...

Any help is appreciated!
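One concrete observation: several of the quoted path strings end with a space inside the quotes (e.g. '../data/face/test_real/ '), which creates and later looks up directories whose names literally end in a space; the same trailing space is visible in the IsADirectoryError messages elsewhere on this page. A sketch of the cleaned-up setup:

    from pathlib import Path

    # Same directories as in prepare.py above, with the trailing spaces
    # removed so every later step resolves to the same folder.
    face_sync_dir = Path('../data/face/')
    face_sync_dir.mkdir(exist_ok=True)
    test_sync_dir = Path('../data/face/test_sync/')
    test_sync_dir.mkdir(exist_ok=True)
    test_real_dir = Path('../data/face/test_real/')
    test_real_dir.mkdir(exist_ok=True)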

Animation size has reached 20989202 bytes, exceeding the limit of 20971520.0. If you're sure you want a larger animation embedded, set the animation.embed_limit rc parameter to a larger value (in MB). This and further frames will be dropped.

Animation size has reached 20989202 bytes, exceeding the limit of 20971520.0. If you're sure you want a larger animation embedded, set the animation.embed_limit rc parameter to a larger value (in MB). This and further frames will be dropped.
Traceback (most recent call last):
File "make_gif.py", line 60, in
js_anim = HTML(anim.to_jshtml())
File "/home/home1/yqr/anaconda3/envs/pytorch/lib/python3.6/site-packages/matplotlib/animation.py", line 1398, in to_jshtml
self.save(str(path), writer=writer)
File "/home/home1/yqr/anaconda3/envs/pytorch/lib/python3.6/site-packages/matplotlib/animation.py", line 1175, in save
anim._draw_next_frame(d, blit=False)
File "/home/home1/yqr/anaconda3/envs/pytorch/lib/python3.6/site-packages/matplotlib/animation.py", line 1212, in _draw_next_frame
self._draw_frame(framedata)
File "/home/home1/yqr/anaconda3/envs/pytorch/lib/python3.6/site-packages/matplotlib/animation.py", line 1767, in _draw_frame
self._drawn_artists = self._func(framedata, *self._args)
File "make_gif.py", line 42, in animate
target_synth = io.imread(target_synth_paths[nframe])
IndexError: list index out of range

Has anybody met the same problem?
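For the embed-limit warning, a minimal sketch of the rc-parameter change the message itself suggests (it must run before anim.to_jshtml() is called):

    import matplotlib

    # The default embed limit is 20 MB; frames beyond it are dropped.
    matplotlib.rcParams['animation.embed_limit'] = 100  # limit in MB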

A TypeError in step 3

Hi, I have a problem here.

When I run step 3 (train_pose2vid.py), I encounter an error:

TypeError: tensor(1.3251, device='cuda:0') has type <class 'torch.Tensor'>, but expected one of: ((<class 'numbers.Real'>,),) for field Value.simple_value

How can I solve this error?

Does this have anything to do with the version of TensorFlow?

Thanks
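For reference, Value.simple_value belongs to TensorBoard's Summary protobuf, which only accepts plain Python numbers, so the usual fix is to convert the tensor before logging. A sketch with an illustrative logger call (scalar_summary is a hypothetical name, not necessarily this repo's API):

    import torch

    loss = torch.tensor(1.3251)  # stand-in for the logged value
    value = loss.item() if torch.is_tensor(loss) else float(loss)
    # logger.scalar_summary('loss', value, step)  # pass the float, not the tensor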

EOFError: Ran out of input

python make_source.py
Has generated 0 picetures
Has generated 100 picetures
Has generated 200 picetures
Bulding VGG19
src/PoseEstimation/network/rtpose_vgg.py:204: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
init.normal(m.weight, std=0.01)
src/PoseEstimation/network/rtpose_vgg.py:206: UserWarning: nn.init.constant is now deprecated in favor of nn.init.constant_.
init.constant(m.bias, 0.0)
src/PoseEstimation/network/rtpose_vgg.py:209: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
init.normal(self.model1_1[8].weight, std=0.01)
src/PoseEstimation/network/rtpose_vgg.py:210: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
init.normal(self.model1_2[8].weight, std=0.01)
src/PoseEstimation/network/rtpose_vgg.py:212: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
init.normal(self.model2_1[12].weight, std=0.01)
src/PoseEstimation/network/rtpose_vgg.py:213: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
init.normal(self.model3_1[12].weight, std=0.01)
src/PoseEstimation/network/rtpose_vgg.py:214: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
init.normal(self.model4_1[12].weight, std=0.01)
src/PoseEstimation/network/rtpose_vgg.py:215: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
init.normal(self.model5_1[12].weight, std=0.01)
src/PoseEstimation/network/rtpose_vgg.py:216: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
init.normal(self.model6_1[12].weight, std=0.01)
src/PoseEstimation/network/rtpose_vgg.py:218: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
init.normal(self.model2_2[12].weight, std=0.01)
src/PoseEstimation/network/rtpose_vgg.py:219: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
init.normal(self.model3_2[12].weight, std=0.01)
src/PoseEstimation/network/rtpose_vgg.py:220: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
init.normal(self.model4_2[12].weight, std=0.01)
src/PoseEstimation/network/rtpose_vgg.py:221: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
init.normal(self.model5_2[12].weight, std=0.01)
src/PoseEstimation/network/rtpose_vgg.py:222: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
init.normal(self.model6_2[12].weight, std=0.01)
Traceback (most recent call last):
File "make_source.py", line 54, in
model.load_state_dict(torch.load(weight_name))
File "/home/zhunan314/.local/lib/python3.6/site-packages/torch/serialization.py", line 303, in load
return _load(f, map_location, pickle_module)
File "/home/zhunan314/.local/lib/python3.6/site-packages/torch/serialization.py", line 459, in _load
magic_number = pickle_module.load(f)
EOFError: Ran out of input

Unable to download required files

Not able to download the required files; it looks like they have been deleted. Also, for users outside China it is impossible to download them, as signing up requires a Chinese phone number.

image_folder = dataset.ImageFolderDataset(dataset_dir, cache=os.path.join(dataset_dir, 'local.db'))

Traceback (most recent call last):
File "main.py", line 68, in
main(is_debug)
File "main.py", line 51, in main
image_folder = dataset.ImageFolderDataset(dataset_dir, cache=os.path.join(dataset_dir, 'local.db'))
File "XXX/EverybodyDanceNow_reproduce_pytorch/face_enhancer/dataset.py", line 18, in init
tmp = imread(os.path.join('/XXX/EverybodyDanceNow_reproduce_pytorch/data/face', 'test_real', self.images[0]))
File "/XXX/skimage/io/_io.py", line 62, in imread
img = call_plugin('imread', fname, plugin=plugin, **plugin_args)
File "/XXX/skimage/io/manage_plugins.py", line 214, in call_plugin
return func(*args, **kwargs)
File "/XXX/skimage/io/_plugins/pil_plugin.py", line 35, in imread
with open(fname, 'rb') as f:
IsADirectoryError: [Errno 21] Is a directory: '/XXX/EverybodyDanceNow_reproduce_pytorch/data/face/test_real/ '

Any help is appreciated!

_pickle.UnpicklingError: invalid load key, '\x0a'. python train_pose2vid.py

When I ran python train_pose2vid.py under this environment:

Python 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
'0.4.1'

I got the following error:

Traceback (most recent call last):
File "train_pose2vid.py", line 130, in
main()
File "train_pose2vid.py", line 44, in main
model = create_model(opt)
File "/data2/project_dancing_score/EverybodyDanceNow_reproduce_pytorch/src/pix2pixHD/models/models.py", line 15, in create_model
model.initialize(opt)
File "/data2//project_dancing_score/EverybodyDanceNow_reproduce_pytorch/src/pix2pixHD/models/pix2pixHD_model.py", line 83, in initialize
self.criterionVGG = networks.VGGLoss(self.gpu_ids)
File "/data2//project_dancing_score/EverybodyDanceNow_reproduce_pytorch/src/pix2pixHD/models/networks.py", line 118, in init
self.vgg = Vgg19().cuda()
File "/data2//project_dancing_score/EverybodyDanceNow_reproduce_pytorch/src/pix2pixHD/models/networks.py", line 392, in init
vgg_pretrained_features = models.vgg19(pretrained=True).features
File "/data//anaconda3/envs/py36t41/lib/python3.6/site-packages/torchvision/models/vgg.py", line 180, in vgg19
model.load_state_dict(model_zoo.load_url(model_urls['vgg19']))
File "/data//anaconda3/envs/py36t41/lib/python3.6/site-packages/torch/utils/model_zoo.py", line 66, in load_url
return torch.load(cached_file, map_location=map_location)
File "/data//anaconda3/envs/py36t41/lib/python3.6/site-packages/torch/serialization.py", line 358, in load
return _load(f, map_location, pickle_module)
File "/data/anaconda3/envs/py36t41/lib/python3.6/site-packages/torch/serialization.py", line 532, in _load
magic_number = pickle_module.load(f)
_pickle.UnpicklingError: invalid load key, '\x0a'.
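A common cause, offered as a guess: the cached VGG19 download was truncated or replaced by an HTML error page, so pickle hits a newline ('\x0a') where it expects the file's magic number. Deleting the cached file forces a clean re-download (cache location and file name assumed for torch 0.4.x):

    import os

    cached = os.path.expanduser('~/.torch/models/vgg19-dcbb9e9d.pth')
    if os.path.exists(cached):
        os.remove(cached)  # the next run re-downloads the weights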

A possible bug in normalization.py

In normalization.py, line 77:
source_cord_y, source_cord_x = pose_cord[0]

Are you sure about using the same pose coordinates across all test images?
I think it should be:
source_cord_y, source_cord_x = pose_cord[img_idx]

RuntimeError: CUDA error: out of memory

Windows 10 Env

Python 3.6.5

run:
python train_pose2vid.py

I got:

Traceback (most recent call last):
File "train_pose2vid.py", line 127, in <module>
main()
File "train_pose2vid.py", line 59, in main
Variable(data['image']), Variable(data['feat']), infer=save_fake)
File "d:\Anaconda3\envs\py365\lib\site-packages\torch\nn\modules\module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "D:\githubs\EverybodyDanceNow_reproduce_pytorch\src\pix2pixHD\models\pix2pixHD_model.py", line 195, in forward
loss_G_VGG = self.criterionVGG(fake_image, real_image) * self.opt.lambda_feat
File "d:\Anaconda3\envs\py365\lib\site-packages\torch\nn\modules\module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "D:\githubs\EverybodyDanceNow_reproduce_pytorch\src\pix2pixHD\models\networks.py", line 123, in forward
x_vgg, y_vgg = self.vgg(x), self.vgg(y)
File "d:\Anaconda3\envs\py365\lib\site-packages\torch\nn\modules\module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "D:\githubs\EverybodyDanceNow_reproduce_pytorch\src\pix2pixHD\models\networks.py", line 417, in forward
h_relu5 = self.slice5(h_relu4)
File "d:\Anaconda3\envs\py365\lib\site-packages\torch\nn\modules\module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "d:\Anaconda3\envs\py365\lib\site-packages\torch\nn\modules\container.py", line 91, in forward
input = module(input)
File "d:\Anaconda3\envs\py365\lib\site-packages\torch\nn\modules\module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "d:\Anaconda3\envs\py365\lib\site-packages\torch\nn\modules\conv.py", line 301, in forward
self.padding, self.dilation, self.groups)
RuntimeError: CUDA error: out of memory

Any help is appreciated!

About the face enhancer

In enhance.py there are two directories that haven't appeared in the previous steps. One is dataset_dir = '../data/face_yx_fang'; I just copied results/target/test_latest/images into this directory (the fake images generated in the previous step by running transfer.py).
The other is ckpt_dir = '../checkpoints/yxu_face', which I think is the same as ckpt_dir = '../checkpoints/face'. But there's still something wrong with the final video with face enhancement: it seems that the head position from pose_source_norm.npy is wrong. Does anyone have any idea about this?

Below are some sample frames from my current result (frames 00000, 00532, 00842; screenshots omitted).
