
pulse's People

Contributors

adamian98, cclauss, georgzoeller


pulse's Issues

about i18n

Need a Chinese README.md?
If needed, I can create a pull request. 😋

AssertionError

Loading Synthesis Network
Traceback (most recent call last):
  File "./run.py", line 79, in <module>
    for j,(HR,LR) in enumerate(model(ref_im,**kwargs)):
  File "/usr/local/src/pulse/PULSE.py", line 126, in forward
    loss_builder = LossBuilder(ref_im, loss_str, eps).cuda()
  File "/usr/local/src/pulse/loss.py", line 7, in __init__
    assert ref_im.shape[2]==ref_im.shape[3]
AssertionError
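The assertion comes from loss.py, which requires square reference images (height equal to width). One possible pre-processing fix, sketched here with a hypothetical helper (running align_face.py on the inputs should also produce square crops):

```python
# Center-crop inputs to a square before placing them in the input directory,
# so ref_im.shape[2] == ref_im.shape[3] holds. The helper below is an
# illustration, not part of the PULSE codebase.

def center_crop_box(width, height):
    """Return a (left, upper, right, lower) box that center-crops to a square."""
    side = min(width, height)
    left = (width - side) // 2
    upper = (height - side) // 2
    return (left, upper, left + side, upper + side)

# With Pillow (already a PULSE dependency):
# from PIL import Image
# im = Image.open("face.png")
# im.crop(center_crop_box(*im.size)).save("face_square.png")
```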

'no attribute ndim'

I was able to download the three files required by align_face.py and PULSE.py, and to downsample 3 images with no problem, but when I try to run run.py, I get an error:

Loading Synthesis Network
Downloading https://drive.google.com/uc?id=1xKDEzjfGPa1s1iDYucH4eSxLvl1MuLE1 ... done
Traceback (most recent call last):
  File "run.py", line 79, in <module>
    for j,(HR,LR) in enumerate(model(ref_im,**kwargs)):
  File "/home/pyimagesearch/Downloads/pulse-master/PULSE.py", line 114, in forward
    opt = SphericalOptimizer(opt_func, var_list, lr=learning_rate)
  File "/home/pyimagesearch/Downloads/pulse-master/SphericalOptimizer.py", line 17, in __init__
    self.radii = {param: (param.pow(2).sum(tuple(range(2,param.ndim)),keepdim=True)+1e-9).sqrt() for param in params}
  File "/home/pyimagesearch/Downloads/pulse-master/SphericalOptimizer.py", line 17, in <dictcomp>
    self.radii = {param: (param.pow(2).sum(tuple(range(2,param.ndim)),keepdim=True)+1e-9).sqrt() for param in params}
AttributeError: 'Tensor' object has no attribute 'ndim'


I'm using a Jetson Nano, with CUDA and all the modules and dependencies required by the program.

Any tips?
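`Tensor.ndim` was added in a later PyTorch release (around 1.2), so this error usually indicates an older PyTorch build, which is common in Jetson images. A likely workaround, sketched here, is to replace `param.ndim` in SphericalOptimizer.py with `param.dim()` (or `len(param.shape)`), which exists in all PyTorch versions:

```python
# Stand-in demonstration of the fix: the radius computation sums over every
# axis from 2 up to the tensor's rank, and the rank can be obtained without
# the .ndim attribute.

def radius_dims(shape):
    """Axes PULSE sums over when computing a parameter's radius."""
    ndim = len(shape)          # equivalent to param.dim() on a torch.Tensor
    return tuple(range(2, ndim))

# In SphericalOptimizer.py, change:
#   tuple(range(2, param.ndim))
# to:
#   tuple(range(2, param.dim()))
```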

About the learning rate schedule

Thanks for your excellent work.

I have a question about 'linear1cycledrop': why do you use a schedule like this, where the learning rate starts very low, peaks at steps/2, decreases linearly until near the end, and then drops linearly at a different rate?

Is this an important trick for this task? :-)
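For reference, a multiplier with exactly the shape the question describes can be sketched as follows. The constants here (0.1 floor, peak at 0.45·steps, final drop to 0.001) are illustrative and not necessarily the ones used in PULSE.py:

```python
# One-cycle-style schedule sketch: triangular ramp to a peak near steps/2,
# then a steeper linear drop over the last 10% of iterations.

def linear1cycledrop(x, steps):
    """Multiplier applied to the base learning rate at iteration x."""
    if x < 0.9 * steps:
        # triangular ramp: 0.1 at x=0, 1.0 at x=0.45*steps, back to 0.1
        return (9 * (1 - abs(x / (0.9 * steps) - 0.5) * 2) + 1) / 10
    # final phase: linear drop from 0.1 down to 0.001
    return 0.1 + (x - 0.9 * steps) / (0.1 * steps) * (0.001 - 0.1)
```

The intuition behind such schedules is the "one-cycle" policy: a warmup avoids large, poorly-conditioned early steps, while the final drop lets the optimizer settle into a minimum.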

Train pulse on our own dataset.

Hi,

Thanks for the great work. Can you explain how I can train PULSE on my own dataset? Which file contains the training code? I couldn't figure it out. Thanks.

Gaussian mean face looks bad

Thanks for sharing the code, great job! Here is my question:

Have you tested the Gaussian mean face? It looks quite bad on my side. Is there a large gap between this PyTorch version of StyleGAN and the original TensorFlow version?

_pickle.UnpicklingError: invalid load key, '\x01'.

Loading Synthesis Network
Traceback (most recent call last):
  File "run.py", line 58, in <module>
    model = PULSE(cache_dir=kwargs["cache_dir"])
  File "D:\pulse\pulse\PULSE.py", line 24, in __init__
    self.synthesis.load_state_dict(torch.load(f))
  File "D:\ProgramData\Anaconda3\lib\site-packages\torch\serialization.py", line 593, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "D:\ProgramData\Anaconda3\lib\site-packages\torch\serialization.py", line 763, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, '\x01'.

What should I do about it?
Thanks.
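An `invalid load key` from `torch.load` almost always means the cached weights file is not a valid PyTorch checkpoint, typically a partial download or an HTML error page saved by the Google Drive quota limiter. A quick sanity check before deleting the cache and re-downloading (file path is hypothetical):

```python
# If the first bytes of the "checkpoint" look like HTML, the download failed
# and the cache entry should be deleted and fetched again.

def looks_like_html(first_bytes):
    """True if a downloaded 'checkpoint' is actually an HTML page."""
    head = first_bytes.lstrip()[:15].lower()
    return head.startswith(b"<!doctype") or head.startswith(b"<html")

# with open("cache/synthesis.pt", "rb") as f:
#     if looks_like_html(f.read(64)):
#         print("Delete the cache directory and download the weights again.")
```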

StyleGAN Models

Will this work with a custom trained StyleGAN model using NVidia's StyleGAN repo? Will StyleGAN2 work?

Google Drive quota exceeded when running align_face

Cannot run align_face.

(pulse) root@e2bbc3333453:/home/pulse# python align_face.py
Downloading Shape Predictor
Downloading https://drive.google.com/uc?id=1huhv8PYpNNKbGCLOaYUjOgR1pY5pmbJx ............ failed
Traceback (most recent call last):
  File "align_face.py", line 34, in <module>
    f=open_url("https://drive.google.com/uc?id=1huhv8PYpNNKbGCLOaYUjOgR1pY5pmbJx", cache_dir=cache_dir, return_path=True)
  File "/home/pulse/drive.py", line 66, in open_url
    raise IOError("Google Drive quota exceeded")
OSError: Google Drive quota exceeded

Probably sharing a new link would fix this.

On another matter, maybe it would be best to warn people to save their input images as PNG. It took me a while to understand why my images weren't processed when running "python run.py".
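The report above suggests run.py only picks up .png files from its input directory, so other formats are silently skipped. A small conversion pass avoids the confusion (this assumes Pillow, which PULSE already depends on; the directory name is illustrative):

```python
# Convert any JPEG inputs to PNG so run.py will find them.

from pathlib import PurePosixPath

def png_name(path):
    """Target filename for a converted input image."""
    return str(PurePosixPath(path).with_suffix(".png"))

# from PIL import Image
# from pathlib import Path
# for p in Path("input").glob("*.jpg"):
#     Image.open(p).convert("RGB").save(png_name(p))
```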

pre-trained model

I can’t download the pre-trained model here. How do I download it?

Can I transfer my own stylegan model?

I trained my own StyleGAN model and converted the pkl file to pt format using the stylegan-pytorch project. But when I run PULSE, it comes up with the following errors:

Loading Synthesis Network
Traceback (most recent call last):
  File "run.py", line 58, in <module>
    model = PULSE(cache_dir=kwargs["cache_dir"])
  File "H:\Pulse\PULSE.py", line 25, in __init__
    self.synthesis.load_state_dict(torch.load("./models/karras2019stylegan-ffhq-1024x1024.for_g_all.pt"))
  File "D:\Anaconda3\envs\pulse\lib\site-packages\torch\nn\modules\module.py", line 846, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for G_synthesis:
Missing key(s) in state_dict: "torgb.weight", "torgb.bias", "blocks.4x4.const", "blocks.4x4.bias", "blocks.4x4.epi1.noise.weight", "blocks.4x4.epi1.style_mod.lin.weight", "blocks.4x4.epi1.style_mod.lin.bias", "blocks.4x4.conv.weight", "blocks.4x4.conv.bias", "blocks.4x4.epi2.noise.weight", "blocks.4x4.epi2.style_mod.lin.weight", "blocks.4x4.epi2.style_mod.lin.bias", "blocks.8x8.conv0_up.weight", "blocks.8x8.conv0_up.bias", "blocks.8x8.conv0_up.intermediate.kernel", "blocks.8x8.epi1.noise.weight", "blocks.8x8.epi1.style_mod.lin.weight", "blocks.8x8.epi1.style_mod.lin.bias", "blocks.8x8.conv1.weight", "blocks.8x8.conv1.bias", "blocks.8x8.epi2.noise.weight", "blocks.8x8.epi2.style_mod.lin.weight", "blocks.8x8.epi2.style_mod.lin.bias", "blocks.16x16.conv0_up.weight", "blocks.16x16.conv0_up.bias", "blocks.16x16.conv0_up.intermediate.kernel", "blocks.16x16.epi1.noise.weight", "blocks.16x16.epi1.style_mod.lin.weight", "blocks.16x16.epi1.style_mod.lin.bias", "blocks.16x16.conv1.weight", "blocks.16x16.conv1.bias", "blocks.16x16.epi2.noise.weight", "blocks.16x16.epi2.style_mod.lin.weight", "blocks.16x16.epi2.style_mod.lin.bias", "blocks.32x32.conv0_up.weight", "blocks.32x32.conv0_up.bias", "blocks.32x32.conv0_up.intermediate.kernel", "blocks.32x32.epi1.noise.weight", "blocks.32x32.epi1.style_mod.lin.weight", "blocks.32x32.epi1.style_mod.lin.bias", "blocks.32x32.conv1.weight", "blocks.32x32.conv1.bias", "blocks.32x32.epi2.noise.weight", "blocks.32x32.epi2.style_mod.lin.weight", "blocks.32x32.epi2.style_mod.lin.bias", "blocks.64x64.conv0_up.weight", "blocks.64x64.conv0_up.bias", "blocks.64x64.conv0_up.intermediate.kernel", "blocks.64x64.epi1.noise.weight", "blocks.64x64.epi1.style_mod.lin.weight", "blocks.64x64.epi1.style_mod.lin.bias", "blocks.64x64.conv1.weight", "blocks.64x64.conv1.bias", "blocks.64x64.epi2.noise.weight", "blocks.64x64.epi2.style_mod.lin.weight", "blocks.64x64.epi2.style_mod.lin.bias", "blocks.128x128.conv0_up.weight", "blocks.128x128.conv0_up.bias", 
"blocks.128x128.conv0_up.intermediate.kernel", "blocks.128x128.epi1.noise.weight", "blocks.128x128.epi1.style_mod.lin.weight", "blocks.128x128.epi1.style_mod.lin.bias", "blocks.128x128.conv1.weight", "blocks.128x128.conv1.bias", "blocks.128x128.epi2.noise.weight", "blocks.128x128.epi2.style_mod.lin.weight", "blocks.128x128.epi2.style_mod.lin.bias", "blocks.256x256.conv0_up.weight", "blocks.256x256.conv0_up.bias", "blocks.256x256.conv0_up.intermediate.kernel", "blocks.256x256.epi1.noise.weight", "blocks.256x256.epi1.style_mod.lin.weight", "blocks.256x256.epi1.style_mod.lin.bias", "blocks.256x256.conv1.weight", "blocks.256x256.conv1.bias", "blocks.256x256.epi2.noise.weight", "blocks.256x256.epi2.style_mod.lin.weight", "blocks.256x256.epi2.style_mod.lin.bias", "blocks.512x512.conv0_up.weight", "blocks.512x512.conv0_up.bias", "blocks.512x512.conv0_up.intermediate.kernel", "blocks.512x512.epi1.noise.weight", "blocks.512x512.epi1.style_mod.lin.weight", "blocks.512x512.epi1.style_mod.lin.bias", "blocks.512x512.conv1.weight", "blocks.512x512.conv1.bias", "blocks.512x512.epi2.noise.weight", "blocks.512x512.epi2.style_mod.lin.weight", "blocks.512x512.epi2.style_mod.lin.bias", "blocks.1024x1024.conv0_up.weight", "blocks.1024x1024.conv0_up.bias", "blocks.1024x1024.conv0_up.intermediate.kernel", "blocks.1024x1024.epi1.noise.weight", "blocks.1024x1024.epi1.style_mod.lin.weight", "blocks.1024x1024.epi1.style_mod.lin.bias", "blocks.1024x1024.conv1.weight", "blocks.1024x1024.conv1.bias", "blocks.1024x1024.epi2.noise.weight", "blocks.1024x1024.epi2.style_mod.lin.weight", "blocks.1024x1024.epi2.style_mod.lin.bias".
Unexpected key(s) in state_dict: "g_mapping.dense0.weight", "g_mapping.dense0.bias", "g_mapping.dense1.weight", "g_mapping.dense1.bias", "g_mapping.dense2.weight", "g_mapping.dense2.bias", "g_mapping.dense3.weight", "g_mapping.dense3.bias", "g_mapping.dense4.weight", "g_mapping.dense4.bias", "g_mapping.dense5.weight", "g_mapping.dense5.bias", "g_mapping.dense6.weight", "g_mapping.dense6.bias", "g_mapping.dense7.weight", "g_mapping.dense7.bias", "g_synthesis.torgb.weight", "g_synthesis.torgb.bias", "g_synthesis.blocks.4x4.const", "g_synthesis.blocks.4x4.bias", "g_synthesis.blocks.4x4.epi1.top_epi.noise.weight", "g_synthesis.blocks.4x4.epi1.style_mod.lin.weight", "g_synthesis.blocks.4x4.epi1.style_mod.lin.bias", "g_synthesis.blocks.4x4.conv.weight", "g_synthesis.blocks.4x4.conv.bias", "g_synthesis.blocks.4x4.epi2.top_epi.noise.weight", "g_synthesis.blocks.4x4.epi2.style_mod.lin.weight", "g_synthesis.blocks.4x4.epi2.style_mod.lin.bias", "g_synthesis.blocks.8x8.conv0_up.weight", "g_synthesis.blocks.8x8.conv0_up.bias", "g_synthesis.blocks.8x8.conv0_up.intermediate.kernel", "g_synthesis.blocks.8x8.epi1.top_epi.noise.weight", "g_synthesis.blocks.8x8.epi1.style_mod.lin.weight", "g_synthesis.blocks.8x8.epi1.style_mod.lin.bias", "g_synthesis.blocks.8x8.conv1.weight", "g_synthesis.blocks.8x8.conv1.bias", "g_synthesis.blocks.8x8.epi2.top_epi.noise.weight", "g_synthesis.blocks.8x8.epi2.style_mod.lin.weight", "g_synthesis.blocks.8x8.epi2.style_mod.lin.bias", "g_synthesis.blocks.16x16.conv0_up.weight", "g_synthesis.blocks.16x16.conv0_up.bias", "g_synthesis.blocks.16x16.conv0_up.intermediate.kernel", "g_synthesis.blocks.16x16.epi1.top_epi.noise.weight", "g_synthesis.blocks.16x16.epi1.style_mod.lin.weight", "g_synthesis.blocks.16x16.epi1.style_mod.lin.bias", "g_synthesis.blocks.16x16.conv1.weight", "g_synthesis.blocks.16x16.conv1.bias", "g_synthesis.blocks.16x16.epi2.top_epi.noise.weight", "g_synthesis.blocks.16x16.epi2.style_mod.lin.weight", 
"g_synthesis.blocks.16x16.epi2.style_mod.lin.bias", "g_synthesis.blocks.32x32.conv0_up.weight", "g_synthesis.blocks.32x32.conv0_up.bias", "g_synthesis.blocks.32x32.conv0_up.intermediate.kernel", "g_synthesis.blocks.32x32.epi1.top_epi.noise.weight", "g_synthesis.blocks.32x32.epi1.style_mod.lin.weight", "g_synthesis.blocks.32x32.epi1.style_mod.lin.bias", "g_synthesis.blocks.32x32.conv1.weight", "g_synthesis.blocks.32x32.conv1.bias", "g_synthesis.blocks.32x32.epi2.top_epi.noise.weight", "g_synthesis.blocks.32x32.epi2.style_mod.lin.weight", "g_synthesis.blocks.32x32.epi2.style_mod.lin.bias", "g_synthesis.blocks.64x64.conv0_up.weight", "g_synthesis.blocks.64x64.conv0_up.bias", "g_synthesis.blocks.64x64.conv0_up.intermediate.kernel", "g_synthesis.blocks.64x64.epi1.top_epi.noise.weight", "g_synthesis.blocks.64x64.epi1.style_mod.lin.weight", "g_synthesis.blocks.64x64.epi1.style_mod.lin.bias", "g_synthesis.blocks.64x64.conv1.weight", "g_synthesis.blocks.64x64.conv1.bias", "g_synthesis.blocks.64x64.epi2.top_epi.noise.weight", "g_synthesis.blocks.64x64.epi2.style_mod.lin.weight", "g_synthesis.blocks.64x64.epi2.style_mod.lin.bias", "g_synthesis.blocks.128x128.conv0_up.weight", "g_synthesis.blocks.128x128.conv0_up.bias", "g_synthesis.blocks.128x128.conv0_up.intermediate.kernel", "g_synthesis.blocks.128x128.epi1.top_epi.noise.weight", "g_synthesis.blocks.128x128.epi1.style_mod.lin.weight", "g_synthesis.blocks.128x128.epi1.style_mod.lin.bias", "g_synthesis.blocks.128x128.conv1.weight", "g_synthesis.blocks.128x128.conv1.bias", "g_synthesis.blocks.128x128.epi2.top_epi.noise.weight", "g_synthesis.blocks.128x128.epi2.style_mod.lin.weight", "g_synthesis.blocks.128x128.epi2.style_mod.lin.bias", "g_synthesis.blocks.256x256.conv0_up.weight", "g_synthesis.blocks.256x256.conv0_up.bias", "g_synthesis.blocks.256x256.conv0_up.intermediate.kernel", "g_synthesis.blocks.256x256.epi1.top_epi.noise.weight", "g_synthesis.blocks.256x256.epi1.style_mod.lin.weight", 
"g_synthesis.blocks.256x256.epi1.style_mod.lin.bias", "g_synthesis.blocks.256x256.conv1.weight", "g_synthesis.blocks.256x256.conv1.bias", "g_synthesis.blocks.256x256.epi2.top_epi.noise.weight", "g_synthesis.blocks.256x256.epi2.style_mod.lin.weight", "g_synthesis.blocks.256x256.epi2.style_mod.lin.bias", "g_synthesis.blocks.512x512.conv0_up.weight", "g_synthesis.blocks.512x512.conv0_up.bias", "g_synthesis.blocks.512x512.conv0_up.intermediate.kernel", "g_synthesis.blocks.512x512.epi1.top_epi.noise.weight", "g_synthesis.blocks.512x512.epi1.style_mod.lin.weight", "g_synthesis.blocks.512x512.epi1.style_mod.lin.bias", "g_synthesis.blocks.512x512.conv1.weight", "g_synthesis.blocks.512x512.conv1.bias", "g_synthesis.blocks.512x512.epi2.top_epi.noise.weight", "g_synthesis.blocks.512x512.epi2.style_mod.lin.weight", "g_synthesis.blocks.512x512.epi2.style_mod.lin.bias", "g_synthesis.blocks.1024x1024.conv0_up.weight", "g_synthesis.blocks.1024x1024.conv0_up.bias", "g_synthesis.blocks.1024x1024.conv0_up.intermediate.kernel", "g_synthesis.blocks.1024x1024.epi1.top_epi.noise.weight", "g_synthesis.blocks.1024x1024.epi1.style_mod.lin.weight", "g_synthesis.blocks.1024x1024.epi1.style_mod.lin.bias", "g_synthesis.blocks.1024x1024.conv1.weight", "g_synthesis.blocks.1024x1024.conv1.bias", "g_synthesis.blocks.1024x1024.epi2.top_epi.noise.weight", "g_synthesis.blocks.1024x1024.epi2.style_mod.lin.weight", "g_synthesis.blocks.1024x1024.epi2.style_mod.lin.bias".

So, is there any way I can make this work?
Full respect.
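The missing and unexpected key lists differ only by a `g_synthesis.` prefix and an extra `.top_epi` level, which suggests the converted checkpoint stores the full generator (mapping plus synthesis) while PULSE loads only the synthesis network. A hedged sketch of a key remap; verify that the two module layouts really correspond before trusting it:

```python
# Keep only synthesis weights, renamed to the layout PULSE's G_synthesis
# expects. This assumes the underlying parameter shapes already match.

def remap_keys(state_dict):
    out = {}
    for k, v in state_dict.items():
        if not k.startswith("g_synthesis."):
            continue                           # drop g_mapping.* entries
        k = k[len("g_synthesis."):]            # strip the submodule prefix
        k = k.replace(".top_epi.noise.", ".noise.")  # flatten the epilogue
        out[k] = v
    return out

# self.synthesis.load_state_dict(remap_keys(torch.load(path)))
```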

macOS CUDA issue

Platform: MacBook Pro 16-inch
AssertionError: Torch not compiled with CUDA enabled


problem creating a Conda environment from the provided YAML

conda env create -f environment.yml

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/conda/exceptions.py", line 1079, in __call__
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/conda_env/cli/main.py", line 80, in do_call
    exit_code = getattr(module, func_name)(args, parser)
  File "/usr/local/lib/python3.7/site-packages/conda_env/cli/main_create.py", line 80, in execute
    directory=os.getcwd())
  File "/usr/local/lib/python3.7/site-packages/conda_env/specs/__init__.py", line 40, in detect
    if spec.can_handle():
  File "/usr/local/lib/python3.7/site-packages/conda_env/specs/yaml_file.py", line 18, in can_handle
    self._environment = env.from_file(self.filename)
  File "/usr/local/lib/python3.7/site-packages/conda_env/env.py", line 151, in from_file
    return from_yaml(yamlstr, filename=filename)
  File "/usr/local/lib/python3.7/site-packages/conda_env/env.py", line 136, in from_yaml
    data = yaml_load_standard(yamlstr)
  File "/usr/local/lib/python3.7/site-packages/conda/common/serialize.py", line 76, in yaml_load_standard
    return yaml.load(string, Loader=yaml.Loader, version="1.2")
  File "/usr/local/lib/python3.7/site-packages/ruamel_yaml/main.py", line 935, in load
    return loader._constructor.get_single_data()
  File "/usr/local/lib/python3.7/site-packages/ruamel_yaml/constructor.py", line 109, in get_single_data
    node = self.composer.get_single_node()
  File "/usr/local/lib/python3.7/site-packages/ruamel_yaml/composer.py", line 78, in get_single_node
    document = self.compose_document()
  File "/usr/local/lib/python3.7/site-packages/ruamel_yaml/composer.py", line 104, in compose_document
    self.parser.get_event()
  File "/usr/local/lib/python3.7/site-packages/ruamel_yaml/parser.py", line 163, in get_event
    self.current_event = self.state()
  File "/usr/local/lib/python3.7/site-packages/ruamel_yaml/parser.py", line 239, in parse_document_end
    token = self.scanner.peek_token()
  File "/usr/local/lib/python3.7/site-packages/ruamel_yaml/scanner.py", line 182, in peek_token
    self.fetch_more_tokens()
  File "/usr/local/lib/python3.7/site-packages/ruamel_yaml/scanner.py", line 282, in fetch_more_tokens
    return self.fetch_value()
  File "/usr/local/lib/python3.7/site-packages/ruamel_yaml/scanner.py", line 655, in fetch_value
    self.reader.get_mark(),
ruamel_yaml.scanner.ScannerError: mapping values are not allowed here
  in "<unicode string>", line 32, column 44:
     ... ame="description" content="PULSE: Self-Supervised Photo Upsampli ... 
                                         ^ (line: 32)

$ /usr/local/bin/conda-env create -f environment.yml

environment variables:
CIO_TEST=
CONDA_AUTO_UPDATE_CONDA=false
CONDA_ROOT=/usr/local
LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
LIBRARY_PATH=/usr/local/cuda/lib64/stubs
PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/b
in:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-
sdk/bin:/opt/bin
PYTHONPATH=/env/python
PYTHONWARNINGS=ignore:::pip._internal.cli.base_command
REQUESTS_CA_BUNDLE=
SSL_CERT_FILE=

 active environment : None
   user config file : /root/.condarc

populated config files : /root/.condarc
conda version : 4.8.2
conda-build version : not installed
python version : 3.7.6.final.0
virtual packages : __glibc=2.27
base environment : /usr/local (writable)
channel URLs : https://conda.anaconda.org/conda-forge/linux-64
https://conda.anaconda.org/conda-forge/noarch
https://repo.anaconda.com/pkgs/main/linux-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/linux-64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /usr/local/pkgs
/root/.conda/pkgs
envs directories : /usr/local/envs
/root/.conda/envs
platform : linux-64
user-agent : conda/4.8.2 requests/2.22.0 CPython/3.7.6 Linux/4.19.104+ ubuntu/18.04.3 glibc/2.27
UID:GID : 0:0
netrc file : None
offline mode : False

An unexpected error has occurred. Conda has prepared the above report.

Error

When I do "conda env create -n pulse -f pulse.yml"
I get this very long error: Ran pip subprocess with arguments:
['D:\Program Files\Anaconda\envs\pulse\python.exe', '-m', 'pip', 'install', '-U', '-r', 'C:\Users\gijse\Downloads\PULSE\pulse-master\condaenv.93ram30n.requirements.txt']
Pip subprocess output:
Collecting dlib==19.19.0
Downloading dlib-19.19.0.tar.gz (3.2 MB)
Building wheels for collected packages: dlib
Building wheel for dlib (setup.py): started
Building wheel for dlib (setup.py): finished with status 'error'
Running setup.py clean for dlib
Failed to build dlib
Installing collected packages: dlib
Running setup.py install for dlib: started
Running setup.py install for dlib: finished with status 'error'

Pip subprocess error:
ERROR: Command errored out with exit status 1:
command: 'D:\Program Files\Anaconda\envs\pulse\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\gijse\AppData\Local\Temp\pip-install-bh9nnp7l\dlib\setup.py'"'"'; file='"'"'C:\Users\gijse\AppData\Local\Temp\pip-install-bh9nnp7l\dlib\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\gijse\AppData\Local\Temp\pip-wheel-i0g75ms_'
cwd: C:\Users\gijse\AppData\Local\Temp\pip-install-bh9nnp7l\dlib
Complete output (53 lines):
running bdist_wheel
running build
running build_py
package init file 'dlib\__init__.py' not found (or not a regular file)
running build_ext
Traceback (most recent call last):
File "C:\Users\gijse\AppData\Local\Temp\pip-install-bh9nnp7l\dlib\setup.py", line 120, in get_cmake_version
out = subprocess.check_output(['cmake', '--version'])
File "D:\Program Files\Anaconda\envs\pulse\lib\subprocess.py", line 411, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "D:\Program Files\Anaconda\envs\pulse\lib\subprocess.py", line 489, in run
with Popen(*popenargs, **kwargs) as process:
File "D:\Program Files\Anaconda\envs\pulse\lib\subprocess.py", line 854, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "D:\Program Files\Anaconda\envs\pulse\lib\subprocess.py", line 1307, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] The system cannot find the file specified

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\gijse\AppData\Local\Temp\pip-install-bh9nnp7l\dlib\setup.py", line 223, in <module>
setup(
File "D:\Program Files\Anaconda\envs\pulse\lib\site-packages\setuptools\__init__.py", line 161, in setup
return distutils.core.setup(**attrs)
File "D:\Program Files\Anaconda\envs\pulse\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "D:\Program Files\Anaconda\envs\pulse\lib\distutils\dist.py", line 966, in run_commands
self.run_command(cmd)
File "D:\Program Files\Anaconda\envs\pulse\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "D:\Program Files\Anaconda\envs\pulse\lib\site-packages\wheel\bdist_wheel.py", line 223, in run
self.run_command('build')
File "D:\Program Files\Anaconda\envs\pulse\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "D:\Program Files\Anaconda\envs\pulse\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "D:\Program Files\Anaconda\envs\pulse\lib\distutils\command\build.py", line 135, in run
self.run_command(cmd_name)
File "D:\Program Files\Anaconda\envs\pulse\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "D:\Program Files\Anaconda\envs\pulse\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\Users\gijse\AppData\Local\Temp\pip-install-bh9nnp7l\dlib\setup.py", line 129, in run
cmake_version = self.get_cmake_version()
File "C:\Users\gijse\AppData\Local\Temp\pip-install-bh9nnp7l\dlib\setup.py", line 122, in get_cmake_version
raise RuntimeError("\n*****************************************************************\n" +
RuntimeError:


CMake must be installed to build the following extensions: dlib



ERROR: Failed building wheel for dlib
ERROR: Command errored out with exit status 1:
command: 'D:\Program Files\Anaconda\envs\pulse\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\gijse\AppData\Local\Temp\pip-install-bh9nnp7l\dlib\setup.py'"'"'; file='"'"'C:\Users\gijse\AppData\Local\Temp\pip-install-bh9nnp7l\dlib\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record 'C:\Users\gijse\AppData\Local\Temp\pip-record-637lz1en\install-record.txt' --single-version-externally-managed --compile --install-headers 'D:\Program Files\Anaconda\envs\pulse\Include\dlib'
cwd: C:\Users\gijse\AppData\Local\Temp\pip-install-bh9nnp7l\dlib
Complete output (55 lines):
running install
running build
running build_py
package init file 'dlib\__init__.py' not found (or not a regular file)
running build_ext
Traceback (most recent call last):
File "C:\Users\gijse\AppData\Local\Temp\pip-install-bh9nnp7l\dlib\setup.py", line 120, in get_cmake_version
out = subprocess.check_output(['cmake', '--version'])
File "D:\Program Files\Anaconda\envs\pulse\lib\subprocess.py", line 411, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "D:\Program Files\Anaconda\envs\pulse\lib\subprocess.py", line 489, in run
with Popen(*popenargs, **kwargs) as process:
File "D:\Program Files\Anaconda\envs\pulse\lib\subprocess.py", line 854, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "D:\Program Files\Anaconda\envs\pulse\lib\subprocess.py", line 1307, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] The system cannot find the file specified

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\gijse\AppData\Local\Temp\pip-install-bh9nnp7l\dlib\setup.py", line 223, in <module>
    setup(
  File "D:\Program Files\Anaconda\envs\pulse\lib\site-packages\setuptools\__init__.py", line 161, in setup
    return distutils.core.setup(**attrs)
  File "D:\Program Files\Anaconda\envs\pulse\lib\distutils\core.py", line 148, in setup
    dist.run_commands()
  File "D:\Program Files\Anaconda\envs\pulse\lib\distutils\dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "D:\Program Files\Anaconda\envs\pulse\lib\distutils\dist.py", line 985, in run_command
    cmd_obj.run()
  File "D:\Program Files\Anaconda\envs\pulse\lib\site-packages\setuptools\command\install.py", line 61, in run
    return orig.install.run(self)
  File "D:\Program Files\Anaconda\envs\pulse\lib\distutils\command\install.py", line 545, in run
    self.run_command('build')
  File "D:\Program Files\Anaconda\envs\pulse\lib\distutils\cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "D:\Program Files\Anaconda\envs\pulse\lib\distutils\dist.py", line 985, in run_command
    cmd_obj.run()
  File "D:\Program Files\Anaconda\envs\pulse\lib\distutils\command\build.py", line 135, in run
    self.run_command(cmd_name)
  File "D:\Program Files\Anaconda\envs\pulse\lib\distutils\cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "D:\Program Files\Anaconda\envs\pulse\lib\distutils\dist.py", line 985, in run_command
    cmd_obj.run()
  File "C:\Users\gijse\AppData\Local\Temp\pip-install-bh9nnp7l\dlib\setup.py", line 129, in run
    cmake_version = self.get_cmake_version()
  File "C:\Users\gijse\AppData\Local\Temp\pip-install-bh9nnp7l\dlib\setup.py", line 122, in get_cmake_version
    raise RuntimeError("\n*******************************************************************\n" +
RuntimeError:
*******************************************************************
 CMake must be installed to build the following extensions: dlib
*******************************************************************

----------------------------------------

ERROR: Command errored out with exit status 1: 'D:\Program Files\Anaconda\envs\pulse\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\gijse\AppData\Local\Temp\pip-install-bh9nnp7l\dlib\setup.py'"'"'; file='"'"'C:\Users\gijse\AppData\Local\Temp\pip-install-bh9nnp7l\dlib\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record 'C:\Users\gijse\AppData\Local\Temp\pip-record-637lz1en\install-record.txt' --single-version-externally-managed --compile --install-headers 'D:\Program Files\Anaconda\envs\pulse\Include\dlib' Check the logs for full command output.
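The root cause is the final RuntimeError: pip tried to compile dlib 19.19.0 from source and no CMake was found on PATH. Installing CMake before re-creating the environment should fix the build; a prebuilt dlib from conda-forge sidesteps the compile entirely. Package names below are the standard conda-forge ones, but check your channel configuration:

```shell
conda install -c conda-forge cmake   # or: pip install cmake
conda install -c conda-forge dlib    # prebuilt binary, avoids the local build
```

On Windows, building dlib from source additionally requires a C++ toolchain (e.g. Visual Studio Build Tools), which is another reason to prefer the prebuilt package.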

What time does the quota reset?

I have been trying to test a single photo for a couple of days now, but the quota is always exceeded.

What is the time (+timezone) the daily quota resets? This way I can try again right at that moment.

Best regards

Error in loss

I'm running on a single image and getting the following error.

Traceback (most recent call last):
  File "run.py", line 79, in <module>
    for j,(HR,LR) in enumerate(model(ref_im,**kwargs)):
  File "/home/ubuntu/pulse/PULSE.py", line 149, in forward
    loss, loss_dict = loss_builder(latent_in, gen_im)
  File "/home/ubuntu/miniconda3/envs/pulse/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/pulse/loss.py", line 54, in forward
    tmp_loss = loss_fun_dict[loss_type](**var_dict)
  File "/home/ubuntu/pulse/loss.py", line 23, in _loss_l2
    return ((gen_im_lr - ref_im).pow(2).mean((1, 2, 3)).clamp(min=self.eps).sum())
RuntimeError: The size of tensor a (3) must match the size of tensor b (4) at non-singleton dimension 1

I printed the variables gen_im_lr & ref_im and got the following:

torch.Size([1, 3, 32, 32]) torch.Size([1, 4, 32, 32])
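A second dimension of 4 in `ref_im` means the reference image carries an alpha channel (RGBA PNG), while the generator output `gen_im_lr` is 3-channel RGB. Converting the inputs to RGB before downsampling should resolve the shape mismatch; a minimal sketch with Pillow (file path hypothetical):

```python
# Drop the alpha channel so the reference tensor becomes [1, 3, H, W].

from PIL import Image

def to_rgb(path):
    """Re-save an image with any alpha channel removed (RGBA/LA -> RGB)."""
    Image.open(path).convert("RGB").save(path)

# to_rgb("input/face.png")
```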

About the CelebA-HQ dataset used in the experiment

Are the experiment results reported on the whole CelebA-HQ dataset? It consists of 30,000 images, which would take a long time. I wonder whether you used a portion of the dataset, as in "Image Processing Using Multi-Code GAN Prior" (300 images for testing).

CUDA out of memory

I ran align_face.py to downscale the pics. When I process 2 pics with run.py, I get this result:
Loading Synthesis Network
Optimizing
Traceback (most recent call last):
  File "./run.py", line 84, in <module>
    for j,(HR,LR) in enumerate(model(ref_im,**kwargs)):
  File "/usr/local/src/pulse/PULSE.py", line 149, in forward
    gen_im = (self.synthesis(latent_in, noise)+1)/2
  File "/root/anaconda3/envs/pulse/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/src/pulse/stylegan.py", line 408, in forward
    x = m(x, dlatents_in[:, 2*i:2*i+2], noise_in[2*i:2*i+2])
  File "/root/anaconda3/envs/pulse/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/src/pulse/stylegan.py", line 337, in forward
    x = self.epi1(x, dlatents_in_range[:, 0], noise_in_range[0])
  File "/root/anaconda3/envs/pulse/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/src/pulse/stylegan.py", line 277, in forward
    x = self.top_epi(x)
  File "/root/anaconda3/envs/pulse/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/anaconda3/envs/pulse/lib/python3.8/site-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "/root/anaconda3/envs/pulse/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/anaconda3/envs/pulse/lib/python3.8/site-packages/torch/nn/modules/instancenorm.py", line 47, in forward
    return F.instance_norm(
  File "/root/anaconda3/envs/pulse/lib/python3.8/site-packages/torch/nn/functional.py", line 1943, in instance_norm
    return torch.instance_norm(
RuntimeError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 1024.00 MiB total capacity; 400.35 MiB already allocated; 2.86 MiB free; 462.00 MiB reserved in total by PyTorch)

My GPU has 1 GB of memory. I changed the steps to 10, but the error persists. How can I resolve this? Thanks.

CUDA Out of memory with 2 GB of VRAM

If I try to use the pre-trained model I get this error

RuntimeError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 1.95 GiB total capacity; 1.19 GiB already allocated; 57.31 MiB free; 1.25 GiB reserved in total by PyTorch)

My graphics card is an NVIDIA GeForce GTX 750 Ti with 2 GB of VRAM: is there a configuration that allows the model to run with this amount of VRAM?

RuntimeError: CUDA out of memory.

It usually works the first time, but then every time after it gives me this error:

  File "run.py", line 79, in <module>
    for j,(HR,LR) in enumerate(model(ref_im,**kwargs)):
  File "/pulse/PULSE.py", line 149, in forward
    gen_im = (self.synthesis(latent_in, noise)+1)/2
  File "/anaconda3/envs/pulse/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/pulse/stylegan.py", line 408, in forward
    x = m(x, dlatents_in[:, 2*i:2*i+2], noise_in[2*i:2*i+2])
  File "/anaconda3/envs/pulse/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/pulse/stylegan.py", line 339, in forward
    x = self.epi2(x, dlatents_in_range[:, 1], noise_in_range[1])
  File "/anaconda3/envs/pulse/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/pulse/stylegan.py", line 279, in forward
    x = self.style_mod(x, dlatents_in_slice)
  File "/anaconda3/envs/pulse/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/pulse/stylegan.py", line 135, in forward
    x = x * (style[:, 0] + 1.) + style[:, 1]
RuntimeError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 2.94 GiB total capacity; 1.41 GiB already allocated; 65.19 MiB free; 1.55 GiB reserved in total by PyTorch)

even with the same picture that worked before.

nvidia-smi doesn't show any usage from python or pytorch.
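If memory is not released between images even though nvidia-smi shows no process, it is usually being held by the caching allocator inside the still-running Python process. A hedged sketch of freeing that cache between runs (assuming PyTorch is available; `free_cuda_cache` is a hypothetical helper, not part of PULSE):

```python
import gc

def free_cuda_cache():
    # Drop stale Python references first, then ask PyTorch's caching
    # allocator to return unused blocks to the driver. Returns True
    # only when a CUDA device was actually available to clear.
    gc.collect()
    try:
        import torch
    except ImportError:
        return False
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
        return True
    return False
```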

WebApp again?

Unfortunately I don't have a GPU, so I would like to suggest bringing back the web app so people can try PULSE directly. Any chance?

What exactly is the tile_latent argument supposed to do

I'm not very familiar with the StyleGAN architecture itself; I was just trying this out for fun. Of all the CLI arguments, the only one without a default value was tile_latent. Looking at the rest of the code, it's clear this is supposed to be a boolean, and the description states "Whether to forcibly tile the same latent 18 times". I'm not sure what this signifies; could someone briefly explain the effect of setting it to true or false?
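As I understand it (an assumption based on StyleGAN's W+ space, not on the authors' documentation): the synthesis network takes 18 style vectors of dimension 512, one per layer input. With tile_latent=True a single 512-dim latent is optimized and repeated across all 18 slots (fewer free parameters, stays closer to the natural W space); with tile_latent=False all 18 vectors are optimized independently (more expressive, but can drift further from realistic faces). A shape-only sketch in NumPy:

```python
import numpy as np

def make_latent(tile_latent, dim=512, n_layers=18, seed=0):
    rng = np.random.default_rng(seed)
    if tile_latent:
        # one shared vector, tiled into every layer slot
        w = rng.standard_normal((1, dim))
        return np.repeat(w, n_layers, axis=0)
    # one independent vector per layer (the W+ space)
    return rng.standard_normal((n_layers, dim))

tiled = make_latent(True)
free = make_latent(False)
```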

What's the difference between PULSE and a StyleGAN encoder?

  1. A StyleGAN encoder predicts a face similar to the input face, while PULSE predicts a face whose LR (downscaled) image is close to the input face.
  2. Like a StyleGAN encoder, PULSE can only predict faces based on a pretrained GAN such as StyleGAN. If you want to predict something else, you must train a GAN first. In this respect, it is not a general-purpose SR model.

Thanks for correcting me if I missed anything.
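The first point can be made concrete with a toy loss computation (a sketch, with simple average pooling standing in for the paper's downscaling operator DS): an encoder minimizes distance to the HR reference, while PULSE minimizes distance between the downscaled generated image and the LR input.

```python
import numpy as np

def downscale(img, factor):
    # simple average pooling as a stand-in downscaling operator DS
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(0)
gen_hr = rng.random((32, 32))   # G(z), the generated HR image
ref_hr = rng.random((32, 32))   # HR reference (what an encoder fits)
lr_in = rng.random((8, 8))      # LR input (what PULSE fits)

encoder_loss = np.mean((gen_hr - ref_hr) ** 2)             # || G(z) - x_HR ||^2
pulse_loss = np.mean((downscale(gen_hr, 4) - lr_in) ** 2)  # || DS(G(z)) - x_LR ||^2
```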

Google drive exceeded quota

When running run.py for the first time, it gives a "Google Drive quota exceeded" message while trying to download the model. The instructions describe how to set up alternative links for these. Is there an alternative URL/location I can use, or any other way to get this data?

output not similar to the input

I took several pictures from the FFHQ dataset, used align_face.py to downsample them 32x, and ran run.py, but the output pictures were not the same person as the inputs. Why?
(input image 00000 and output images 00000_0 were attached)

[documentation] Instruction for Windows users.

It turns out that, as long as you have a GTX 970 or newer, installing the prerequisites is quite simple:

  1. Fresh NVIDIA drivers.
  2. Python.
  3. Visual Studio with the C++ workload and CMake, needed for dlib. Possibly only the build tools are needed; I didn't check that. There may be a precompiled dlib; I also didn't check.
  4. Precompiled PyTorch and torchvision.
  5. Pillow, NumPy, SciPy, and dlib.

It looks like there is no need to install the CUDA SDK to use precompiled PyTorch on Windows.

So, from a clean slate:

  1. Install Chocolatey, used to install CMake, Visual Studio, and Python.

from admin cmd

@"%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -InputFormat None -ExecutionPolicy Bypass -Command "[System.Net.ServicePointManager]::SecurityProtocol = 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"

or from admin PowerShell

Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
  2. Reopen an admin cmd/PowerShell and run
cinst -y visualstudio2019community visualstudio2017-workload-nativedesktop

to install Visual Studio with C++ compiler. Then reboot and, again, from admin cmd/Powershell

cinst -y python
cinst -y cmake --installargs 'ADD_CMAKE_TO_PATH=System'
  3. Finally, install all the Python libraries. I used PyTorch 1.5.0, which supports only the GTX 970 and newer; older PyTorch versions may support older hardware.

from non-admin cmd/PowerShell

pip install numpy
pip install scipy
pip install https://download.pytorch.org/whl/cu102/torch-1.5.0-cp38-cp38-win_amd64.whl
pip install https://download.pytorch.org/whl/cu102/torchvision-0.6.0-cp38-cp38-win_amd64.whl
pip install pillow
pip install dlib
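After the pip installs, a quick sanity check confirms that the precompiled wheel actually sees the GPU without a separately installed CUDA SDK (a sketch; it degrades gracefully when PyTorch is missing):

```python
def check_torch():
    # Report the installed PyTorch version and whether CUDA is usable;
    # returns a plain string so it works even when PyTorch is absent.
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed"
    return f"torch {torch.__version__}, CUDA available: {torch.cuda.is_available()}"

print(check_torch())
```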

Everything's ready for use; continue from the Data section of the README.

A warning, though: for some reason the dlib-based align_face.py didn't work well with PNGs and BMPs, so use JPG photos.

Training and Inferencing documentation

@adamian98 @georgzoeller
Hi there,

  • It would be great to know how to train PULSE from scratch.
  • Also, how to use the trained model for inference.
  • Note that the download links have reached their daily quota, so you might want to upload the files to multiple services, e.g. Google Drive, OneDrive, Dropbox, etc.

.yml file for Ubuntu users to create environment?

My computer runs Ubuntu 18.04 with CUDA 10.2.
I've already removed some of the hashes for packages in the environment file, but there are still a lot of package conflicts.

Is there a .yml file for Ubuntu users, or any other installation method?

HR result is not similar to source image.

Thank you for your impressive work and for sharing the code.
I put two PNG images into the input folder (a selfie at 256x256 and 32x32; the input and result images were attached).

Moreover, if I downscale the enhanced image back to 32x32, I get a very different thumbnail from the original one (screenshot attached).

Could you tell me what I am doing wrong and how to reproduce the results from the paper?
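One way to quantify this is to check whether the output is at least downscaling-consistent, which is the property PULSE optimizes: downscale the HR result and measure its distance to the LR input. A sketch using bicubic resizing as an approximation of the paper's downscaling operator (`lr_consistency` and the file paths are hypothetical):

```python
import numpy as np
from PIL import Image

def lr_consistency(hr_path, lr_path, size=(32, 32)):
    # Mean absolute per-pixel difference in [0, 1] between the
    # downscaled HR output and the original LR input; lower is better.
    hr_small = np.asarray(
        Image.open(hr_path).convert("RGB").resize(size, Image.BICUBIC), dtype=float)
    lr = np.asarray(
        Image.open(lr_path).convert("RGB").resize(size, Image.BICUBIC), dtype=float)
    return float(np.mean(np.abs(hr_small - lr)) / 255.0)
```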
