Comments (4)
Generating images on the CPU is possible, but not with the default settings.
Here's how to modify generate.py to enable CPU generation:
diff --git a/generate.py b/generate.py
index f7f9619..0e487ed 100755
--- a/generate.py
+++ b/generate.py
@@ -79,7 +79,7 @@ def generate_images(
     """
     print('Loading networks from "%s"...' % network_pkl)
-    device = torch.device('cuda')
+    device = torch.device('cpu')
     with dnnlib.util.open_url(network_pkl) as f:
         G = legacy.load_network_pkl(f)['G_ema'].to(device) # type: ignore
@@ -116,7 +116,7 @@ def generate_images(
     for seed_idx, seed in enumerate(seeds):
         print('Generating image for seed %d (%d/%d) ...' % (seed, seed_idx, len(seeds)))
         z = torch.from_numpy(np.random.RandomState(seed).randn(1, G.z_dim)).to(device)
-        img = G(z, label, truncation_psi=truncation_psi, noise_mode=noise_mode)
+        img = G(z, label, truncation_psi=truncation_psi, noise_mode=noise_mode, force_fp32=True)
         img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
         PIL.Image.fromarray(img[0].cpu().numpy(), 'RGB').save(f'{outdir}/seed{seed:04d}.png')
Basically, run it on the CPU device and use only fp32.
Note that there is a marked performance difference between running on the GPU and on the CPU. On my machine, generating an image on the GPU takes roughly 20 ms, whereas the same code on the CPU takes 1500 ms, i.e. 75 times slower.
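The post-processing line in the diff is easy to sanity-check on its own. Here is a minimal NumPy sketch (a stand-in for the torch tensor ops, using a fake generator output with illustrative shapes, not tied to any real checkpoint) showing how the [-1, 1] float image becomes an HWC uint8 image:

```python
import numpy as np

# Fake generator output: NCHW float images in [-1, 1], as G(z, label, ...) returns.
img = np.linspace(-1.0, 1.0, 2 * 3 * 4 * 4).reshape(2, 3, 4, 4)

# NumPy analogue of the torch line in generate.py:
# (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
out = np.clip(img.transpose(0, 2, 3, 1) * 127.5 + 128, 0, 255).astype(np.uint8)

print(out.shape, out.dtype, out.min(), out.max())  # (2, 4, 4, 3) uint8 0 255
```

The channels-last transpose and the uint8 cast are what PIL.Image.fromarray expects for an 'RGB' image.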
from stylegan2-ada-pytorch.
> Basically, run it on the CPU device and use only fp32.

What about projecting images in latent space?
project.py line 98ish
         # Synth images from opt_w.
         w_noise = torch.randn_like(w_opt) * w_noise_scale
         ws = (w_opt + w_noise).repeat([1, G.mapping.num_ws, 1])
-        synth_images = G.synthesis(ws, noise_mode='const')
+        synth_images = G.synthesis(ws, noise_mode='const', force_fp32=True)
line 163:
     # Load networks.
     print('Loading networks from "%s"...' % network_pkl)
-    device = torch.device('cuda')
+    device = torch.device('cpu')
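For intuition, the `ws` line above just broadcasts the single optimised latent (plus noise) to every synthesis layer. A NumPy sketch of that step, with typical StyleGAN2 dimensions assumed rather than read from a checkpoint:

```python
import numpy as np

w_dim, num_ws = 512, 18  # typical for a 1024x1024 StyleGAN2 generator (assumed)

w_opt = np.random.randn(1, 1, w_dim)           # the single latent being optimised
w_noise = np.random.randn(1, 1, w_dim) * 0.05  # stand-in for w_noise_scale

# NumPy analogue of torch's (w_opt + w_noise).repeat([1, G.mapping.num_ws, 1]):
ws = np.tile(w_opt + w_noise, (1, num_ws, 1))

print(ws.shape)                             # (1, 18, 512)
print(np.array_equal(ws[0, 0], ws[0, 17]))  # True: every layer gets the same w
```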
Why not add a parameter to choose whether to generate on the GPU or the CPU? Can I create a PR?
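A hedged sketch of what such a flag could look like. Note that generate.py actually builds its CLI with click; argparse is used here only to keep the sketch self-contained, and `pick_device` is a hypothetical helper, not an existing function in the repo:

```python
import argparse

def pick_device(requested: str) -> str:
    """Hypothetical helper: resolve a --device flag to a torch device string.
    'auto' prefers CUDA when available and falls back to CPU otherwise."""
    if requested == 'auto':
        try:
            import torch
            return 'cuda' if torch.cuda.is_available() else 'cpu'
        except ImportError:
            return 'cpu'
    return requested

parser = argparse.ArgumentParser()
parser.add_argument('--device', choices=['auto', 'cuda', 'cpu'], default='auto')
args = parser.parse_args(['--device', 'cpu'])

device = pick_device(args.device)
print(device)  # cpu
# In generate.py one would then do device = torch.device(device)
# and pass force_fp32=(device == 'cpu') into G(...).
```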