mlpc-ucsd / vitgan
License: MIT License
When I run `python train_stylegan2.py configs/gan/stylegan2/c10_style64.gin vitgan --mode=aug_both --aug=diffaug --lbd_r1=0.1 --no_lazy --halflife_k=1000 --penalty=bcr --use_warmup --use_nerf_proj`, it produces the error below.
```
Files already downloaded and verified
Files already downloaded and verified
Traceback (most recent call last):
  File "/home/m11113013/.conda/envs/ViTGAN/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1723, in _run_ninja_build
    env=env)
  File "/home/m11113013/.conda/envs/ViTGAN/lib/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "train_stylegan2.py", line 404, in <module>
    worker(P)
  File "train_stylegan2.py", line 313, in worker
    generator, discriminator = get_architecture(P.architecture, image_size, P=P)
  File "/home/m11113013/ProjectCode/practice model/ViTGAN-main/models/gan/__init__.py", line 29, in get_architecture
    from models.gan.stylegan2.vit_generator import Generator
  File "/home/m11113013/ProjectCode/practice model/ViTGAN-main/models/gan/stylegan2/vit_generator.py", line 12, in <module>
    from models.gan.stylegan2.op import FusedLeakyReLU, conv2d_gradfix
  File "/home/m11113013/ProjectCode/practice model/ViTGAN-main/models/gan/stylegan2/op/__init__.py", line 1, in <module>
    from .fused_act import FusedLeakyReLU, fused_leaky_relu
  File "/home/m11113013/ProjectCode/practice model/ViTGAN-main/models/gan/stylegan2/op/fused_act.py", line 15, in <module>
    os.path.join(module_path, "fused_bias_act_kernel.cu"),
  File "/home/m11113013/.conda/envs/ViTGAN/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1136, in load
    keep_intermediates=keep_intermediates)
  File "/home/m11113013/.conda/envs/ViTGAN/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1347, in _jit_compile
    is_standalone=is_standalone)
  File "/home/m11113013/.conda/envs/ViTGAN/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1452, in _write_ninja_file_and_build_library
    error_prefix=f"Error building extension '{name}'")
  File "/home/m11113013/.conda/envs/ViTGAN/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1733, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error building extension 'fused': [1/2] /usr/local/cuda-11.0/bin/nvcc -DTORCH_EXTENSION_NAME=fused -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -isystem /home/m11113013/.conda/envs/ViTGAN/lib/python3.7/site-packages/torch/include -isystem /home/m11113013/.conda/envs/ViTGAN/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /home/m11113013/.conda/envs/ViTGAN/lib/python3.7/site-packages/torch/include/TH -isystem /home/m11113013/.conda/envs/ViTGAN/lib/python3.7/site-packages/torch/include/THC -isystem /usr/local/cuda-11.0/include -isystem /home/m11113013/.conda/envs/ViTGAN/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -std=c++14 -c '/home/m11113013/ProjectCode/practice model/ViTGAN-main/models/gan/stylegan2/op/fused_bias_act_kernel.cu' -o fused_bias_act_kernel.cuda.o
FAILED: fused_bias_act_kernel.cuda.o
/usr/local/cuda-11.0/bin/nvcc -DTORCH_EXTENSION_NAME=fused -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -isystem /home/m11113013/.conda/envs/ViTGAN/lib/python3.7/site-packages/torch/include -isystem /home/m11113013/.conda/envs/ViTGAN/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /home/m11113013/.conda/envs/ViTGAN/lib/python3.7/site-packages/torch/include/TH -isystem /home/m11113013/.conda/envs/ViTGAN/lib/python3.7/site-packages/torch/include/THC -isystem /usr/local/cuda-11.0/include -isystem /home/m11113013/.conda/envs/ViTGAN/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -std=c++14 -c '/home/m11113013/ProjectCode/practice model/ViTGAN-main/models/gan/stylegan2/op/fused_bias_act_kernel.cu' -o fused_bias_act_kernel.cuda.o
nvcc fatal : Unsupported gpu architecture 'compute_86'
ninja: build stopped: subcommand failed.
```
I tried installing an older PyTorch version, but that didn't work either.
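For what it's worth, the failing step is `nvcc fatal : Unsupported gpu architecture 'compute_86'`: the CUDA 11.0 toolkit predates Ampere `sm_86` support, which was added in CUDA 11.1. A workaround sketch, assuming an RTX 30xx GPU and that the JIT extension build honors the standard `TORCH_CUDA_ARCH_LIST` environment variable:

```shell
# Assumption: an sm_86 (RTX 30xx) GPU with only the CUDA 11.0 toolkit.
# Either upgrade the toolkit to >= 11.1, or target an architecture that
# CUDA 11.0 understands; sm_80 binaries run on sm_86 GPUs (same major
# compute capability), so this only costs some arch-specific tuning.
export TORCH_CUDA_ARCH_LIST="8.0"

# Clear the cached failed JIT build so the extension recompiles:
rm -rf ~/.cache/torch_extensions

# Then re-run the original train_stylegan2.py command.
```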
When I use the command python train_stylegan2.py configs/gan/stylegan2/c10_style64.gin vitgan --mode=aug_both --aug=diffaug --lbd_r1=0.1 --no_lazy --halflife_k=1000 --penalty=bcr --use_warmup --use_nerf_proj --resume checkpoint_log to resume training, it raises an error: `RuntimeError: Error(s) in loading state_dict for Generator: Missing key(s) in state_dict: "layers.0.attn.to_qkv.module.weight", "layers.0.attn.to_out.0.module.weight", "layers.0.ff.net.0.module.weight", "layers.0.ff.net.3.module.weight", "layers.1.attn.to_qkv.module.weight", "layers.1.attn.to_out.0.module.weight", "layers.1.ff.net.0.module.weight", "layers.1.ff.net.3.module.weight", "layers.2.attn.to_qkv.module.weight", "layers.2.attn.to_out.0.module.weight", "layers.2.ff.net.0.module.weight", "layers.2.ff.net.3.module.weight", "layers.3.attn.to_qkv.module.weight", "layers.3.attn.to_out.0.module.weight", "layers.3.ff.net.0.module.weight", "layers.3.ff.net.3.module.weight". Does it mean that the network was not saved completely?
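The `.module.weight` suffix in the missing keys usually points at a wrapper around the linear layers (e.g. spectral norm or an equalized-lr module) that was present at load time but not at save time, rather than a truncated file. One way to check is to diff the checkpoint's key set against the model's. A toy sketch with a stand-in model and a simulated partial checkpoint (with the real Generator you would `torch.load` the saved state_dict and diff it the same way):

```python
import torch
import torch.nn as nn

# Stand-in model and a simulated checkpoint missing one entry.
model = nn.Sequential(nn.Linear(4, 4))
ckpt = {"0.weight": torch.zeros(4, 4)}  # e.g. torch.load("<checkpoint path>")

model_keys = set(model.state_dict().keys())
ckpt_keys = set(ckpt.keys())
print("missing from checkpoint:", sorted(model_keys - ckpt_keys))
print("unexpected in checkpoint:", sorted(ckpt_keys - model_keys))
```

If every missing key has a matching entry under a different prefix, the weights were saved completely and only the module wrapping differs between save and load.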
Thank you for your great work! The beta1 and beta2 in this code (0.5, 0.999) are different from the paper (0, 0.99). Which choice is better?
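For concreteness, the two settings differ only in the `betas` passed to Adam; a minimal sketch (the learning rate here is an arbitrary placeholder, not taken from this repo):

```python
import torch
import torch.nn as nn

# The two optimizer configurations under discussion, side by side.
opt_code = torch.optim.Adam(
    nn.Linear(8, 8).parameters(), lr=2e-4, betas=(0.5, 0.999)  # this repo
)
opt_paper = torch.optim.Adam(
    nn.Linear(8, 8).parameters(), lr=2e-4, betas=(0.0, 0.99)  # the paper
)
```

Lower beta1/beta2 make Adam react faster to recent gradients, which is common in StyleGAN-style GAN training; (0.5, 0.999) is the older DCGAN-era convention.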
Hello, I would like to ask if you encountered this problem when setting up the experiment. It reports an error that the 'fused' module does not exist. I checked the code and it is built in the `load` call in `fused_act.py`, but I have not used `torch.utils.cpp_extension.load` before. How did you use the 'fused' module?
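For context, `torch.utils.cpp_extension.load` JIT-compiles the C++/CUDA sources the first time they are needed and caches the built module (by default under `~/.cache/torch_extensions`). A minimal sketch of the call shape, with source file names following `fused_act.py` (the helper name and directory argument are illustrative):

```python
import os
from torch.utils.cpp_extension import load


def build_fused(op_dir: str):
    """JIT-compile the 'fused' CUDA extension from its .cpp/.cu sources.

    Requires nvcc from a CUDA toolkit that supports the GPU
    architecture PyTorch targets; the compiled module is cached and
    reused on subsequent calls.
    """
    return load(
        name="fused",
        sources=[
            os.path.join(op_dir, "fused_bias_act.cpp"),
            os.path.join(op_dir, "fused_bias_act_kernel.cu"),
        ],
    )
```

So there is nothing to install separately: the module appears once the JIT build succeeds, which is why the ninja/nvcc failure above manifests as "module does not exist".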
Here, why is there still a separate `q` and `k`? Shouldn't they be completely tied? Also, I think the proper way is to use `sum` instead of `mean`, and you need to negate the L2 distance. This is the correct implementation:

```python
AB = torch.matmul(qk, qk.transpose(-1, -2))
AA = torch.sum(qk ** 2, -1, keepdim=True)
BB = AA.transpose(-1, -2)  # since query and key are tied
attn = -(AA - 2 * AB + BB)  # negated pairwise squared L2 distance
attn = attn.mul(self.scale).softmax(-1)
```
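To see that this expansion really computes the negated pairwise squared L2 distance, here is a self-contained version of the snippet (function name and shapes are illustrative) checked against `torch.cdist`:

```python
import torch


def l2_attention(qk: torch.Tensor, scale: float = 1.0) -> torch.Tensor:
    """Tied query/key L2 attention: softmax over -scale * ||qk_i - qk_j||^2."""
    AB = torch.matmul(qk, qk.transpose(-1, -2))      # inner products q_i . q_j
    AA = torch.sum(qk ** 2, -1, keepdim=True)        # squared norms ||q_i||^2
    BB = AA.transpose(-1, -2)                        # query and key are tied
    attn = -(AA - 2 * AB + BB)                       # -||q_i - q_j||^2
    return attn.mul(scale).softmax(-1)


qk = torch.randn(2, 5, 8)  # (batch, tokens, dim)
attn = l2_attention(qk)

# ||a||^2 - 2 a.b + ||b||^2 == ||a - b||^2, so this matches cdist:
dist_sq = torch.cdist(qk, qk) ** 2
assert torch.allclose(attn, (-dist_sq).softmax(-1), atol=1e-4)
```

Note the sign: without the negation, nearby tokens would get the *lowest* attention weight after the softmax.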
Hello,
Thank you for sharing the source code! Could you please release the configs for LSUN Bedrooms (256x256) and the other datasets from the paper so the results can be reproduced? Thank you in advance.
May I ask you to share the pretrained weights?
Thanks a lot
QC