mahmoudnafifi / histogan
Reference code for the paper HistoGAN: Controlling Colors of GAN-Generated and Real Images via Color Histograms (CVPR 2021).
License: MIT License
I want to train, but I got this message: 'ValueError: num_samples should be a positive integer value, but got num_sample=0'. Does anyone know how to solve this problem?
Hi! I tried to use the pre-trained "Universal_rehistoGAN_v2" model to train on my data with the same picture resolution to improve the quality of the neural network, but there was a problem "self.GAN.load_state_dict":
Error(s) in loading state_dict for recoloringGAN:
size mismatch for ED.encoder_blocks.0.conv_res.weight: copying a param with shape torch.Size([36, 18, 1, 1]) from checkpoint, the shape in current model is torch.Size([16, 8, 1, 1]).
size mismatch for ED.encoder_blocks.0.conv_res.bias: copying a param with shape torch.Size([36]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for ED.encoder_blocks.0.net.0.weight: copying a param with shape torch.Size([36, 18, 3, 3]) from checkpoint, the shape in current model is torch.Size([16, 8, 3, 3]).
size mismatch for ED.encoder_blocks.0.net.0.bias: copying a param with shape torch.Size([36]) from checkpoint, the shape in current model is torch.Size([16]).
...
I tried adding "strict=False" as described here (pytorch/pytorch#40859), but to no avail. I suspect that the loaded checkpoint and the target model are not identical, so the error is raised to report the size and layer mismatches.
How to get around this problem? Thanks in advance for the answer
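One common workaround for this kind of size mismatch (a hedged sketch, not the repo's own solution) is to copy only the checkpoint tensors whose names and shapes match the current model. Note that the 36-vs-16 channel pattern above also suggests the checkpoint was trained with a different --network_capacity than the model being constructed, so checking that flag first may avoid the problem entirely. The `load_matching` helper below is hypothetical:

```python
import torch

def load_matching(model, ckpt_state):
    """Load only checkpoint entries whose names and shapes match the model.

    Mismatched layers are silently skipped and keep their fresh
    initialization, so this only helps when partial loading is acceptable.
    """
    own = model.state_dict()
    filtered = {k: v for k, v in ckpt_state.items()
                if k in own and own[k].shape == v.shape}
    own.update(filtered)
    model.load_state_dict(own)
    return sorted(filtered)  # names that were actually loaded

# Toy demonstration: "weight" has the wrong shape and is skipped,
# "bias" matches and is loaded.
model = torch.nn.Linear(4, 2)
ckpt = {"weight": torch.zeros(3, 4), "bias": torch.zeros(2)}
loaded = load_matching(model, ckpt)
```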
Hi. Great work. For ReHistoGAN, the paper mentions that the weights of the head are initialized from a previously trained HistoGAN. Have you tried training from scratch, and do you have any observations? (Results without the skip connection and with fine-tuning are in the paper.) And did you use the same trained HistoGAN head (trained on face images) for the universal HistoGAN? Thanks.
Hello,
The input image does not seem to be normalized to [0, 1] in histoGAN.py, but RGBuvHistBlock clamps its input to [0, 1].
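A minimal sketch of the normalization the comment above is about, assuming float tensors and a 0–255 input range (the 255 heuristic is an assumption for illustration, not the repo's actual preprocessing):

```python
import torch

def to_unit_range(x: torch.Tensor) -> torch.Tensor:
    """Scale an image tensor to [0, 1] before a block that clamps to that
    range, so no pixel information is silently clipped away."""
    x = x.float()
    if x.max() > 1.0:        # heuristic: assume values are in 0..255
        x = x / 255.0
    return x.clamp(0.0, 1.0)

img = torch.tensor([[0.0, 128.0, 255.0]])
unit = to_unit_range(img)    # values now lie in [0, 1]
```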
Thank you for this amazing work.
I was trying to integrate the histogram loss into my own network, I copied the class as it is from the Colab notebook but I get the below error:
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Do you have any idea how I can sort this out? Thank you.
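The error above typically means the loss was computed from tensors that are outside the autograd graph (e.g. detached, or built under torch.no_grad()). A minimal, hypothetical repro and fix, not tied to the actual network:

```python
import torch

# Repro: backward() on a graph-free tensor raises
# "element 0 of tensors does not require grad and does not have a grad_fn".
x = torch.rand(3, 8, 8)            # leaf without requires_grad
loss = (x.detach() ** 2).mean()    # computation detached from autograd
reproduced = False
try:
    loss.backward()
except RuntimeError:
    reproduced = True

# Fix: compute the histogram loss from tensors still attached to the
# generator's graph (no .detach(), no torch.no_grad()).
y = torch.rand(3, 8, 8, requires_grad=True)
loss2 = (y ** 2).mean()
loss2.backward()                   # succeeds; y.grad is populated
```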
Hi,
I want to test rehistoGAN; I have pre-downloaded the model, but when I run it with the command given in your README file,
python histoGAN.py --name Faces_histoGAN --generate True --target_his ./target_images/2.jpg --gpu 0
the image is generated without taking any input. I tried to provide an input image myself, but it failed. What should I do? I look forward to your answer.
Thanks.
P.S. What is "--target_his ./target_images/2.jpg"? Is it a folder?
Hi! I can't get correct upsampling in rehistoGAN.py on my dataset. If I take the standard 256x256 network output, the colours change normally. But if I add the --upsampling_output option, the picture becomes faded, i.e. predominantly white. I also changed the dimensions to be a multiple of two, but that didn't help either.
Running it like this:
python rehistoGAN.py --name Universal_rehistoGAN_v0 --generate True --input_image my_inp_img.png --target_hist my_goal_img.png --gpu 0 --network_capacity 18 --upsampling_output True
What could be the problem? Thanks in advance for the answer
P.S. The pictures show a person, some buildings, trees and bushes, i.e. the image was taken inside the city
Hi, I tried to train the network on the FFHQ dataset and hit this error. I searched the error and found that some layers' dimensions are mismatched. Have you met this problem before? Thanks
Hi,
    for l in range(L):
        I = torch.t(torch.reshape(X[l], (3, -1)))
        II = torch.pow(I, 2)
        if self.intensity_scale:
            Iy = torch.unsqueeze(torch.sqrt(II[:, 0] + II[:, 1] + II[:, 2]), dim=1)
        else:
            Iy = 1
I found a bug here:
    torch.sqrt(II[:, 0] + II[:, 1] + II[:, 2])
sqrt is not differentiable at 0, so an all-zero pixel yields an infinite gradient.
Therefore, it is better to change it to:
    torch.sqrt(II[:, 0] + II[:, 1] + II[:, 2] + EPS)
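The non-differentiability of sqrt at zero described above can be demonstrated directly; with EPS = 1e-6 the gradient at zero becomes 1 / (2 * sqrt(EPS)) = 500, which is large but finite:

```python
import torch

EPS = 1e-6

# Without EPS: d/dx sqrt(x) = 1 / (2 * sqrt(x)) blows up at x = 0.
x = torch.zeros(3, requires_grad=True)
torch.sqrt(x).sum().backward()     # x.grad is all inf

# With EPS: the gradient at zero is finite (500.0 for EPS = 1e-6).
y = torch.zeros(3, requires_grad=True)
torch.sqrt(y + EPS).sum().backward()
```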
Hi, I am not able to download the weights of Universal Histogan. Can you please check the link again? Thanks.
Hi, I implemented ReHistoGAN according to your paper. Some details, such as when to do the upsample and downsample operations, may differ, but the whole network is consistent with the paper. I'm confused about training ReHistoGAN; could you tell me more details? Did you train the encoder-decoder and discriminator alternately? Did you use the path length penalty in ReHistoGAN? Did you adopt a moving average in ReHistoGAN? If so, could you tell me more about it? Thank you.
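For reference, the "moving average" in StyleGAN-style training usually means an exponential moving average (EMA) of the generator weights, used at inference time. This is an assumption about what the question refers to; a minimal sketch:

```python
import torch

@torch.no_grad()
def ema_update(shadow, model, beta=0.995):
    """Update a shadow copy of the model as an EMA of its weights.

    shadow <- beta * shadow + (1 - beta) * model, applied per parameter.
    The shadow model is what gets used for generation/evaluation.
    """
    for s, p in zip(shadow.parameters(), model.parameters()):
        s.mul_(beta).add_(p, alpha=1 - beta)

# Toy check: shadow at 0, model at 1, beta = 0.5 -> shadow becomes 0.5.
model = torch.nn.Linear(2, 2)
shadow = torch.nn.Linear(2, 2)
with torch.no_grad():
    for p in model.parameters():
        p.fill_(1.0)
    for p in shadow.parameters():
        p.fill_(0.0)
ema_update(shadow, model, beta=0.5)
```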
Hello!
I want to test rehistoGAN using the pretrained .pt file you provided. But where should I put the file? And where do I state the file path for model loading? I can't figure it out from the code.
Thanks a lot!
Hi,
I would like to get an output image whose resolution equals the input image's resolution, but I need the BGU.exe executable. How can I get the BGU.exe program?
Thanks in advance for any help.
Hi,
I am checking the parameters for RGBuvHistBlock as the gradients for the parameters of the block are being set here -
Line 741 in dc543f8
What I am seeing is that the block doesn't have parameters. Could anyone confirm whether this is the case? And if so, how is the block getting trained without having parameters and hence no gradients?
I just want to use this block and not the whole GAN architecture for my own case.
Thanks!
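For reference, a module whose forward pass consists only of fixed differentiable ops (as a histogram layer does) has no parameters to train; it still propagates gradients to its input, which is all a loss term needs. A toy illustration, not the actual RGBuvHistBlock:

```python
import torch
import torch.nn as nn

class FixedOpBlock(nn.Module):
    """A parameter-free module: forward is a fixed differentiable op."""
    def forward(self, x):
        return torch.sqrt((x ** 2).mean() + 1e-6)

block = FixedOpBlock()
params = list(block.parameters())   # [] -- nothing to optimize

# Gradients still flow *through* the block to whatever produced x
# (e.g. a generator), which is how it works as a loss term.
x = torch.rand(4, requires_grad=True)
block(x).backward()
```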
Can you tell me your training configuration? For example, how many GPUs do you need for training, and how long does it take?
Hi, thanks for your great work; it is very useful.
When I applied the Color Histogram Loss in my work, I found that this loss occupies a lot of CUDA memory. Is this normal?
For example, the model I train normally fits batchsize = 16 with GPU_num = 4, but with this loss it only fits batchsize = 8 with GPU_num = 8.
😭
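One possible mitigation for the memory pressure described above (an assumption, not the authors' recipe) is gradient checkpointing of the loss computation, recomputing its activations in the backward pass to trade compute for memory. The `hist_loss` stand-in below is hypothetical:

```python
import torch
from torch.utils.checkpoint import checkpoint

def hist_loss(x):
    # Stand-in for an activation-heavy histogram loss computation.
    return torch.sqrt((x ** 2 + 1e-6).mean())

x = torch.rand(8, 3, 16, 16, requires_grad=True)

# Intermediate activations inside hist_loss are not stored; they are
# recomputed during backward(), reducing peak memory.
loss = checkpoint(hist_loss, x, use_reentrant=False)
loss.backward()
```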
Can you share the generator and discriminator loss functions of the ReHistoGAN model? Are you also giving color histograms to the discriminator as conditions, besides real images?
Hi,
I want to run the model on custom face images.
I could not find a parameter that takes an input-image directory; it always runs the model on fixed input images.
Thanks
Hi, I'm a student and I'm trying to implement your GAN network in Matlab. I'm curious whether this is possible.
And if it is, could you provide the code for the GAN layers?
Thanks!
I downloaded your Universal ReHistoGAN models and tried to reproduce your results, but I got weird output. Is this model correct?
https://ln4.sync.com/dl/7d31a84c0/9na2sp3y-dt4n55eq-3k84ddvs-zd37eeh9
python rehistoGAN.py --name models/Universal_rehistoGAN_v2/model_0.pt --generate True --input_image ./input_images/1.jpg --target_hist ./target_images/ --upsampling_output True --network_capacity 18 --gpu 0
Hi Mahmoud,
I was trying to train on my own clothes dataset, as my requirement was to change the color of the clothes and produce random outputs that are conditional on my input image while preserving the original image structure.
Can you please let me know if the above can be achieved using your model?
Also, where can I check the generated samples? I could only get histogram output.
Can you share the code please? I can't find it from the homepage. thank you!
Hi, Mahmoud! I need to train the network for a specific task: there is one scene, but filmed from two angles. The problem is that there are cases where the colors in the scene are more saturated from one angle than from the other. Does it make sense to experiment with reHistoGAN (Universal), or would it be more practical to look at white balance or exposure correction? Thanks in advance for the answer!
Hi,
I am not able to replicate the results correctly.
How do I get the final output?
For example,
I used the following code
python histoGAN.py --name Faces_histoGAN --generate True --target_his ./target_images/1.jpg --gpu 0
And got this output which seems to be noise
How can I use this to generate face samples as shown in the repo?
Thank you