
dcpdn's People

Contributors

hezhangsprinter


dcpdn's Issues

Do Testing and Training (Fine-tuning) use the same model?

1. Training (Fine-tuning)
python train.py --dataroot ./facades/train512 --valDataroot ./facades/test512 --exp ./checkpoints_new --netG ./demo_model/netG_epoch_8.pth

2. Testing
python demo.py --dataroot ./your_dataroot --valDataroot ./your_dataroot --netG ./pre_trained/netG_epoch_9.pth

Hi, Dr. Zhang. I want to test your program, but there is no "netG_epoch_9.pth" model. Your GitHub page only provides the "netG_epoch_8.pth" model via the Google Drive link. Could you please tell me how to get "netG_epoch_9.pth"? Thank you for your help.

What is the exact value of sizePatchGAN?

Hi!
I noticed that when you define netD in dehaze22.py, the comment at line 150 says sizePatchGAN = 30. However, in train.py you first set sizePatchGAN = 30 at line 168, but later, at line 258, you set sizePatchGAN = 62. I don't understand the reason for this, and I wonder which value is the one you used in the paper?
Thank you in advance!
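For reference, sizePatchGAN is the spatial size of the patch discriminator's output map, so it changes with the input resolution: a pix2pix-style discriminator with three stride-2 convolutions followed by two stride-1 convolutions (kernel 4, padding 1) outputs a 30x30 map for a 256x256 crop and a 62x62 map for a 512x512 crop, which would explain the two values. A minimal sketch of that size calculation (the layer layout is an assumption, not taken from dehaze22.py):

    # Sketch: spatial output size of a pix2pix-style PatchGAN discriminator,
    # assuming three stride-2 convs followed by two stride-1 convs,
    # all with kernel_size=4 and padding=1.
    def conv_out(size, kernel=4, stride=2, pad=1):
        return (size + 2 * pad - kernel) // stride + 1

    def patchgan_out(size):
        for stride in (2, 2, 2, 1, 1):
            size = conv_out(size, stride=stride)
        return size

    print(patchgan_out(256))  # 30 -> matches sizePatchGAN = 30
    print(patchgan_out(512))  # 62 -> matches sizePatchGAN = 62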

Absolute value of transmission map

  1. Why is the absolute value of the transmission map taken?
  2. If the output should be between 0 and 1, wouldn't it be better to use a Sigmoid instead of a Tanh?
  3. A value of 10**-10 is added. Is there a specific reason for choosing this particular value rather than any other? (The recovery step these questions refer to is sketched below.)
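For context, all three questions concern the physical recovery step of the atmospheric scattering model. A minimal sketch of how the dehazed image is typically recovered from the estimated transmission map and atmospheric light (the abs() and the 1e-10 epsilon here only illustrate why such terms might appear; this is not necessarily the exact expression used in dehaze22.py):

    import torch

    def recover_dehazed(hazy, trans, atmosphere, eps=1e-10):
        """Invert I = J * t + A * (1 - t)  =>  J = (I - A) / t + A.

        abs() keeps the divisor positive and eps avoids division by zero
        when the estimated transmission is (near) zero -- an illustrative
        sketch, not the repo's exact code.
        """
        t = torch.abs(trans) + eps
        return (hazy - atmosphere) / t + atmosphere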

The result images seem not to be enhanced.

Hi @hezhangsprinter ,

I really appreciate this work and started testing your method with the testing images (nature) and the pre-trained model (netG_epoch_8.pth) you provide.
I changed
netG.load_state_dict(torch.load(opt.netG))
to
netG.load_state_dict(torch.load(opt.netG), strict=False)
as suggested in #10, to get past the following error:
KeyError: 'unexpected key "tran_dense.dense_block1.denselayer1.norm.1.weight" in state_dict'
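Note that strict=False silently skips every mismatched key, so the transmission branch can be left with randomly initialized weights, which would explain outputs that look un-enhanced. A possible alternative (a sketch, assuming the checkpoint simply uses the older torchvision DenseNet key naming such as norm.1 / conv.2 where newer models expect norm1 / conv2) is to rename the keys in the checkpoint and then load strictly:

    import re
    import torch

    def load_densenet_style_checkpoint(model, path):
        """Remap old-style DenseNet keys ("norm.1", "conv.2", ...) to the
        newer naming ("norm1", "conv2") and load strictly, instead of
        passing strict=False (which silently drops the mismatched weights)."""
        state = torch.load(path)
        pattern = re.compile(
            r'^(.*denselayer\d+\.(?:norm|relu|conv))\.'
            r'((?:[12])\.(?:weight|bias|running_mean|running_var))$')
        for key in list(state.keys()):
            match = pattern.match(key)
            if match:
                state[match.group(1) + match.group(2)] = state.pop(key)
        model.load_state_dict(state)

    # usage, following the snippet above:
    # load_densenet_style_checkpoint(netG, opt.netG)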

I managed to run demo.py, but it seems that the outputs were not enhanced, as the following sample shows:

[attached sample output: 10_dcpcn]

My environment:
Ubuntu 16.04
Pytorch v0.3.1 with cuda 9.0
Python v3.5.2

By the way, could you remove the unused code? It took me quite some time to work through it. 😃

Thanks a lot!

create_train.py

When I first ran "python create_train.py", I got a "train" folder containing some ".h5" files. I then wanted to test these data with "python demo.py --dataroot ./facades/test512 --valDataroot ./facades/train --netG ./checkpoints_new/netG_epoch_8.pth", but I got the following error:
Traceback (most recent call last):
  File "demo.py", line 217, in <module>
    for i, data in enumerate(valDataloader, 0):
  File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 188, in __next__
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 188, in <listcomp>
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/home/lijunnian/Desktop/image dehaze/DCPDN-master/datasets/pix2pix_val.py", line 64, in __getitem__
    ato_map=f['ato'][:]
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "/usr/local/lib/python3.5/dist-packages/h5py/_hl/group.py", line 177, in __getitem__
    oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5o.pyx", line 190, in h5py.h5o.open
KeyError: "Unable to open object (object 'ato' doesn't exist)"

My OS is ubuntu 16.04
torch 0.3.0
CUDA 9.0

Looking forward to your reply!
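For reference, the KeyError means the .h5 files written by create_train.py contain no dataset named 'ato', which pix2pix_val.py tries to read (a later issue in this list reports that the script writes it under 'atom' instead). A minimal sketch of writing a sample with the dataset names the loaders appear to expect (the names 'haze', 'trans', 'ato' and 'gt' are inferred from the tracebacks and the create_train.py fix further down, so treat them as assumptions):

    import h5py

    # Sketch: write one sample with the dataset names the loaders appear to
    # read. All four arguments are numpy image arrays.
    def write_sample(path, haze_image, trans_map, atmosphere_map, gt_image):
        with h5py.File(path, 'w') as f:
            f.create_dataset('haze', data=haze_image)     # synthesized hazy image
            f.create_dataset('trans', data=trans_map)     # transmission map
            f.create_dataset('ato', data=atmosphere_map)  # atmospheric light ('ato', not 'atom')
            f.create_dataset('gt', data=gt_image)         # clean ground truth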

train.py runtime error

Hello, when I ran train.py I encountered an error at line 206: "StopIteration". It indicates that "data_val = val_iter.next()" cannot iterate. Is there any missing file? After debugging, it looks like "valDataloader" is empty. Thank you very much for your time.

demo execution error

Hi, I got stuck at the following error message while executing demo.py:

(M.S.Song-pytorch) ubuntu@ubuntu:~/Drive_B/A.H.Cha_B/DCPDN$ python demo.py --dataroot ./facades/nat_new4 --valDataroot ./facades/nat_new4 --netG ./demo_model/netG_epoch_8.pth
Namespace(annealEvery=400, annealStart=0, batchSize=1, beta1=0.5, dataroot='./facades/nat_new4', dataset='pix2pix', display=5, evalIter=500, exp='sample', imageSize=1024, inputChannelSize=3, lambdaGAN=0.01, lambdaIMG=1, lrD=0.0002, lrG=0.0002, mode='B2A', ndf=64, netD='', netG='./demo_model/netG_epoch_8.pth', ngf=64, niter=400, originalSize=1024, outputChannelSize=3, poolSize=50, valBatchSize=1, valDataroot='./facades/nat_new4', wd=0.0, workers=1)
Random Seed: 7412
/home/ubuntu/anaconda3/envs/M.S.Song-pytorch/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/transforms/transforms.py:191: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
Traceback (most recent call last):
  File "demo.py", line 134, in <module>
    netG.load_state_dict(torch.load(opt.netG))
  File "/home/ubuntu/anaconda3/envs/M.S.Song-pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 522, in load_state_dict
    .format(name))
KeyError: 'unexpected key "tran_dense.dense_block1.denselayer1.norm.1.weight" in state_dict'

I put 36 h5 files into facades/nat_new4 and 400 h5 files into facades/val512.

my system info:
Linux
python 3.6.6
pytorch 0.3.1 cuda80
conda 4.3.30

How can I solve this problem?
KeyError: 'unexpected key "tran_dense.dense_block1.denselayer1.norm.1.weight" in state_dict'
Thanks for reading.

module name can't contain "."

I attempted to run python demo.py --dataroot ./facades/nat_new4 --valDataroot ./facades/nat_new4 --netG ./demo_model/netG_epoch_8.pth but got an error.

The error message is:

Traceback (most recent call last):
  File "demo.py", line 128, in <module>
    netG = net.dehaze(inputChannelSize, outputChannelSize, ngf)
  File "/home/ito/src/DCPDN/dehaze22.py", line 537, in __init__
    self.tran_est=G(input_nc=3,output_nc=3, nf=64)
  File "/home/ito/src/DCPDN/dehaze22.py", line 88, in __init__
    layer2 = blockUNet(nf, nf*2, name, transposed=False, bn=True, relu=False, dropout=False)
  File "/home/ito/src/DCPDN/dehaze22.py", line 56, in blockUNet
    block.add_module('%s.leakyrelu' % name, nn.LeakyReLU(0.2, inplace=True))
  File "/home/ito/virtualenv/pytorch/local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 169, in add_module
    raise KeyError("module name can't contain \".\"")
KeyError: 'module name can\'t contain "."'

My torch version is 0.4.0, and my torchvision version is 0.2.1.

I think that in more recent versions, '.' is no longer allowed in module names.
Please tell me your torchvision version.
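For reference, PyTorch 0.4 and later reject module names containing '.', while dehaze22.py builds names like "layer2.leakyrelu" in blockUNet. One workaround (a sketch that mirrors the structure suggested by the traceback, not the repo's exact code) is to use a different separator; note that checkpoints saved with the old dotted names then also need their keys renamed before load_state_dict, so staying on PyTorch 0.3.1 is the simpler route:

    import torch.nn as nn

    # Sketch: a blockUNet-style helper that avoids '.' in module names,
    # which PyTorch >= 0.4 rejects ("module name can't contain '.'").
    def block_unet(in_c, out_c, name, relu=False, bn=True):
        block = nn.Sequential()
        if relu:
            block.add_module('%s_relu' % name, nn.ReLU(inplace=True))
        else:
            block.add_module('%s_leakyrelu' % name, nn.LeakyReLU(0.2, inplace=True))
        block.add_module('%s_conv' % name, nn.Conv2d(in_c, out_c, 4, 2, 1, bias=False))
        if bn:
            block.add_module('%s_bn' % name, nn.BatchNorm2d(out_c))
        return block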

issue about the GAN usage in paper

Hi, I am a senior high school student in Chengdu, China, and I do research on GANs. Today I read your paper, Densely Connected Pyramid Dehazing Network.
After reading it, I still don't understand what the GAN is used for and how it works. In my opinion, the GAN in your paper acts like a loss function that optimizes the main network. Please help me understand it.
Best wishes,
Hans Yang
CDEFLS
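For readers with the same question: in this kind of setup the discriminator effectively acts as an extra, learned loss term on the generator, weighted against the pixel-wise reconstruction losses. A minimal sketch of how such an adversarial term is typically combined with a reconstruction loss (the weight name lambdaGAN and its default 0.01 come from the option dumps quoted in other issues here; the rest is an illustrative assumption, not the paper's exact joint-discriminator formulation):

    import torch
    import torch.nn.functional as F

    # Sketch: the GAN term is just one more loss added to the generator
    # objective; netD is assumed to end with a sigmoid so its output is a
    # patch-wise probability map.
    def generator_loss(netD, dehazed, target, lambda_gan=0.01):
        recon_loss = F.l1_loss(dehazed, target)   # pixel reconstruction term
        d_out = netD(dehazed)                     # patch-wise "real" scores
        adv_loss = F.binary_cross_entropy(d_out, torch.ones_like(d_out))  # fool D
        return recon_loss + lambda_gan * adv_loss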

train.py: ValueError: Expected more than 1 value per channel when training, got input size [1, 64, 1, 1]

Traceback (most recent call last):
  File "train.py", line 286, in <module>
    x_hat, tran_hat, atp_hat, dehaze21 = netG(input)
  File "/home/gavin/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/gavin/MyProject/python/image_inpainting/De-haze/DCPDN/models/dehaze22.py", line 696, in forward
    atp= self.atp_est(x)
  File "/home/gavin/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/gavin/MyProject/python/image_inpainting/De-haze/DCPDN/models/dehaze22.py", line 473, in forward
    out7 = self.layer7(out6)
  File "/home/gavin/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/gavin/.local/lib/python3.5/site-packages/torch/nn/modules/container.py", line 91, in forward
    input = module(input)
  File "/home/gavin/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/gavin/.local/lib/python3.5/site-packages/torch/nn/modules/batchnorm.py", line 66, in forward
    exponential_average_factor, self.eps)
  File "/home/gavin/.local/lib/python3.5/site-packages/torch/nn/functional.py", line 1251, in batch_norm
    raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
ValueError: Expected more than 1 value per channel when training, got input size [1, 64, 1, 1]
Exception ignored in: <bound method _DataLoaderIter.__del__ of <torch.utils.data.dataloader._DataLoaderIter object at 0x7f0b2d4afdd8>>
Traceback (most recent call last):
  File "/home/gavin/.local/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 399, in __del__
  File "/home/gavin/.local/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 378, in _shutdown_workers
  File "/usr/lib/python3.5/multiprocessing/queues.py", line 345, in get
  File "<frozen importlib._bootstrap>", line 969, in _find_and_load
  File "<frozen importlib._bootstrap>", line 954, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 887, in _find_spec
TypeError: 'NoneType' object is not iterable
Is anybody else experiencing this problem? batchSize was set to 1.

Edge loss

Hi,

Thanks for sharing. I read your paper; the edge-preserving loss contains an L2 loss, a two-directional gradient loss, and a feature edge loss.
However, in your code it seems like you used an L1 loss. Am I right?

Best
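For reference, a minimal sketch of the two-directional gradient part of such an edge-preserving loss, written here with an L1 penalty on horizontal and vertical gradients (whether L1 or L2 is used in the repo is exactly the question above, so treat that choice as an assumption):

    import torch.nn.functional as F

    def gradient_loss(pred, target):
        """Two-directional gradient loss: penalize differences between the
        horizontal and vertical gradients of prediction and target.
        An illustrative sketch, not the repo's exact code."""
        def grads(x):
            gh = x[:, :, :, 1:] - x[:, :, :, :-1]   # horizontal gradients
            gv = x[:, :, 1:, :] - x[:, :, :-1, :]   # vertical gradients
            return gh, gv
        gh_p, gv_p = grads(pred)
        gh_t, gv_t = grads(target)
        return F.l1_loss(gh_p, gh_t) + F.l1_loss(gv_p, gv_t)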

train.py RuntimeError when running with Python 3.5 and PyTorch 0.3.1

THCudaCheck FAIL file=/pytorch/torch/lib/THC/generic/THCStorage.cu line=58 error=2 : out of memory
Traceback (most recent call last):
  File "train.py", line 275, in <module>
    x_hat, tran_hat, atp_hat, dehaze21 = netG(input)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/bilibili/D4FA828F299D817A/dujuan/DCPDN-master/models/dehaze22.py", line 728, in forward
    dehaze=self.relu((self.refine1(dehaze)))
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/conv.py", line 277, in forward
    self.padding, self.dilation, self.groups)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py", line 90, in conv2d
    return f(input, weight, bias)
RuntimeError: cuda runtime error (2) : out of memory at /pytorch/torch/lib/THC/generic/THCStorage.cu:58

The result is very different from that in the paper

I downloaded the pre-trained model and the nature images, and ran the command python demo.py --dataroot ./facades/nat_new4 --valDataroot ./facades/nat_new4 --netG ./demo_model/netG_epoch_8.pth, but the results saved in result_cvpr18 are different from those in the paper: there is still obvious haze in the outputs. If you have time, could you please tell me how I can solve this problem?

train.py code error

RuntimeError: expand(torch.cuda.FloatTensor{[1, 3, 512, 512]}, size=[3, 512, 512]): the number of sizes provided (3) must be greater or equal to the number of dimensions in the tensor (4)
The fix is to change the evaluation block as follows (note the added .squeeze(0)):

    if ganIterations % opt.evalIter == 0:
        val_batch_output = torch.FloatTensor(val_input.size()).fill_(0)
        for idx in range(val_input.size(0)):
            single_img = val_input[idx,:,:,:].unsqueeze(0)
            val_inputv = Variable(single_img, volatile=True)
            x_hat_val, x_hat_val2, x_hat_val3, dehaze21 = netG(val_inputv)
            val_batch_output[idx,:,:,:].copy_(dehaze21.data.squeeze(0))  # added .squeeze(0)
        vutils.save_image(val_batch_output, '%s/generated_epoch_%08d_iter%08d.png' % \
            (opt.exp, epoch, ganIterations), normalize=False, scale_each=False)

about pytorch's version

After changing the PyTorch version to 0.3.1, the following problem occurred:

Traceback (most recent call last):
  File "demo.py", line 128, in <module>
    netG = net.dehaze(inputChannelSize, outputChannelSize, ngf)
  File "/data2/zhangzhengxi4047/dcpdn.pytorch/DCPDN/dehaze22.py", line 540, in __init__
    self.tran_dense=Dense()
  File "/data2/zhangzhengxi4047/dcpdn.pytorch/DCPDN/dehaze22.py", line 411, in __init__
    haze_class = models.densenet121(pretrained=True)
  File "build/bdist.linux-x86_64/egg/torchvision/models/densenet.py", line 27, in densenet121
  File "build/bdist.linux-x86_64/egg/torchvision/models/densenet.py", line 213, in __init__
AttributeError: 'module' object has no attribute 'kaiming_normal_'

What should I do?
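For reference, kaiming_normal_ (with the trailing underscore) only exists from PyTorch 0.4 on, so this error means the installed torchvision is newer than the torch 0.3.1 it is running with. The clean fix is to install a torchvision release that matches torch 0.3.1 (around torchvision 0.2.0); a quick-and-dirty workaround (a sketch, and only for getting past this one call) is to alias the old initializer name:

    # Sketch: on torch 0.3.1 the init functions have no trailing underscore,
    # so a newer torchvision's densenet code fails on nn.init.kaiming_normal_.
    # Aliasing it may get past this call, but other version mismatches can
    # still follow; a matching torchvision is the cleaner solution.
    import torch.nn.init as init

    if not hasattr(init, 'kaiming_normal_'):
        init.kaiming_normal_ = init.kaiming_normal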

train.py error

python3 train.py
Namespace(annealEvery=400, annealStart=0, batchSize=1, beta1=0.5, dataroot='/home/ol/DCPDN-master/facades/train512 ', dataset='pix2pix', display=5, evalIter=50, exp='/home/ol/DCPDN-master/checkpoints_new ', imageSize=256, inputChannelSize=3, lambdaGAN=0.35, lambdaIMG=1, lrD=0.0002, lrG=0.0002, mode='B2A', ndf=64, netD='', netG='/home/ol/DCPDN-master/demo_model/netG_epoch_8.pth', ngf=64, niter=400, originalSize=286, outputChannelSize=3, poolSize=50, valBatchSize=150, valDataroot='/home/ol/DCPDN-master/facades/test512 ', wd=0.0, workers=1)
Random Seed: 8382
/usr/local/lib/python3.5/dist-packages/torchvision/transforms/transforms.py:188: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
"please use transforms.Resize instead.")
Traceback (most recent call last):
  File "train.py", line 123, in <module>
    netG.load_state_dict(torch.load(opt.netG))
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 490, in load_state_dict
    .format(name))
KeyError: 'unexpected key "tran_dense.dense_block1.denselayer1.norm.1.weight" in state_dict'

train method problem

Hello, I'm a beginner with PyTorch. I read your paper and ran your code.
For training, you used stage-wise learning in the paper, but the released code only fine-tunes the whole network. How can I implement the stage-by-stage learning method? Could you please be more specific?
Thanks a lot in advance!!
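For readers asking the same thing: stage-wise learning is usually implemented by optimizing one sub-network at a time while the others are frozen, and then unfreezing everything for the joint fine-tuning that train.py performs. A minimal sketch (netG is the generator built in train.py; the attribute names tran_dense and atp_est are taken from the tracebacks quoted in other issues here, the learning rate from the lrG default, and the per-stage loss terms are assumptions):

    import torch.optim as optim

    # Sketch of stage-wise training by freezing sub-networks of netG.
    def set_requires_grad(module, flag):
        for p in module.parameters():
            p.requires_grad = flag

    # Stage 1: train only the transmission branch.
    set_requires_grad(netG, False)
    set_requires_grad(netG.tran_dense, True)
    optimizer = optim.Adam((p for p in netG.parameters() if p.requires_grad), lr=0.0002)
    # ... optimize a transmission-map loss for some epochs ...

    # Stage 2: train only the atmospheric-light branch.
    set_requires_grad(netG, False)
    set_requires_grad(netG.atp_est, True)
    optimizer = optim.Adam((p for p in netG.parameters() if p.requires_grad), lr=0.0002)
    # ... optimize an atmospheric-light loss ...

    # Stage 3: unfreeze everything and fine-tune jointly (what train.py does).
    set_requires_grad(netG, True)
    optimizer = optim.Adam(netG.parameters(), lr=0.0002)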

RuntimeError: DataLoader worker (pid 19597) is killed by signal: Killed

When I run train.py, I get an error. Is my memory too small? I have 16 GB of RAM and 8 GB of GPU memory.
Details:
Traceback (most recent call last):
  File "train.py", line 210, in <module>
    val_target_cpu, val_input_cpu = val_target_cpu.float().cuda(), val_input_cpu.float().cuda()
  File "/home/littlemonster/anaconda3/envs/pic/lib/python2.7/site-packages/torch/_utils.py", line 69, in cuda
    return new_type(self.size()).copy_(self, async)
  File "/home/littlemonster/anaconda3/envs/pic/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 175, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 19597) is killed by signal: Killed.

Looking forward to your reply!

train code with my own data

Hello, I am reading your paper and trying to run the training code with my own data, but I don't know how to create the hazy image from the ground truth, the transmission map, and the atmospheric-light map. Can you help me?
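For reference, hazy training images are typically synthesized with the atmospheric scattering model I = J * t + A * (1 - t) from the clean image J, the transmission map t, and the atmospheric light A. A minimal sketch (names and value ranges are assumptions; see also the create_train.py issues further down):

    import numpy as np

    def synthesize_haze(gt_image, trans_map, atmosphere):
        """I = J * t + A * (1 - t).

        gt_image:   clean image in [0, 1], shape (H, W, 3)
        trans_map:  transmission map in [0, 1], shape (H, W) or (H, W, 1)
        atmosphere: scalar or length-3 atmospheric light in [0, 1]
        """
        t = trans_map if trans_map.ndim == 3 else trans_map[..., None]
        A = np.reshape(np.asarray(atmosphere, dtype=np.float32), (1, 1, -1))
        return gt_image * t + A * (1.0 - t)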

what is the "A" in create_train.py?

Hi! I am going through your code and trying to generate my own training data, but I ran into a problem in create_train.py.
At line 119 there is an undefined variable "A", which confuses me a little:
rep_atmosphere = np.tile(np.reshape(A, [1, 1, 3]), [m, n, 1])
Hope you can help me out. Thank you!

Can DCPDN only handle fixed-size images?

Excuse me, I can only run the program successfully on fixed-size test images, such as 512×512 or 1024×1024. When I tested DCPDN on a 574×829 image, I got an error:
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 574 and 576 in dimension 2 at c:\anaconda2\conda-bld\pytorch_1519492996300\work\torch\lib\thc\generic/THCTensorMath.cu:111

How can I solve this problem? I look forward to your reply, and thank you in advance!
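For reference, this size mismatch comes from the encoder-decoder skip connections: after several stride-2 downsamplings, a 574-pixel dimension no longer upsamples back to exactly 574, so the concatenation fails (574 vs. 576). A common workaround (a sketch, assuming the network needs each spatial dimension to be a multiple of a power of two such as 32) is to pad the input up to that multiple and crop the output back:

    import torch.nn.functional as F

    # Sketch: pad an NCHW tensor so H and W are multiples of `multiple`
    # (32 is an assumption about the total downsampling factor), then crop
    # the network output back to the original size.
    def pad_to_multiple(x, multiple=32):
        _, _, h, w = x.size()
        pad_h = (multiple - h % multiple) % multiple
        pad_w = (multiple - w % multiple) % multiple
        x = F.pad(x, (0, pad_w, 0, pad_h), mode='reflect')
        return x, (h, w)

    # usage sketch (netG returns several outputs in this repo; crop as needed):
    # padded, (h, w) = pad_to_multiple(hazy_input)
    # x_hat, tran_hat, atp_hat, dehazed = netG(padded)
    # dehazed = dehazed[:, :, :h, :w]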

test results on Epoch_8 are not ideal

Hello, the dehazing results I obtain with netG_epoch_8 are not ideal, and some outputs even show obvious problems. I analyzed the predicted atmospheric light value A from the output: it is generally far too small, only about 0.05, so the final output image is too bright. What is the cause of this?

How to use generate_testsample.py?

Hello, I am reading your paper and trying to test your code with my own test images, but I don't know how to use generate_testsample.py. When I run it, I get the same four images (haze_image, gt_trans_map, gt_ato_map and GT are all identical).

train error

netD = net.D(inputChannelSize + outputChannelSize, ndf)
In this line of code, net.D has no class definition anywhere in dehaze.py.
How can this be resolved?

Need For Help

Has anybody managed to run the code successfully? I am trying to run it, but I encounter a lot of errors. Can anybody help me, or share your experience?
Contact: QQ 522246447, WeChat 18666885775

demo error

Hi, I tried the demo:
python demo.py --dataroot ./facades/nat_new4 --valDataroot ./facades/nat_new4 --netG ./demo_model/netG_epoch_8.pth
but it fails with an error.
Random Seed: 3661
/usr/local/lib/python2.7/dist-packages/torchvision-0.2.1-py2.7.egg/torchvision/transforms/transforms.py:191: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
Traceback (most recent call last):
  File "demo.py", line 128, in <module>
    netG = net.dehaze(inputChannelSize, outputChannelSize, ngf)
  File "/home/cdelite/DCPDN/DCPDN/dehaze22.py", line 537, in __init__
    self.tran_est=G(input_nc=3,output_nc=3, nf=64)
  File "/home/cdelite/DCPDN/DCPDN/dehaze22.py", line 88, in __init__
    layer2 = blockUNet(nf, nf*2, name, transposed=False, bn=True, relu=False, dropout=False)
  File "/home/cdelite/DCPDN/DCPDN/dehaze22.py", line 56, in blockUNet
    block.add_module('%s.leakyrelu' % name, nn.LeakyReLU(0.2, inplace=True))
  File "/usr/local/lib/python2.7/dist-packages/torch-0.4.0-py2.7-linux-x86_64.egg/torch/nn/modules/module.py", line 169, in add_module
    raise KeyError("module name can't contain \".\"")
KeyError: 'module name can\'t contain "."'
Could you tell me what causes this?

cPickle.UnpicklingError: invalid load key, '<'. demo/train execution error

Dear,
When I execute demo.py, I got stuck at netG.load_state_dict(model)

My settings:
Linux
Python 2.7
NVIDIA GPU + CUDA CuDNN (CUDA 9.0)
PyTorch 0.3.1

When I execute demo.py,
I get the output shown below:

Namespace(annealEvery=400, annealStart=0, batchSize=1, beta1=0.5, dataroot='./facades/nat_new4', dataset='pix2pix', display=5, evalIter=500, exp='sample', imageSize=1024, inputChannelSize=3, lambdaGAN=0.01, lambdaIMG=1, lrD=0.0002, lrG=0.0002, mode='B2A', ndf=64, netD='', netG='./demo_model/netG_epoch_8.pth', ngf=64, niter=400, originalSize=1024, outputChannelSize=3, poolSize=50, valBatchSize=1, valDataroot='./facades/nat_new4', wd=0.0, workers=1)
Random Seed:  7984
/home/snf4/anaconda3/envs/py27_env/lib/python2.7/site-packages/torchvision/transforms/transforms.py:208: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
  "please use transforms.Resize instead.")
Traceback (most recent call last):
  File "demo.py", line 134, in <module>
    netG.load_state_dict(torch.load(opt.netG))
  File "/home/snf4/anaconda3/envs/py27_env/lib/python2.7/site-packages/torch/serialization.py", line 267, in load
    return _load(f, map_location, pickle_module)
  File "/home/snf4/anaconda3/envs/py27_env/lib/python2.7/site-packages/torch/serialization.py", line 410, in _load
    magic_number = pickle_module.load(f)
cPickle.UnpicklingError: invalid load key, '<'.

This error also appeared when executing train.py
I tried to replace

netG = net.dehaze(inputChannelSize, outputChannelSize, ngf)
if opt.netG != '':
      netG.load_state_dict(torch.load(opt.netG))

with the code below, following this UnicodeDecodeError workaround.
However, it doesn't work in this case.

from functools import partial
import pickle
pickle.load = partial(pickle.load, encoding="latin1")
pickle.Unpickler = partial(pickle.Unpickler, encoding="latin1")
model = torch.load(opt.netG, map_location=lambda storage, loc: storage, pickle_module=pickle)
netG = net.dehaze(inputChannelSize, outputChannelSize, ngf)

if opt.netG != '':
      netG.load_state_dict(model)

Do you have any advice? Thanks in advance!
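For reference, an unpickling error with load key '<' usually means the downloaded .pth file is not a PyTorch checkpoint at all but an HTML page saved under the model's name (for example a Google Drive download-confirmation page), since '<' is the first character of an HTML document. A quick check (a sketch) before trying encoding workarounds:

    # Sketch: inspect the first bytes of the file. A real torch checkpoint is
    # a binary pickle/zip archive, while an HTML error page starts with b'<'.
    with open('./demo_model/netG_epoch_8.pth', 'rb') as f:
        head = f.read(64)
    print(head)
    # Output like b'<!DOCTYPE html>...' means the download failed and the
    # file should be re-downloaded (e.g. manually through the browser).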

create_train.py error

I found a couple of errors here:

    a = 1 - 0.5 * uniform(0, 1)

    m = gt_image.shape[0]
    n = gt_image.shape[1]

    rep_atmosphere = np.tile(np.reshape(A, [1, 1, 3]), [m, n, 1])
    ...
    h5f.create_dataset('atom', data=rep_atmosphere)

change to:

    a = 1 - 0.5 * uniform(0, 1)

    m = gt_image.shape[0]
    n = gt_image.shape[1]

    rep_atmosphere = np.tile(np.tile(a, [1, 1, 3]), [m, n, 1])
    ...
    h5f.create_dataset('ato', data=rep_atmosphere)

(i.e., use the defined variable a instead of the undefined A, and write the atmosphere dataset under the name 'ato' that the loaders read, rather than 'atom'.)

Hello, a question about the network architecture

Hello,
I have read the code of your two CVPR 2018 papers, and I would like to adopt your network architecture in my deblurring work, but there is one point about the framework I don't quite understand. In both works you use pooling layers of different sizes, and in the dehazing paper in particular the end-to-end image translation pipeline contains two encoder-decoder structures. Why is it designed this way?
Thanks,
Song Hao

About unknown error

Hi,
Thanks for your time!

I tried the training command: python train.py --dataroot ./facades/train512 --valDataroot ./facades/test512 --exp ./checkpoints_new --netG ./demo_model/netG_epoch_8.pth
But I got an unknown error; a screenshot of the traceback is shown below:
[screenshot: WechatIMG516]
Please note the last line of the screenshot: RuntimeError: Unknown error -1
What is the reason for this unknown error? Where should I make a change?
I am using PyTorch 0.4, and some code has been modified for the 0.4 API. I also use only part of the training set provided on Google Drive, not the whole set, due to limited disk space.

Thanks a lot for any help!

About “checkpoint” in the command of training part

Hi!
The training command provided by the author is: python train.py --dataroot ./facades/train512 --valDataroot ./facades/test512 --exp ./checkpoints_new --netG ./demo_model/netG_epoch_8.pth

However, is there a directory called "checkpoints_new" in the repository, or should I create it myself? In other words, how do I obtain the file (or directory) "checkpoints_new"?
Thanks a lot for any help!

train.py problem

Hello, thanks to your hints we successfully trained the network today. We ran into the following issues:

  1. In the training command "python train.py --dataroot ./facades/train512 --valDataroot ./facades/test512 --exp ./checkpoints_new --netG ./demo_model/netG_epoch_8.pth" given in the README, I mistook the test dataset folder: after decompressing the training data, the validation path should be "./facades/val512", not "test512".
  2. The training data downloads slowly; today I only got 200 samples. When training with 200 samples, I got the error "RuntimeError: unable to write to file", which turned out to be caused by insufficient shared memory. With 100 samples, training runs normally. I would like to know your training environment and configuration for reference.
    Tomorrow morning I will look at the training results and try to train with more data. I hope I don't hit any more errors. It's a great honor to communicate with you!
