carpedm20 / DCGAN-tensorflow

A tensorflow implementation of "Deep Convolutional Generative Adversarial Networks"

Home Page: http://carpedm20.github.io/faces/

License: MIT License

Python 25.03% HTML 15.03% JavaScript 49.62% CSS 10.32%
tensorflow dcgan gan generative-model


DCGAN in Tensorflow

Tensorflow implementation of Deep Convolutional Generative Adversarial Networks, a stabilized variant of Generative Adversarial Networks. The referenced Torch code can be found here.

(DCGAN architecture diagram)

  • Brandon Amos wrote an excellent blog post and image completion code based on this repo.
  • To avoid the fast convergence of the D (discriminator) network, the G (generator) network is updated twice for each D network update, which differs from the original paper (see the sketch below).
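
A minimal sketch of that update schedule, assuming optimizer ops d_optim and g_optim and feeds as in model.py (the names here are illustrative, not the repo's exact code):

    # One training step: update D once, then G twice, so the
    # discriminator does not overpower the generator early on.
    sess.run(d_optim, feed_dict={images: batch_images, z: batch_z})
    sess.run(g_optim, feed_dict={z: batch_z})
    sess.run(g_optim, feed_dict={z: batch_z})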

Online Demo

link

Prerequisites

Usage

First, download the dataset with:

$ python download.py mnist celebA

To train a model with the downloaded dataset:

$ python main.py --dataset mnist --input_height=28 --output_height=28 --train
$ python main.py --dataset celebA --input_height=108 --train --crop

To test with an existing model:

$ python main.py --dataset mnist --input_height=28 --output_height=28
$ python main.py --dataset celebA --input_height=108 --crop

Or, you can use your own dataset (without central crop) by:

$ mkdir data/DATASET_NAME
... add images to data/DATASET_NAME ...
$ python main.py --dataset DATASET_NAME --train
$ python main.py --dataset DATASET_NAME
$ # example
$ python main.py --dataset=eyes --input_fname_pattern="*_cropped.png" --train

If your dataset is located in a different root directory:

$ python main.py --dataset DATASET_NAME --data_dir DATASET_ROOT_DIR --train
$ python main.py --dataset DATASET_NAME --data_dir DATASET_ROOT_DIR
$ # example
$ python main.py --dataset=eyes --data_dir ../datasets/ --input_fname_pattern="*_cropped.png" --train

Results

result

celebA

After 6th epoch:

result3

After 10th epoch:

result4

Asian face dataset

custom_result1

custom_result2

MNIST

The MNIST code was written by @PhoenixDai.

mnist_result1

mnist_result2

mnist_result3

More results can be found here and here.

Training details

Details of the discriminator and generator losses (with a custom dataset, not celebA).

d_loss

g_loss

Details of the histograms of the discriminator's outputs on real and fake images (with a custom dataset, not celebA).

d_hist

d__hist

Related works

Author

Taehoon Kim / @carpedm20

dcgan-tensorflow's People

Contributors

anguyen8, arisliang, asispatra, carpedm20, crizcraig, duhaime, everm1nd, genekogan, iamaaditya, jdenneytwitter, johnhany, jusjusjus, larry-he, macaodha, memo, minhwanoh, mustafamustafa, ngc92, oritwoen, pengwa, phoenixdai, randl, richardjdavies, spacemunkay, uiur, viligabi, wasabithumb, woctezuma, zackarysin, zhangqianhui


dcgan-tensorflow's Issues

Depth of conv layers compared to DCGAN paper

Hi Taehoon,

Just to be sure: the number of filters in your implementation is half of what the original paper uses. For example, with the default gf_dim=64, the first layer of the generator has 512 feature maps instead of the paper's 1024.

Is this intentional?

celebA dataset is not accessible

I've been trying to download the celebA dataset for a couple of weeks now with
"python download.py celebA",
but it always fails with the same error:
urllib2.HTTPError: HTTP Error 429: Too Many Requests

Batch Normalization: setting train=False does nothing

The variable train is never used.

    def __call__(self, x, train=True):
        shape = x.get_shape().as_list()

        with tf.variable_scope(self.name) as scope:
            self.gamma = tf.get_variable("gamma", [shape[-1]],
                                initializer=tf.random_normal_initializer(1., 0.02))
            self.beta = tf.get_variable("beta", [shape[-1]],
                                initializer=tf.constant_initializer(0.))

            self.mean, self.variance = tf.nn.moments(x, [0, 1, 2])

            return tf.nn.batch_norm_with_global_normalization(
                x, self.mean, self.variance, self.beta, self.gamma, self.epsilon,
                scale_after_normalization=True)
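
One way to make the flag meaningful, offered here as a sketch rather than the repo's actual fix, is to delegate to tf.contrib.layers.batch_norm (available in the TF 0.12/1.x line), which maintains moving averages and switches between batch and population statistics:

    def __call__(self, x, train=True):
        # is_training=train now changes behavior: batch statistics
        # while training, accumulated moving averages at test time.
        return tf.contrib.layers.batch_norm(x,
                                            decay=0.9,  # momentum of the moving averages (assumed value)
                                            epsilon=self.epsilon,
                                            scale=True,
                                            is_training=train,
                                            scope=self.name)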

InvalidArgumentError: Nan in summary histogram

I try to run the face generator with default parameters on celebA; the project works well until it crashes, and I get the following error:

Traceback (most recent call last):
File "main.py", line 58, in
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv))
File "main.py", line 42, in main
dcgan.train(FLAGS)
File "/home/linzhineng/DCGAN-tensorflow/model.py", line 153, in train
feed_dict={ self.images: batch_images, self.z: batch_z })
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 372, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 636, in _run
feed_dict_string, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 708, in _do_run
target_list, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 728, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors.InvalidArgumentError: Nan in summary histogram for: HistogramSummary_1
[[Node: HistogramSummary_1 = HistogramSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](HistogramSummary_1/tag, Sigmoid/_39)]]
Caused by op u'HistogramSummary_1',
defined at:
File "main.py", line 58, in
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv))
File "main.py", line 39, in main
dataset_name=FLAGS.dataset, is_crop=FLAGS.is_crop, checkpoint_dir=FLAGS.checkpoint_dir)
File "/home/linzhineng/DCGAN-tensorflow/model.py", line 64, in __init__
self.build_model()
File "/home/linzhineng/DCGAN-tensorflow/model.py", line 85, in build_model
self.d_sum = tf.histogram_summary("d", self.D)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/logging_ops.py", line 125, in histogram_summary
tag=tag, values=values, name=scope)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_logging_ops.py", line 100, in _histogram_summary
name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 704, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2260, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1230, in __init__
self._traceback = _extract_stack()

It occurs in the first epoch. The same error occurs in the dcgan.torch project.

Any advice? I find that d_loss may suddenly drop fast; a possible reason is that the D network still converges too quickly with the default parameters.
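
The thread offers no canonical fix; a common stopgap is to detect the divergence before it reaches the summary ops and restore the last checkpoint. A minimal guard around the training step in model.py (illustrative names, not the repo's code):

    import numpy as np

    errD, errG = sess.run([d_loss, g_loss], feed_dict=feeds)
    if not (np.isfinite(errD) and np.isfinite(errG)):
        # A loss went NaN/Inf; reload the last checkpoint rather
        # than feeding non-finite values into the histogram summaries.
        saver.restore(sess, last_checkpoint_path)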

Trouble getting the mnist example to converge

I have not been able to get the mnist example to converge. Even after days of training (8-CPU system, no GPU) following the standard mnist example in the kit with no change (run as: python main.py --dataset mnist --is_train True), the d_loss/g_loss values just bounced around 1.3/0.7 with little sign of getting any better. I have not observed any error messages so far.

Is it expected that the mnist example will require a lot more computing power than I have in order to make it converge? If there are some hidden problems then what should I look for?

ValueError: could not broadcast input array from shape (64,64,3) into shape (64,64)

Hey,

in this line: https://github.com/carpedm20/DCGAN-tensorflow/blob/master/model.py#L128 I'm receiving that shape error. I am training on my own data, all images scaled to the same resolution:
data/b_dataset/scaled-1469565542658.jpg JPEG 500x500 500x500+0+0 8-bit sRGB 42.8KB 0.000u 0:00.000

Can you give me a hint as to where this might come from? I have tried with a different dataset and it works. Is there some property of the images that I have been missing in this dataset?

Thanks already,
Marcel
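
The usual cause is that at least one file decodes as single-channel (64,64) instead of (64,64,3), even when every image reports the same resolution. A quick scan for offenders, sketched with Pillow (adjust the glob pattern to your layout):

    import glob
    from PIL import Image

    for path in glob.glob("data/b_dataset/*.jpg"):
        with Image.open(path) as img:
            if img.mode != "RGB":  # e.g. "L" (grayscale) or "CMYK"
                print(path, img.mode)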

Error while downloading the dataset

When I try to download the dataset, I get a KeyError about content-length. What might be the cause of it? The error message is below:

$ python download.py --datasets celebA
Traceback (most recent call last):
  File "download.py", line 121, in <module>
    download_celeb_a('./data')
  File "download.py", line 65, in download_celeb_a
    filepath = download(url, dirpath)
  File "download.py", line 30, in download
    filesize = int(u.headers["Content-Length"])
  File "/home/guel/anaconda/lib/python2.7/rfc822.py", line 393, in __getitem__
    return self.dict[name.lower()]
KeyError: 'content-length'

Issue in training

File "main.py", line 44, in main
dcgan.train(FLAGS)
File "/home/tushar/codes/python_codes/DCGAN-tensorflow/model.py", line 131, in train
init_op = tf.global_variables_initializer()
AttributeError: 'module' object has no attribute 'global_variables_initializer'
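
This is a TensorFlow version mismatch: tf.global_variables_initializer arrived in TF 0.12 as the replacement for tf.initialize_all_variables. Either upgrade TensorFlow, or shim the call (a sketch):

    try:
        init_op = tf.global_variables_initializer()  # TF >= 0.12
    except AttributeError:
        init_op = tf.initialize_all_variables()      # older TF releases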

Is the training loss normal?

Hi, I am a newbie with tensorflow and I have some confusion about the training loss. At the beginning, d_loss is always bigger than g_loss, as shown below, which differs from the results in dcgan.torch. Is this normal?
Epoch: [ 0] [ 0/3165] time: 2.6742, d_loss: 7.06172514, g_loss: 0.00106246
Epoch: [ 0] [ 1/3165] time: 4.7950, d_loss: 6.95885229, g_loss: 0.00125823
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 2487 get requests, put_count=2449 evicted_count=1000 eviction_rate=0.40833 and unsatisfied allocation rate=0.457579
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:256] Raising pool_size_limit_ from 100 to 110
Epoch: [ 0] [ 2/3165] time: 6.1750, d_loss: 7.25680256, g_loss: 0.00104984
Epoch: [ 0] [ 3/3165] time: 7.5547, d_loss: 6.89718437, g_loss: 0.00364288
Epoch: [ 0] [ 4/3165] time: 8.9374, d_loss: 5.45913506, g_loss: 0.02250301
Epoch: [ 0] [ 5/3165] time: 10.3308, d_loss: 7.75127983, g_loss: 0.00146924
Epoch: [ 0] [ 6/3165] time: 11.7197, d_loss: 4.75904989, g_loss: 0.04762752
Epoch: [ 0] [ 7/3165] time: 13.1084, d_loss: 5.15711403, g_loss: 0.03134135
Epoch: [ 0] [ 8/3165] time: 14.4916, d_loss: 5.35569286, g_loss: 0.04407354
Epoch: [ 0] [ 9/3165] time: 15.8697, d_loss: 4.73206615, g_loss: 0.05500766
Epoch: [ 0] [ 10/3165] time: 17.2533, d_loss: 3.20903492, g_loss: 0.34848747
Epoch: [ 0] [ 11/3165] time: 18.6306, d_loss: 8.54726505, g_loss: 0.00069389
Epoch: [ 0] [ 12/3165] time: 20.0077, d_loss: 1.97646499, g_loss: 2.43928814
Epoch: [ 0] [ 13/3165] time: 21.3865, d_loss: 8.04584980, g_loss: 0.00098451
Epoch: [ 0] [ 14/3165] time: 22.7670, d_loss: 1.93407261, g_loss: 2.32639980
Epoch: [ 0] [ 15/3165] time: 24.1502, d_loss: 8.04065609, g_loss: 0.00085019
Epoch: [ 0] [ 16/3165] time: 25.5354, d_loss: 2.01121569, g_loss: 4.33200264
Epoch: [ 0] [ 17/3165] time: 26.9249, d_loss: 5.53398705, g_loss: 0.01694447
Epoch: [ 0] [ 18/3165] time: 28.3111, d_loss: 1.31883585, g_loss: 5.00018692
Epoch: [ 0] [ 19/3165] time: 29.6976, d_loss: 5.65370369, g_loss: 0.01064641

In addition, as long as d_loss stays larger than g_loss and remains stable, the training process can keep going. However, a NaN error suddenly appears during training.
Epoch: [ 0] [2344/3165] time: 3322.8894, d_loss: 1.39191413, g_loss: 0.75624585
Epoch: [ 0] [2345/3165] time: 3324.2772, d_loss: 1.60122275, g_loss: 0.36871552
Epoch: [ 0] [2346/3165] time: 3325.6905, d_loss: 1.57876384, g_loss: 0.70225191
Epoch: [ 0] [2347/3165] time: 3327.0963, d_loss: 1.39167571, g_loss: 0.59910935
Epoch: [ 0] [2348/3165] time: 3328.4929, d_loss: 1.43457556, g_loss: 0.60285681
Epoch: [ 0] [2349/3165] time: 3329.8979, d_loss: 1.47647548, g_loss: 0.66025651
Traceback (most recent call last):
File "main.py", line 59, in
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv))
File "main.py", line 43, in main
dcgan.train(FLAGS)
File "/data/project/DCGAN-tensorflow-master/model.py", line 204, in train
feed_dict={ self.z: batch_z })
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 340, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 564, in _run
feed_dict_string, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 637, in _do_run
target_list, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 659, in _do_call
e.code)
tensorflow.python.framework.errors.InvalidArgumentError: Nan in summary histogram for: HistogramSummary_2
[[Node: HistogramSummary_2 = HistogramSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](HistogramSummary_2/tag, Sigmoid_1/_126)]]
Caused by op u'HistogramSummary_2', defined at:
File "main.py", line 59, in
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv))
File "main.py", line 40, in main
dataset_name=FLAGS.dataset, is_crop=FLAGS.is_crop, checkpoint_dir=FLAGS.checkpoint_dir)
File "/data/project/DCGAN-tensorflow-master/model.py", line 69, in __init__
self.build_model()
File "/data/project/DCGAN-tensorflow-master/model.py", line 99, in build_model
self.d__sum = tf.histogram_summary("d_", self.D_)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/logging_ops.py", line 113, in histogram_summary
tag=tag, values=values, name=scope)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_logging_ops.py", line 55, in _histogram_summary
name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 655, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2154, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1154, in __init__
self._traceback = _extract_stack()

Any advice?
Thanks and best regards!

Grayscale images: is the loss normal?

I use 64×64 grayscale images (not RGB) of Chinese characters. After a period of training, g_loss becomes much bigger than d_loss. Is this normal?

(attached: loss plot)

MNIST training seems not to work

When I pass in mnist as the dataset argument, I get a failure in the definition of the generator, as y is None.
I figured that maybe I should just pass in self.y, as that exists if y_dim is not None, but this doesn't fix the problem either.

Am I missing something silly or is MNIST training just not in a working state?
Thanks for open sourcing this!

I can give more details if needed.

Load Failure around Checkpoint Reading

Hi! Bumping into some problems trying to run this.
If helpful, the setup is a GTX 1080 using CUDA 7.5 with TensorFlow v0.8 on Ubuntu 14.04.

With a folder of my own images I'm getting:

I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcurand.so locally
{'batch_size': 64,
 'beta1': 0.5,
 'checkpoint_dir': 'checkpoint',
 'dataset': 'doom2graphics',
 'epoch': 25,
 'image_size': 108,
 'is_crop': False,
 'is_train': True,
 'learning_rate': 0.0002,
 'sample_dir': 'samples',
 'train_size': inf,
 'visualize': False}
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties: 
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.7335
pciBusID 0000:01:00.0
Total memory: 7.92GiB
Free memory: 6.88GiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0:   Y 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:755] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0)
 [*] Reading checkpoints...
 [!] Load failed...

And then I get a little more info at the end when I try with the celebA set.

I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcurand.so locally
{'batch_size': 64,
 'beta1': 0.5,
 'checkpoint_dir': 'checkpoint',
 'dataset': 'celebA',
 'epoch': 25,
 'image_size': 108,
 'is_crop': True,
 'is_train': True,
 'learning_rate': 0.0002,
 'sample_dir': 'samples',
 'train_size': inf,
 'visualize': False}
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties: 
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.7335
pciBusID 0000:01:00.0
Total memory: 7.92GiB
Free memory: 6.87GiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0:   Y 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:755] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0)
 [*] Reading checkpoints...
 [!] Load failed...
F tensorflow/stream_executor/cuda/cuda_dnn.cc:427] could not set cudnn filter descriptor: CUDNN_STATUS_BAD_PARAM
F tensorflow/stream_executor/cuda/cuda_dnn.cc:427] could not set cudnn filter descriptor: CUDNN_STATUS_BAD_PARAM
Aborted (core dumped)

I'm a noob to this universe - any thoughts? Thank you for your time!

what to expect when we train?

I'm just doing the training on mnist, and a bunch of stuff that looks like this scrolls past:

Epoch: [ 1] [ 775/1093] time: 155.6176, d_loss: 1.39283180, g_loss: 0.67132330

How long should I wait? Is the model being saved periodically, or does the script end after a set amount, say 20 epochs, and then we can do the testing part? So far I'm just not sure what to expect. Should I leave it running for a day? Does it automatically save the ckpt models? Should I ctrl-C out in an hour?

It terminated by itself after a while.

Epoch: [24] [1092/1093] time: 2298.4984, d_loss: 1.39633536, g_loss: 0.64348722

I'm not sure what to do now actually. I tried to do the test script and nothing happens...

/DCGAN-tensorflow$ python main.py --dataset mnist
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so locally
{'batch_size': 64,
 'beta1': 0.5,
 'c_dim': 3,
 'checkpoint_dir': 'checkpoint',
 'dataset': 'mnist',
 'epoch': 25,
 'image_size': 108,
 'is_crop': False,
 'is_train': False,
 'learning_rate': 0.0002,
 'output_size': 64,
 'sample_dir': 'samples',
 'train_size': inf,
 'visualize': False}
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:925] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties: 
name: GeForce GTX 980M
major: 5 minor: 2 memoryClockRate (GHz) 1.1265
pciBusID 0000:01:00.0
Total memory: 8.00GiB
Free memory: 7.64GiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0:   Y 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 980M, pci bus id: 0000:01:00.0)
 [*] Reading checkpoints...

It just stops and nothing happens. What should I expect?
thanks!

AttributeError: 'module' object has no attribute 'summary'

I have to believe that this is user error, but I've been bashing my head against it for a bit now. When I try to run the mnist toy example, I get AttributeError: 'module' object has no attribute 'summary'

Running on python 2.7.10 and TensorFlow 0.12.0rc1

Full text below:
python main.py --dataset mnist --is_train True
{'batch_size': 64,
 'beta1': 0.5,
 'c_dim': 3,
 'checkpoint_dir': 'checkpoint',
 'dataset': 'mnist',
 'epoch': 25,
 'image_size': 108,
 'is_crop': False,
 'is_train': True,
 'learning_rate': 0.0002,
 'output_size': 64,
 'sample_dir': 'samples',
 'train_size': inf,
 'visualize': False}
can't determine number of CPU cores: assuming 4
I tensorflow/core/common_runtime/local_device.cc:25] Local device intra op parallelism threads: 4
can't determine number of CPU cores: assuming 4
I tensorflow/core/common_runtime/local_session.cc:45] Local session inter op parallelism threads: 4
Traceback (most recent call last):
  File "main.py", line 60, in <module>
    tf.app.run()
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/platform/default/_app.py", line 11, in run
    sys.exit(main(sys.argv))
  File "main.py", line 38, in main
    dataset_name=FLAGS.dataset, is_crop=FLAGS.is_crop, checkpoint_dir=FLAGS.checkpoint_dir, sample_dir=FLAGS.sample_dir)
  File "/Users/ckruse/Documents/Python/DCGAN-tensorflow-master/model.py", line 67, in __init__
    self.build_model()
  File "/Users/ckruse/Documents/Python/DCGAN-tensorflow-master/model.py", line 80, in build_model
    self.z_sum = tf.summary.histogram("z", self.z)

changing image_shape causes shape mismatch

Having trouble understanding how to change the resolution of the generated images; it appears they are hardcoded to 64. Changing image_shape from [64, 64, 3] to [96, 96, 3] and transform/center_crop to 96, I get the error pasted below.

I know I need to change the architecture of the generator/discriminator to operate on different-sized images, but it's not as simple as switching the 64s to 96s. Is there a straightforward way to parameterize main.py so it can generate images of different resolutions?

Traceback (most recent call last):
File "main.py", line 58, in
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv))
File "main.py", line 39, in main
dataset_name=FLAGS.dataset, is_crop=FLAGS.is_crop, checkpoint_dir=FLAGS.checkpoint_dir)
File "/home/ubuntu/DCGAN-tensorflow/model.py", line 64, in __init__
self.build_model()
File "/home/ubuntu/DCGAN-tensorflow/model.py", line 83, in build_model
self.D_, self.D_logits_ = self.discriminator(self.G, reuse=True)
File "/home/ubuntu/DCGAN-tensorflow/model.py", line 195, in discriminator
h4 = linear(tf.reshape(h3, [self.batch_size, -1]), 1, 'd_h3_lin')
File "/home/ubuntu/DCGAN-tensorflow/ops.py", line 116, in linear
tf.random_normal_initializer(stddev=stddev))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 830, in get_variable
custom_getter=custom_getter)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 673, in get_variable
custom_getter=custom_getter)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 217, in get_variable
validate_shape=validate_shape)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 202, in _true_getter
caching_device=caching_device, validate_shape=validate_shape)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 499, in _get_single_variable
found_var.get_shape()))
ValueError: Trying to share variable d_h3_lin/Matrix, but specified shape (8192, 1) and found shape (18432, 1).
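
The numbers in the error are consistent with the discriminator's four stride-2 convolutions, which shrink the spatial size by a factor of 16: with the default df_dim=64, a 64×64 input reaches h3 at 4×4×512 = 8192 units, while a 96×96 input reaches 6×6×512 = 18432, so the d_h3_lin matrix created for one size cannot be shared with the other. (The layer sizes here are inferred from the traceback and the defaults, not verified by running the code.)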

Intermediate Layer is Ignored

In lines 344-345 of model.py, we calculate h0 as follows:

h0 = tf.nn.relu(self.g_bn0(linear(z, self.gfc_dim, 'g_h0_lin')))
h0 = tf.concat(1, [h0, y])

But this output, h0, is not used anywhere in the generator, as far as I can tell.

It seems like in the following line:

h1 = tf.nn.relu(self.g_bn1(linear(z, self.gf_dim*2*s4*s4, 'g_h1_lin'), train=False))

We ought to replace z with h0? Am I missing something?
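
If the intent is a conditional generator, the suggested fix would thread h0 (which already has y concatenated) into the next layer. A sketch of the proposed change, not a confirmed patch from the repo:

    h0 = tf.nn.relu(self.g_bn0(linear(z, self.gfc_dim, 'g_h0_lin')))
    h0 = tf.concat(1, [h0, y])  # TF 0.x argument order: (axis, values)

    # Feed h0, not z, so the first hidden layer is actually used:
    h1 = tf.nn.relu(self.g_bn1(linear(h0, self.gf_dim*2*s4*s4, 'g_h1_lin')))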

'module' object has no attribute 'imread'

Traceback (most recent call last):
File "main.py", line 60, in
tf.app.run()
File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv))
File "main.py", line 44, in main
dcgan.train(FLAGS)
File "/home/caocao/DCGAN-tensorflow/model.py", line 150, in train
sample = [get_image(sample_file, self.image_size, is_crop=self.is_crop, resize_w=self.output_size, is_grayscale = self.is_grayscale) for sample_file in sample_files]
File "/home/caocao/DCGAN-tensorflow/model.py", line 150, in
sample = [get_image(sample_file, self.image_size, is_crop=self.is_crop, resize_w=self.output_size, is_grayscale = self.is_grayscale) for sample_file in sample_files]
File "/home/caocao/DCGAN-tensorflow/utils.py", line 18, in get_image
return transform(scipy.misc.imread(image_path, is_grayscale), image_size, is_crop, resize_w)

How to create your own dataset?

I would like to create my own dataset; where can I find information about how to do that?
And what is the best way to auto-crop 1000 face photos?
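
The repo doesn't ship a cropping tool; one common approach is OpenCV's Haar cascade face detector. A sketch, assuming a recent opencv-python build where cv2.data.haarcascades is available and assuming the source photos sit in photos/ (adjust to taste):

    import glob
    import os

    import cv2

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    os.makedirs("data/faces", exist_ok=True)
    for i, path in enumerate(glob.glob("photos/*.jpg")):
        img = cv2.imread(path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for j, (x, y, w, h) in enumerate(faces):
            # Crop each detected face and resize to a fixed size
            # (108 here, matching the celebA input size).
            crop = cv2.resize(img[y:y+h, x:x+w], (108, 108))
            cv2.imwrite("data/faces/%05d_%d.png" % (i, j), crop)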

tensorflow version error

When I upgrade tensorflow to the latest version and type python main.py --dataset mnist --is_train True, I get:
Traceback (most recent call last):
File "main.py", line 60, in
tf.app.run()
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 43, in run
sys.exit(main(sys.argv[:1] + flags_passthrough))
File "main.py", line 38, in main
dataset_name=FLAGS.dataset, is_crop=FLAGS.is_crop, checkpoint_dir=FLAGS.checkpoint_dir, sample_dir=FLAGS.sample_dir)
File "/Users/apple/Desktop/test/DCGAN-tensorflow/model.py", line 67, in __init__
self.build_model()
File "/Users/apple/Desktop/test/DCGAN-tensorflow/model.py", line 86, in build_model
self.sampler = self.sampler(self.z, self.y)
File "/Users/apple/Desktop/test/DCGAN-tensorflow/model.py", line 354, in sampler
h0 = tf.nn.relu(self.g_bn0(linear(z, self.gfc_dim, 'g_h0_lin')))
File "/Users/apple/Desktop/test/DCGAN-tensorflow/ops.py", line 34, in __call__
ema_apply_op = self.ema.apply([batch_mean, batch_var])
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/moving_averages.py", line 391, in apply
self._averages[var], var, decay, zero_debias=zero_debias))
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/moving_averages.py", line 70, in assign_moving_average
update_delta = _zero_debias(variable, value, decay)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/moving_averages.py", line 177, in _zero_debias
trainable=False)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 1024, in get_variable
custom_getter=custom_getter)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 850, in get_variable
custom_getter=custom_getter)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 346, in get_variable
validate_shape=validate_shape)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 331, in _true_getter
caching_device=caching_device, validate_shape=validate_shape)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 650, in _get_single_variable
"VarScope?" % name)
ValueError: Variable g_bn0/g_bn0_2/g_bn0_2/moments_1/moments_1/mean/ExponentialMovingAverage/biased does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?

The previous tensorflow was OK. Would you please tell me which version you chose?

attribute error while loading model for test

python main.py --dataset mnist --visualize True

[*] Reading checkpoints...
Traceback (most recent call last):
File "main.py", line 60, in
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv))
File "main.py", line 49, in main
to_json("./web/js/layers.js", [dcgan.h0_w, dcgan.h0_b, dcgan.g_bn0],
AttributeError: 'DCGAN' object has no attribute 'h0_w'

Possible issue between line 300 to 320, for generator based on y

h0 = tf.nn.relu(self.g_bn0(linear(z, self.gfc_dim, 'g_h0_lin')))
h0 = tf.concat(1, [h0, y])

h1 = tf.nn.relu(self.g_bn1(linear(z, self.gf_dim*2*s4*s4, 'g_h1_lin')))
h1 = tf.reshape(h1, [self.batch_size, s4, s4, self.gf_dim * 2])

h1 directly uses a linear layer on z rather than h0. I guess this is a bug? :) The same goes for the following layers.

failed to visualize

When I set FLAGS.visualize to True, an error occurs as follows. How can I fix it, please?

Traceback (most recent call last):
File "main.py", line 60, in
tf.app.run()
File "/home/clay/anaconda2/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 43, in run
sys.exit(main(sys.argv[:1] + flags_passthrough))
File "main.py", line 57, in main
visualize(sess, dcgan, FLAGS, OPTION)
File "/home/clay/code/DCGAN-tensorflow/utils.py", line 171, in visualize
samples = sess.run(dcgan.sampler, feed_dict={dcgan.z: z_sample})
File "/home/clay/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 766, in run
run_metadata_ptr)
File "/home/clay/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 964, in _run
feed_dict_string, options, run_metadata)
File "/home/clay/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1014, in _do_run
target_list, options, run_metadata)
File "/home/clay/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1034, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'y' with dtype float and shape [64,10]
[[Node: y = Placeholder[dtype=DT_FLOAT, shape=[64,10], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
[[Node: Sigmoid_2/_75 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_228_Sigmoid_2", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Caused by op u'y', defined at:
File "main.py", line 60, in
tf.app.run()
File "/home/clay/anaconda2/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 43, in run
sys.exit(main(sys.argv[:1] + flags_passthrough))
File "main.py", line 38, in main
dataset_name=FLAGS.dataset, is_crop=FLAGS.is_crop, checkpoint_dir=FLAGS.checkpoint_dir, sample_dir=FLAGS.sample_dir)
File "/home/clay/code/DCGAN-tensorflow/model.py", line 67, in __init__
self.build_model()
File "/home/clay/code/DCGAN-tensorflow/model.py", line 71, in build_model
self.y= tf.placeholder(tf.float32, [self.batch_size, self.y_dim], name='y')
File "/home/clay/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/array_ops.py", line 1633, in placeholder
name=name)
File "/home/clay/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 2043, in _placeholder
name=name)
File "/home/clay/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
op_def=op_def)
File "/home/clay/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2371, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/home/clay/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1258, in __init__
self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'y' with dtype float and shape [64,10]
[[Node: y = Placeholder[dtype=DT_FLOAT, shape=[64,10], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
[[Node: Sigmoid_2/_75 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_228_Sigmoid_2", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

lrelu complexity?

Why did you implement lrelu the way you did?

Would this not be more efficient?

tf.maximum(x, leak*x)
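
For 0 < leak < 1 the two forms agree: when x >= 0, x >= leak*x, and when x < 0, leak*x > x, so the maximum picks the right branch in both cases. A drop-in sketch of the questioner's suggestion:

    def lrelu(x, leak=0.2, name="lrelu"):
        # Equivalent to the leaky ReLU for 0 < leak < 1.
        return tf.maximum(x, leak * x)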

AttributeError: 'module' object has no attribute 'imread'

Hello,
I just installed Tensorflow and it seems that there's a problem. I got an error when I executed the script with default parameters:
python main.py --dataset celebA --is_train True --is_crop True

Configuration:

  • OS: Ubuntu 16.04, 64 bits
  • Python version 2.7
  • Anaconda 2-4.0.0
  • Tensorflow 0.8.0 (installed in conda env)

Output:

{'batch_size': 64,
 'beta1': 0.5,
 'checkpoint_dir': 'checkpoint',
 'dataset': 'celebA',
 'epoch': 25,
 'image_size': 108,
 'is_crop': True,
 'is_train': True,
 'learning_rate': 0.0002,
 'sample_dir': 'samples',
 'train_size': inf}
WARNING:tensorflow:Passing a `GraphDef` to the SummaryWriter is deprecated. Pass a `Graph` object instead, such as `sess.graph`.
Traceback (most recent call last):
  File "main.py", line 54, in <module>
    tf.app.run()
  File "/home/amine/anaconda2/envs/tensorflow_cpu/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 30, in run
    sys.exit(main(sys.argv))
  File "main.py", line 39, in main
    dcgan.train(FLAGS)
  File "/home/amine/workspace/DCGAN-tensorflow/model.py", line 125, in train
    sample = [get_image(sample_file, self.image_size, is_crop=self.is_crop) for sample_file in sample_files]
  File "/home/amine/workspace/DCGAN-tensorflow/utils.py", line 17, in get_image
    return transform(imread(image_path), image_size, is_crop)
  File "/home/amine/workspace/DCGAN-tensorflow/utils.py", line 23, in imread
    return scipy.misc.imread(path).astype(np.float)
AttributeError: 'module' object has no attribute 'imread'
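
scipy.misc.imread is a thin wrapper around PIL; it raises this AttributeError when Pillow isn't importable from the active environment, and it was removed from SciPy entirely in 1.2. Two remedies, offered as suggestions rather than the maintainer's fix: install Pillow into the conda env (pip install pillow), or route reads through imageio, e.g. a drop-in replacement in utils.py:

    import imageio
    import numpy as np

    def imread(path):
        # imageio.imread returns a uint8 array; convert to the float
        # dtype the rest of utils.py expects.
        return np.asarray(imageio.imread(path)).astype(np.float32)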

Wrong samples when using GPU

Previously, I was using the code with CPU support and everything was working fine. Now, after adding GPU support on tensorflow, the created samples are completely wrong (white noise images).

Anyone had any similar issue or an idea about what is going on?

What's mean of "y_dim"

Hi,
I just read the paper and your code, and I found "y_dim" in model.py's generator function. Is that a label for the vector z?
And, as in "LAPGAN", is y_dim a conditioning variable whose aim is to control the generator model?

Failing to load data during Training on CelebA

Hello,

When I execute the program in the test mode, it works:
python main.py --dataset celebA --is_crop True

But when I execute it in the training mode, it fails to load the data:
python main.py --dataset celebA --is_train True --is_crop True

Here's the error message:

python main.py --dataset celebA --is_train True --is_crop True
{'batch_size': 64,
 'beta1': 0.5,
 'checkpoint_dir': 'checkpoint',
 'dataset': 'celebA',
 'epoch': 25,
 'image_size': 108,
 'is_crop': True,
 'is_train': True,
 'learning_rate': 0.0002,
 'sample_dir': 'samples',
 'train_size': inf}
WARNING:tensorflow:Passing a `GraphDef` to the SummaryWriter is deprecated. Pass a `Graph` object instead, such as `sess.graph`.
 [*] Reading checkpoints...
 [!] Load failed...
Traceback (most recent call last):
  File "main.py", line 58, in <module>
    tf.app.run()
  File "/home/amine/anaconda2/envs/tensorflow_cpu/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 30, in run
    sys.exit(main(sys.argv))
  File "main.py", line 40, in main
    dcgan.train(FLAGS)
  File "/home/amine/workspace/DCGAN-tensorflow/model.py", line 150, in train
    feed_dict={ self.images: batch_images, self.z: batch_z })
  File "/home/amine/anaconda2/envs/tensorflow_cpu/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 340, in run
    run_metadata_ptr)
  File "/home/amine/anaconda2/envs/tensorflow_cpu/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 564, in _run
    feed_dict_string, options, run_metadata)
  File "/home/amine/anaconda2/envs/tensorflow_cpu/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 637, in _do_run
    target_list, options, run_metadata)
  File "/home/amine/anaconda2/envs/tensorflow_cpu/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 659, in _do_call
    e.code)
tensorflow.python.framework.errors.InvalidArgumentError: tags and values not the same shape: [] != [64,1] (tag 'd_loss_real')
     [[Node: ScalarSummary = ScalarSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](ScalarSummary/tags, logistic_loss)]]
Caused by op u'ScalarSummary', defined at:
  File "main.py", line 58, in <module>
    tf.app.run()
  File "/home/amine/anaconda2/envs/tensorflow_cpu/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 30, in run
    sys.exit(main(sys.argv))
  File "main.py", line 37, in main
    dataset_name=FLAGS.dataset, is_crop=FLAGS.is_crop, checkpoint_dir=FLAGS.checkpoint_dir)
  File "/home/amine/workspace/DCGAN-tensorflow/model.py", line 62, in __init__
    self.build_model()
  File "/home/amine/workspace/DCGAN-tensorflow/model.py", line 91, in build_model
    self.d_loss_real_sum = tf.scalar_summary("d_loss_real", self.d_loss_real)
  File "/home/amine/anaconda2/envs/tensorflow_cpu/lib/python2.7/site-packages/tensorflow/python/ops/logging_ops.py", line 234, in scalar_summary
    val = gen_logging_ops._scalar_summary(tags=tags, values=values, name=scope)
  File "/home/amine/anaconda2/envs/tensorflow_cpu/lib/python2.7/site-packages/tensorflow/python/ops/gen_logging_ops.py", line 181, in _scalar_summary
    name=name)
  File "/home/amine/anaconda2/envs/tensorflow_cpu/lib/python2.7/site-packages/tensorflow/python/ops/op_def_library.py", line 655, in apply_op
    op_def=op_def)
  File "/home/amine/anaconda2/envs/tensorflow_cpu/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2154, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/amine/anaconda2/envs/tensorflow_cpu/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1154, in __init__
    self._traceback = _extract_stack()

the loss is not correct

In this block:

    self.d_loss_real = binary_cross_entropy_with_logits(tf.ones_like(self.D), self.D)
    self.d_loss_fake = binary_cross_entropy_with_logits(tf.zeros_like(self.D_), self.D_)
    self.g_loss = binary_cross_entropy_with_logits(tf.ones_like(self.D_), self.D_)

You are using self.D and self.D_ as the logits, but they are probabilities, not logits. The logits are the values that come before the sigmoid.

Your binary_cross_entropy_with_logits function is just entirely wrong and should probably be deleted. You should use tf.nn.sigmoid_cross_entropy_with_logits instead.
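
A sketch of the corrected losses using the logits the model already computes (D_logits and D_logits_ in model.py); note that the keyword names logits=/labels= are the TF 1.0+ signature, while pre-1.0 releases take positional (logits, targets):

    self.d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        logits=self.D_logits, labels=tf.ones_like(self.D)))
    self.d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        logits=self.D_logits_, labels=tf.zeros_like(self.D_)))
    self.g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        logits=self.D_logits_, labels=tf.ones_like(self.D_)))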

AttributeError: __float__

File "main.py", line 61, in
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv))
File "main.py", line 45, in main
dcgan.train(FLAGS)
File "/home/caocao/DCGAN-tensorflow/model.py", line 150, in train
sample = [get_image(sample_file, self.image_size, is_crop=self.is_crop, resize_w=self.output_size, is_grayscale = self.is_grayscale) for sample_file in sample_files]
File "/home/caocao/DCGAN-tensorflow/utils.py", line 22, in get_image
return transform(imread(image_path, is_grayscale), image_size, is_crop, resize_w)
File "/home/caocao/DCGAN-tensorflow/utils.py", line 31, in imread
print(scipy.misc.imread(path).astype(np.float32))
File "/usr/local/lib/python2.7/dist-packages/PIL/Image.py", line 512, in __getattr__
raise AttributeError(name)

'python main.py --dataset celebA --is_crop True' failed

envy@ub1404:/media/envy/data1t/os_prj/github/DCGAN-tensorflow$ python main.py --dataset celebA --is_crop True
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcurand.so locally
{'batch_size': 64,
'beta1': 0.5,
'checkpoint_dir': 'checkpoint',
'dataset': 'celebA',
'epoch': 25,
'image_size': 108,
'is_crop': True,
'is_train': False,
'learning_rate': 0.0002,
'sample_dir': 'samples',
'train_size': inf}
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:900] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties:
name: GeForce GTX 950M
major: 5 minor: 0 memoryClockRate (GHz) 1.124
pciBusID 0000:01:00.0
Total memory: 4.00GiB
Free memory: 3.63GiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:755] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 950M, pci bus id: 0000:01:00.0)
[*] Reading checkpoints...
Traceback (most recent call last):
File "main.py", line 54, in
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv))
File "main.py", line 47, in main
[dcgan.h4_w, dcgan.h4_b, None])
File "/media/envy/data1t/os_prj/github/DCGAN-tensorflow/utils.py", line 69, in to_json
B = b.eval()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variables.py", line 436, in eval
return self._variable.eval(session=session)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 502, in eval
return _eval_using_default_session(self, feed_dict, self.graph, session)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 3334, in _eval_using_default_session
return session.run(tensors, feed_dict)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 340, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 564, in _run
feed_dict_string, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 637, in _do_run
target_list, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 659, in _do_call
e.code)
tensorflow.python.framework.errors.FailedPreconditionError: Attempting to use uninitialized value g_h0_lin/bias
[[Node: g_h0_lin/bias/_1 = _SendT=DT_FLOAT, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_1655_g_h0_lin/bias", _device="/job:localhost/replica:0/task:0/gpu:0"]]
[[Node: g_h0_lin/bias/_2 = _Recv_start_time=0, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_1655_g_h0_lin/bias", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]]
envy@ub1404:/media/envy/data1t/os_prj/github/DCGAN-tensorflow$

spatial_conv

The function spatial_conv used in your discriminator is not defined.

Can you explain the code in this project?

if np.mod(counter, 100) == 1:
    if config.dataset == 'mnist':
        samples, d_loss, g_loss = self.sess.run(
            [self.sampler, self.d_loss, self.g_loss],
            feed_dict={self.z: sample_z, self.images: sample_images, self.y: batch_labels}
        )
    else:

When sampling, the feed for y is batch_labels during mnist training?
I think it should be sample_labels.

error in training

self.z_sum = tf.summary.histogram("z", self.z)
AttributeError: 'module' object has no attribute 'histogram'
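
As with the global_variables_initializer error above, this is a version mismatch: the tf.summary module arrived in TF 0.12 and replaced tf.histogram_summary and friends. Either upgrade TensorFlow or shim the old name (a sketch):

    try:
        histogram_summary = tf.summary.histogram  # TF >= 0.12
    except AttributeError:
        histogram_summary = tf.histogram_summary  # older TF releases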

What can one do with an existing model?

I'm curious about how one might use this without --is_train. I can load a model successfully, but it's not clear if there is anything in place to use it.

For instance, I would like to explore the latent space in a given region and show its linearity. Is there any simple way to input different values and output images?
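
Nothing in the repo exposes this directly; below is a minimal sketch of a latent-space walk against a restored unconditional model, assuming a DCGAN instance dcgan and session sess as set up in main.py (names are illustrative):

    import numpy as np

    z_a = np.random.uniform(-1, 1, size=(dcgan.z_dim,))
    z_b = np.random.uniform(-1, 1, size=(dcgan.z_dim,))

    # Interpolate linearly from z_a to z_b; a well-trained G maps
    # the straight line to a smooth sequence of images.
    alphas = np.linspace(0.0, 1.0, dcgan.batch_size)
    z_batch = np.stack([(1 - a) * z_a + a * z_b for a in alphas])

    samples = sess.run(dcgan.sampler, feed_dict={dcgan.z: z_batch})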

Generating Photo-realistic Avatars with DCGAN

@carpedm20 I want to thank you for making your code available. I know you said that your code is kind of old, but I nonetheless got a lot of mileage out of it and was able to do some interesting research on using DCGAN to create photo-realistic avatars from a small amount of images or video.

Here are some quick results showing synthesized Adele expressions from a small video segment.

(sample images: a synthesized smile; a synthesized growl)

My full report can be found here: http://www.terraai.org/avatars/

ValueError: Variable d_h0_conv/w/Adam/ does not exist

When running on the mnist dataset, I got this error:
ValueError: Variable d_h0_conv/w/Adam/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?
Can you help me explain it?

Thanks

Why do I have this problem?

Sorry to bother you; I have had this problem for a few days and still cannot find a solution. My tensorflow is 0.11. When I run python main.py --dataset mnist --is_train True, this problem occurs:
ValueError: Variable g_bn0/g_bn0_2/g_bn0_2/moments_1/moments_1/mean/ExponentialMovingAverage/biased does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?
Can you help me? Thank you.
