ganbasics's Issues

expects 1 input(s), but it received 2 input tensors

Hi,

Love the videos, I'm learning a lot. However, I tried to combine the video about fashionGAN with the one about image classification on a custom dataset. I'm getting the following error, and I'm wondering if I need to dive deeper into the technicalities or if I'm just missing something.


---------------------------------------------------------------------------

ValueError                                Traceback (most recent call last)

<ipython-input-61-b0f21fd11f47> in <cell line: 4>()
      2 # data = ds.as_numpy_iterator()
      3 
----> 4 hist = fashgan.fit(train, epochs=1, callbacks=[ModelMonitor()])

1 frames

/usr/local/lib/python3.10/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
   1145           except Exception as e:  # pylint:disable=broad-except
   1146             if hasattr(e, "ag_error_metadata"):
-> 1147               raise e.ag_error_metadata.to_exception(e)
   1148             else:
   1149               raise

ValueError: in user code:

    File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1021, in train_function  *
        return step_function(self, iterator)
    File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1010, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1000, in run_step  **
        outputs = model.train_step(data)
    File "<ipython-input-31-1f21f98ff43d>", line 29, in train_step
        yhat_real = self.discriminator(real_images, training=True)
    File "/usr/local/lib/python3.10/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
        raise e.with_traceback(filtered_tb) from None
    File "/usr/local/lib/python3.10/dist-packages/keras/engine/input_spec.py", line 200, in assert_input_compatibility
        raise ValueError(f'Layer "{layer_name}" expects {len(input_spec)} input(s),'

    ValueError: Layer "sequential_1" expects 1 input(s), but it received 2 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, 256, 256, 1) dtype=float32>, <tf.Tensor 'IteratorGetNext:1' shape=(None,) dtype=int32>]

Do you have any idea?
The entire notebook is available at https://colab.research.google.com/drive/1CofIUYzR7xUsjWfWjDsQ2_scut6xtL8V?usp=sharing

thanks!
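An editorial note, not from the thread: the error message shows the dataset yielding (image, label) pairs, while the GAN's train_step hands the whole batch to the discriminator as real images. A minimal sketch of one fix, assuming train is a tf.data.Dataset of (image, label) tuples:

    # Drop the labels so each batch is a plain image tensor, as train_step expects.
    train = train.map(lambda image, label: image)
    hist = fashgan.fit(train, epochs=1, callbacks=[ModelMonitor()])

Alternatively, the tuple could be unpacked inside train_step, e.g. real_images, _ = batch.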

UnimplementedError

UnimplementedError                        Traceback (most recent call last)

----> 1 img = generator.predict(np.random.randn(4,128,1))

1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     53     ctx.ensure_initialized()
     54     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 55                                         inputs, attrs, num_outputs)
     56   except core._NotOkStatusException as e:
     57     if name is not None:

UnimplementedError: Graph execution error:

Detected at node 'sequential/conv2d/Conv2D' defined at (most recent call last):
  [long IPython / Tornado / Keras call stack elided; the failing call is
   img = generator.predict(np.random.randn(4,128,1))]
Node: 'sequential/conv2d/Conv2D'
DNN library is not found.
[[{{node sequential/conv2d/Conv2D}}]] [Op:__inference_predict_function_141585]
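An editorial note, not from the thread: "DNN library is not found" usually points to a CUDA/cuDNN mismatch in the Colab runtime rather than a bug in the notebook. A minimal workaround sketch, assuming the model itself is fine, is to run the forward pass on CPU, which does not need cuDNN:

    import numpy as np
    import tensorflow as tf

    # Force the prediction onto the CPU so the missing GPU DNN library is never used.
    with tf.device('/CPU:0'):
        img = generator.predict(np.random.randn(4, 128, 1))

Restarting the runtime, or reinstalling a TensorFlow build that matches the runtime's CUDA version, is the more permanent fix.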

generator and discriminator input shape issue

def build_generator():
    model = Sequential()
    # Takes in random values and reshapes it to 7x7x128
    # Beginnings of a generated image
    model.add(Dense(7*7*128, input_dim=128))
    model.add(LeakyReLU(0.2))
    model.add(Reshape((7,7,128)))

Model: "sequential_5"
Layer (type)        Output Shape       Param #
dense_5 (Dense)     (None, 6272)       809088

def build_discriminator():
    model = Sequential()
    model.add(Conv2D(32, 5, input_shape=(28,28,1)))
    model.add(LeakyReLU(.2))
    model.add(Dropout(.4))

Model: "sequential_8"
Layer (type)        Output Shape          Param #
conv2d_33 (Conv2D)  (None, 24, 24, 32)    832

Based on the two summaries, the generator expects a rank-2 input of shape (None, 128), while the discriminator expects a rank-4 input of shape (None, 28, 28, 1). The code, however, calls the generator with a rank-3 array, which raises the error. The suggested fix is to sample the noise without the trailing axis: img = generator.predict(np.random.randn(4, 128)).
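For reference, a minimal sketch checking the shapes end to end (assuming the generator above has been built):

    import numpy as np

    noise = np.random.randn(4, 128)   # rank-2 noise, matching Dense(input_dim=128)
    img = generator.predict(noise)
    print(img.shape)                  # expected: (4, 28, 28, 1)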

ValueError: Input 0 of layer "sequential_36" is incompatible with the layer: expected shape=(None, 256, 1), found shape=(None, 128, 1)

def build_generator():
    model = Sequential()

    # Take in random values and reshape it to 7x7x128
    # Beginnings of the generated images
    model.add(Dense(7*7*128, input_dim=128))
    model.add(LeakyReLU(0.2))
    model.add(Reshape((7,7,128)))

    # Upsampling block 1
    model.add(UpSampling2D())  # 7x7x128 --> 14x14x128
    model.add(Conv2D(128, 5, padding='same'))
    model.add(LeakyReLU(0.2))

    # This would add more layers and create more parameters:
    '''
    model.add(UpSampling2D())
    model.add(Conv2D(1, 5, padding='same'))
    model.add(LeakyReLU(0.2))
    '''

    # Upsampling block 2
    model.add(UpSampling2D())
    model.add(Conv2D(128, 5, padding='same'))
    model.add(LeakyReLU(0.2))

    # Convolutional block 1
    model.add(Conv2D(128, 4, padding='same'))
    model.add(LeakyReLU(0.2))

    # Convolutional block 2
    model.add(Conv2D(128, 4, padding='same'))
    model.add(LeakyReLU(0.2))

    # Conv layer to get to one channel
    model.add(Conv2D(1, 4, padding='same', activation='sigmoid'))

    return model

I used the weights from the provided .h5 file, and it gives me the following error:

[screenshot of the error]
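A hedged reading of the error, not from the original thread: the loaded model ("sequential_36") expects a latent input of shape (None, 256, 1), while the noise supplied has shape (None, 128, 1). If the provided weights were indeed trained with a 256-long latent vector, matching the noise shape should get past this error:

    import numpy as np

    # Assumption: the loaded weights expect a 256-long latent vector.
    img = generator.predict(np.random.randn(4, 256, 1))

Otherwise, the generator must be rebuilt with the same input_dim the weights were trained with before calling load_weights.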

cGAN with custom images from GASF: several issues with expected input

I found your videos on YouTube and decided to model my project after your work. I am a novice at computer science.

import tensorflow as tf

# Define the parameters
batch_size = 32
img_height = 128
img_width = 128

# Function to preprocess images (convert to grayscale and normalize)
def preprocess_images(image, label):
    # Convert RGB image to grayscale
    # image = tf.image.rgb_to_grayscale(image)
    # Normalize pixel values to [0, 1]
    image = tf.cast(image, tf.float32) / 255.0
    return image, label

# Load the training data
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    '/content/drive/MyDrive/gasf_images_cgan/',
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

# Load the validation data
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    '/content/drive/MyDrive/gasf_images_cgan/',
    validation_split=0.2,
    subset="validation",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

train_ds = train_ds.map(preprocess_images)
val_ds = val_ds.map(preprocess_images)

# Normalize pixel values to [0, 1]
normalization_layer = tf.keras.layers.experimental.preprocessing.Rescaling(1./255)

# Preprocess the dataset: convert to float32, normalize, and resize
train_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
val_ds = val_ds.map(lambda x, y: (normalization_layer(x), y))

# Additional preprocessing: convert to grayscale and normalize
train_ds = train_ds.map(preprocess_images)
val_ds = val_ds.map(preprocess_images)

# Use cache(), shuffle(), batch(), and prefetch() operations
# train_ds = train_ds.cache().shuffle(1000).batch(batch_size).prefetch(buffer_size=tf.data.AUTOTUNE)
# val_ds = val_ds.cache().prefetch(buffer_size=tf.data.AUTOTUNE)

# Verify the shape of the dataset
for images, labels in train_ds.take(1):
    print("Shape of image batch: ", images.shape)
    print("Shape of label batch: ", labels.shape)

Found 6758 files belonging to 12 classes.
Using 5407 files for training.
Found 6758 files belonging to 12 classes.
Using 1351 files for validation.
Shape of image batch: (32, 128, 128, 3)
Shape of label batch: (32,)
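An editorial aside, not in the original issue: the pipeline above normalizes three times (preprocess_images, then Rescaling(1./255), then preprocess_images again), which leaves pixel values near zero rather than in [0, 1]. A minimal single-pass sketch, keeping the same names:

    # Normalize exactly once: image_dataset_from_directory yields pixels in [0, 255].
    train_ds = train_ds.map(preprocess_images)
    val_ds = val_ds.map(preprocess_images)
    # ...and drop the Rescaling layer and the second map(preprocess_images).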

import os
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.callbacks import Callback
from tensorflow.keras.preprocessing.image import array_to_img

def build_generator():
    # Input noise
    noise = layers.Input(shape=(128,))
    # Conditioning label
    label = layers.Input(shape=(1,))

    # Embed the label and reshape to match noise dimensions
    label_embedding = layers.Embedding(input_dim=10, output_dim=128)(label)
    label_embedding = layers.Flatten()(label_embedding)

    # Concatenate noise and label embedding as generator input
    model_input = layers.Concatenate()([noise, label_embedding])

    x = layers.Dense(64 * 64 * 128, use_bias=False)(model_input)
    x = layers.BatchNormalization()(x)
    x = layers.LeakyReLU()(x)

    x = layers.Reshape((64, 64, 128))(x)  # Reshape to desired image dimensions

    x = layers.Conv2DTranspose(64, (5, 5), strides=(1, 1), padding='same', use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.LeakyReLU()(x)

    x = layers.Conv2DTranspose(3, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh')(x)

    model = models.Model([noise, label], x)
    return model

# Define the discriminator model
def build_discriminator():
    model = models.Sequential()
    model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same', input_shape=(128, 128, 3)))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))

    model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))

    model.add(layers.Flatten())
    model.add(layers.Dense(1))

    return model

# Define the RT_IOTcGAN model
class RT_IOTcGAN(tf.keras.Model):
    def __init__(self, generator, discriminator):
        super(RT_IOTcGAN, self).__init__()
        self.generator = generator
        self.discriminator = discriminator

    def compile(self, g_opt, d_opt, g_loss, d_loss):
        super(RT_IOTcGAN, self).compile()
        self.g_opt = g_opt
        self.d_opt = d_opt
        self.g_loss = g_loss
        self.d_loss = d_loss

    def train_step(self, real_images):
        # Unpack the batched real_images
        real_images, _ = real_images  # Assuming real_images is a tuple (images, labels)
        batch_size = tf.shape(real_images)[0]

        with tf.GradientTape() as d_tape:
            noise = tf.random.normal((batch_size, 128))
            fake_images = self.generator(noise, training=True)

            real_output = self.discriminator(real_images, training=True)
            fake_output = self.discriminator(fake_images, training=True)

            d_loss_real = self.d_loss(tf.ones_like(real_output), real_output)
            d_loss_fake = self.d_loss(tf.zeros_like(fake_output), fake_output)
            total_d_loss = d_loss_real + d_loss_fake

        d_grads = d_tape.gradient(total_d_loss, self.discriminator.trainable_weights)
        self.d_opt.apply_gradients(zip(d_grads, self.discriminator.trainable_weights))

        with tf.GradientTape() as g_tape:
            noise = tf.random.normal((batch_size, 128))
            fake_images = self.generator(noise, training=True)
            fake_output = self.discriminator(fake_images, training=True)

            g_loss = self.g_loss(tf.ones_like(fake_output), fake_output)

        g_grads = g_tape.gradient(g_loss, self.generator.trainable_weights)
        self.g_opt.apply_gradients(zip(g_grads, self.generator.trainable_weights))

        return {"d_loss": total_d_loss, "g_loss": g_loss}

# Assuming build_generator() and build_discriminator() functions are defined elsewhere

# Create an instance of the generator and discriminator
generator = build_generator()
discriminator = build_discriminator()

# Create an instance of RT_IOTcGAN
rtiotcgan = RT_IOTcGAN(generator, discriminator)

# Compile the model
g_opt = tf.keras.optimizers.Adam(learning_rate=0.0001)
d_opt = tf.keras.optimizers.Adam(learning_rate=0.00001)
g_loss = tf.keras.losses.BinaryCrossentropy()
d_loss = tf.keras.losses.BinaryCrossentropy()

rtiotcgan.compile(g_opt, d_opt, g_loss, d_loss)

# Define the ModelMonitor callback
class ModelMonitor(Callback):
    def __init__(self, num_img=3, latent_dim=128):
        self.num_img = num_img
        self.latent_dim = latent_dim

    def on_epoch_end(self, epoch, logs=None):
        random_latent_vectors = tf.random.normal((self.num_img, self.latent_dim))
        generated_images = self.model.generator(random_latent_vectors, training=False)
        generated_images *= 255
        generated_images = generated_images.numpy()
        for i in range(self.num_img):
            img = array_to_img(generated_images[i])
            img.save(os.path.join('images', f'generated_img_epoch_{epoch}_sample_{i}.png'))

# Create an instance of ModelMonitor callback
model_monitor = ModelMonitor(num_img=3, latent_dim=128)

# Train the model
hist = rtiotcgan.fit(train_ds, epochs=200, callbacks=[model_monitor])

The code never runs; I get several errors about expected inputs. I would appreciate all the help I can get. Please and thank you!
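An editorial observation, not from the original thread: build_generator() returns a two-input model, models.Model([noise, label], x), yet both train_step and ModelMonitor call the generator with noise alone, which is the most likely source of the "expected input" errors. A minimal sketch of a matching call inside train_step (the label sampling is an assumption; note also that the Embedding layer is built with input_dim=10 while the dataset printout reports 12 classes, a separate mismatch to fix):

    # Sample noise AND integer labels, then pass both inputs as a list,
    # matching the generator's Model([noise, label], x) signature.
    noise = tf.random.normal((batch_size, 128))
    random_labels = tf.random.uniform((batch_size, 1), minval=0, maxval=12, dtype=tf.int32)
    fake_images = self.generator([noise, random_labels], training=True)

The same two-input call is needed in ModelMonitor.on_epoch_end.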

Error loading model weights | FashionGAN-Tutorial.ipynb

I tried loading the given weights but am constantly getting the below error. I am using
keras 2.8.0
tensorflow-gpu 2.8.0
tensorflow 2.8.0

Below is the code I am using to build the generator model

def build_generator(): 
    model = Sequential()
    
    # Takes in random values and reshapes it to 7x7x128
    # Beginnings of a generated image
    model.add(Dense(7*7*128,input_dim=128))
    # model.add(Dense(7*7*128,input_shape=(128,)))
    model.add(LeakyReLU(0.2))
    model.add(Reshape((7,7,128)))
    
    # Upsampling block 1 
    model.add(UpSampling2D())
    model.add(Conv2D(128, 5, padding='same'))
    model.add(LeakyReLU(0.2))
    
    # Upsampling block 2 
    model.add(UpSampling2D())
    model.add(Conv2D(128, 5, padding='same'))
    model.add(LeakyReLU(0.2))
    
    # Convolutional block 1
    model.add(Conv2D(128, 4, padding='same'))
    model.add(LeakyReLU(0.2))
    
    # Convolutional block 2
    model.add(Conv2D(128, 4, padding='same'))
    model.add(LeakyReLU(0.2))
    
    # Conv layer to get to one channel
    model.add(Conv2D(1, 4, padding='same', activation='sigmoid'))
    model.summary()
    return model

[screenshot of the error]
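An editorial note, not from the thread: one common cause of this failure is trying to load a full saved model as if it were a weights-only checkpoint, or vice versa. A minimal sketch of both loading paths (the filename generatormodel.h5 is a placeholder, not the actual file from the tutorial):

    import tensorflow as tf

    # Path 1: weights-only checkpoint; the layer topology must match build_generator() exactly.
    generator = build_generator()
    generator.load_weights('generatormodel.h5')  # placeholder filename

    # Path 2: full saved model (architecture + weights); no rebuild needed.
    # generator = tf.keras.models.load_model('generatormodel.h5')

If the layer topology differs at all from the one the weights were saved with (e.g. input_dim vs. input_shape variants changing layer counts), load_weights will raise an error like the one shown.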
