To develop a convolutional autoencoder for image denoising.

A denoising autoencoder is trained to map noisy inputs back to their clean originals: random Gaussian noise is added to the input images, and the network learns to reconstruct an output that matches the original, noise-free input with minimal loss. The dataset used is the MNIST handwritten-digit dataset.
![image](https://private-user-images.githubusercontent.com/94836154/329828217-f19679e1-55e6-4496-a5f4-8440e82f077c.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjExOTc0OTQsIm5iZiI6MTcyMTE5NzE5NCwicGF0aCI6Ii85NDgzNjE1NC8zMjk4MjgyMTctZjE5Njc5ZTEtNTVlNi00NDk2LWE1ZjQtODQ0MGU4MmYwNzdjLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA3MTclMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNzE3VDA2MTk1NFomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWMwMzNlZjUwZDYzYTZlOTQ0ZTI5Y2Y5MWIxMTFkN2RjOWY1NWJmYTIzNjNhYTI0NjY2MzYzNDRlZjI1ODA5MjkmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.AUQNlHJun0bJ4sYVb62rNZ4ikTGyWQ3O21TiOWeUGw4)
Step 1: Import the necessary libraries and the dataset.
Step 2: Load the dataset and scale the pixel values to [0, 1] for easier computation.
Step 3: Add random noise to the images in both the train and test sets.
Step 4: Build the neural network model using convolutional, max-pooling, and up-sampling layers.
Step 5: Make sure the input shape and output shape of the model are identical.
Step 6: Pass the test data through the model to validate it manually.
Step 7: Plot the predictions for visualization.
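Step 5's shape requirement can be checked with simple arithmetic before building the model: with `'same'` padding, each 2×2 max-pooling halves the spatial size (rounding up) and each 2×2 up-sampling doubles it, so 28 → 14 → 7 → 14 → 28. A minimal sketch of that bookkeeping (plain Python, no Keras required):

```python
# Spatial size through the pipeline: two 2x2 poolings, then two 2x2 upsamplings.
size = 28
for _ in range(2):        # encoder: MaxPooling2D((2, 2), padding='same')
    size = -(-size // 2)  # ceiling division, matching 'same' padding
for _ in range(2):        # decoder: UpSampling2D((2, 2))
    size *= 2
print(size)  # 28 -> matches the 28x28x1 input, as Step 5 requires
```

Note that this works out evenly only because 28 is divisible by 4; an input such as 25×25 would come back as 28×28 and fail the shape check.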
Name: Marella Dharanesh
Reg. NO: 212222240062
```python
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.datasets import mnist
import numpy as np
import matplotlib.pyplot as plt
```
```python
# Load MNIST (labels are not needed for an autoencoder)
(x_train, _), (x_test, _) = mnist.load_data()
x_train.shape

# Scale to [0, 1] and add a channel dimension
x_train_scaled = x_train.astype('float32') / 255.
x_test_scaled = x_test.astype('float32') / 255.
x_train_scaled = np.reshape(x_train_scaled, (len(x_train_scaled), 28, 28, 1))
x_test_scaled = np.reshape(x_test_scaled, (len(x_test_scaled), 28, 28, 1))

# Corrupt the images with Gaussian noise, then clip back into [0, 1]
noise_factor = 0.5
x_train_noisy = x_train_scaled + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_train_scaled.shape)
x_test_noisy = x_test_scaled + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_test_scaled.shape)
x_train_noisy = np.clip(x_train_noisy, 0., 1.)
x_test_noisy = np.clip(x_test_noisy, 0., 1.)
```
```python
# Preview a few noisy test images
n = 7
plt.figure(figsize=(20, 3))
for i in range(1, n + 1):
    ax = plt.subplot(1, n, i)
    plt.imshow(x_test_noisy[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
```
```python
input_img = keras.Input(shape=(28, 28, 1))

# Encoder: 28x28 -> 14x14 -> 7x7
x = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
x = layers.MaxPooling2D((2, 2), padding='same')(x)
x = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(x)
encoded = layers.MaxPooling2D((2, 2), padding='same')(x)

# Decoder: 7x7 -> 14x14 -> 28x28
x = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(encoded)
x = layers.UpSampling2D((2, 2))(x)
x = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = layers.UpSampling2D((2, 2))(x)
decoded = layers.Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)

autoencoder = keras.Model(input_img, decoded)
autoencoder.summary()

# Sigmoid output in [0, 1] pairs with binary cross-entropy on scaled pixels.
# Noisy images are the inputs; the clean images are the targets.
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(x_train_noisy, x_train_scaled,
                epochs=5,
                batch_size=128,
                shuffle=True,
                validation_data=(x_test_noisy, x_test_scaled))
```
```python
import pandas as pd

# Plot the training and validation loss curves
metrics = pd.DataFrame(autoencoder.history.history)
metrics.head()
metrics[['loss', 'val_loss']].plot()
```
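Beyond the loss curves and visual inspection, the denoising quality can be quantified per image. One common metric (not part of the original code) is the peak signal-to-noise ratio; the sketch below defines a hypothetical `psnr` helper in plain NumPy, which one could call as `psnr(x_test_scaled[i], decoded_imgs[i])` after prediction:

```python
import numpy as np

def psnr(clean, denoised, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = np.mean((clean - denoised) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# Synthetic check: a uniform error of 0.1 gives an MSE of 0.01, i.e. 20 dB
clean = np.zeros((28, 28))
approx = clean + 0.1
print(round(psnr(clean, approx), 1))  # 20.0
```

Higher is better; a successful denoiser should score noticeably higher against the clean images than the noisy inputs do.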
```python
decoded_imgs = autoencoder.predict(x_test_noisy)

n = 7
plt.figure(figsize=(20, 3))
for i in range(1, n + 1):
    # Display original
    ax = plt.subplot(3, n, i)
    plt.imshow(x_test_scaled[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    # Display noisy
    ax = plt.subplot(3, n, i + n)
    plt.imshow(x_test_noisy[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    # Display reconstruction
    ax = plt.subplot(3, n, i + 2 * n)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
```
![image](https://private-user-images.githubusercontent.com/94836154/329827524-9375c45f-67ef-4fd1-a701-7c526cfec2b5.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjExOTc0OTQsIm5iZiI6MTcyMTE5NzE5NCwicGF0aCI6Ii85NDgzNjE1NC8zMjk4Mjc1MjQtOTM3NWM0NWYtNjdlZi00ZmQxLWE3MDEtN2M1MjZjZmVjMmI1LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA3MTclMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNzE3VDA2MTk1NFomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTBhM2JiOGQ2NWRmZmZmNjY5MmNlYmNjMjkwODJiNWVlODI5MGM1MzE5MDhlMmZlMGY3ZDliYzI4OWY2Mjk3MjYmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.QYRFzg4JoJnZ3NH98B2eTvYbF9Ou4P9H5ppPT0WWVP8)
![image](https://private-user-images.githubusercontent.com/94836154/329827551-9774e3f1-5bd1-4dad-a927-0f95ec1f0a92.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjExOTc0OTQsIm5iZiI6MTcyMTE5NzE5NCwicGF0aCI6Ii85NDgzNjE1NC8zMjk4Mjc1NTEtOTc3NGUzZjEtNWJkMS00ZGFkLWE5MjctMGY5NWVjMWYwYTkyLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA3MTclMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNzE3VDA2MTk1NFomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTgxZmViNjk5ZDE0YWU4MDllMjVkOTExZDU1ZGYxYjQ2OGYyYWRiYWQ4YmM2ZTkwYTNlZTZjODJkNTcwNDQ2OWImWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.hEpfcb-yAiudFICsgnmZQ8E4ZDCzR8arcJ5KLQECx7k)
Thus we have successfully developed a convolutional autoencoder for image denoising.