This tutorial covers deep learning and its most important algorithms in Keras: where they are applied and how to implement them. Deep learning is a subset of machine learning that uses algorithms inspired by the structure of the human brain. The field has seen several major developments over the last decade, and the Keras library is one of them: it lets you build neural network models in just a few lines of code. The research boom around deep learning produced many algorithms, and Keras makes them easy to implement. Before we move on, make sure Keras is installed (it ships with TensorFlow: pip install tensorflow).
Popular Deep Learning Algorithms with Keras
Here are the most popular deep learning algorithms:
- Autoencoders
- Convolutional Neural Networks
- Recurrent Neural Networks
- Long Short-Term Memory (LSTM) Networks
- Deep Boltzmann Machine
- Deep Belief Networks
In this piece, let’s take a look at deep learning autoencoders.
Autoencoders
These neural networks can compress incoming data and then reconstruct it. Autoencoders are among the oldest deep learning architectures. They encode the input down to a bottleneck layer and then decode it to reproduce the original data, so a compressed representation of the input forms at the bottleneck. The main applications of autoencoders are anomaly detection and removing noise from images.
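To make the idea concrete, here is a minimal sketch of an autoencoder built from fully connected layers; the 784-value input (a flattened 28×28 image) and the 32-value bottleneck are illustrative choices, not part of the tutorial’s model.
from keras.layers import Input, Dense
from keras.models import Model

# Compress a flattened 28x28 image (784 values) to a 32-value bottleneck,
# then reconstruct the original 784 values from it.
input_vec = Input(shape=(784,))
encoded_vec = Dense(32, activation='relu')(input_vec)        # encoder: compression
decoded_vec = Dense(784, activation='sigmoid')(encoded_vec)  # decoder: reconstruction
simple_autoencoder = Model(input_vec, decoded_vec)
simple_autoencoder.compile(optimizer='adam', loss='binary_crossentropy')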
Types of autoencoders
There are 7 types of deep learning autoencoders:
- Denoising autoencoders
- Deep autoencoders
- Sparse autoencoders
- Contractive autoencoders
- Convolutional autoencoders
- Variational autoencoders
- Undercomplete autoencoders
As an example, we will build a denoising autoencoder.
Implementing a denoising autoencoder in Keras
To implement it in Keras, we will work with the MNIST dataset of handwritten digits. First, we will add noise to the images. Then we will build an autoencoder that removes the noise and reconstructs the original images.
- Import the required modules
import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import mnist
from keras.layers import Input,Dense,Conv2D,MaxPooling2D,UpSampling2D
from keras.models import Model
from keras import backend as K
- Load the MNIST images from the dataset
from keras.datasets import mnist
(x_train,y_train),(x_test,y_test)=mnist.load_data()
- Scale the pixel values to the range 0 to 1
x_train=x_train.astype('float32')/255
x_test=x_test.astype('float32')/255
x_train=np.reshape(x_train,(len(x_train),28,28,1))
x_test=np.reshape(x_test,(len(x_test),28,28,1))
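As a quick sanity check, the arrays should now have a trailing channel dimension:
print(x_train.shape)  # (60000, 28, 28, 1)
print(x_test.shape)   # (10000, 28, 28, 1)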
- Add Gaussian noise to the images
noise_factor=0.5  # noise level; 0.5 is a typical choice
x_train_noisy=x_train + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_train.shape)
x_test_noisy=x_test + noise_factor * np.random.normal(loc=0.0, scale=1.0,size=x_test.shape)
x_train_noisy= np.clip(x_train_noisy,0.,1.)
x_test_noisy= np.clip(x_test_noisy,0.,1.)
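Thanks to the clipping, the noisy images stay in the same 0-to-1 range as the clean ones, which you can verify:
print(x_train_noisy.min(), x_train_noisy.max())  # should print 0.0 1.0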
- Visualize the noisy images
n=5
plt.figure(figsize=(20,2))
for i in range(n):
    ax=plt.subplot(1,n,i+1)
    plt.imshow(x_test_noisy[i].reshape(28,28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
- Define the input layer and build the encoder
input_img=Input(shape=(28,28,1))
x=Conv2D(128,(7,7),activation='relu',padding='same')(input_img)
x=MaxPooling2D((2,2),padding='same')(x)
x = Conv2D(32, (2, 2),activation='relu',padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
encoded = Conv2D(1, (7, 7), activation='relu', padding='same')(x)
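To see how the encoder compresses the input, it helps to trace the tensor shapes; these follow from the 'same' padding and the 2×2 pooling, and you can confirm them with encoder.summary() once the models are built below.
# Shape flow through the encoder:
#   input_img                   (28, 28, 1)
#   Conv2D(128, (7,7), same)    (28, 28, 128)
#   MaxPooling2D((2,2))         (14, 14, 128)
#   Conv2D(32, (2,2), same)     (14, 14, 32)
#   MaxPooling2D((2,2))         (7, 7, 32)
#   Conv2D(1, (7,7), same)      (7, 7, 1)   <- encoded (the bottleneck)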
The bottleneck (“encoded”) holds the compressed form of the images.
- Define the decoder
input_encoded = Input(shape=(7, 7, 1))  # shape of the encoder's output
x = Conv2D(32, (7, 7), activation='relu', padding='same')(input_encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(128, (2, 2), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (7, 7), activation='sigmoid', padding='same')(x)
- Train the autoencoder
encoder = Model(input_img, encoded, name="encoder")
decoder = Model(input_encoded, decoded, name="decoder")
autoencoder = Model(input_img, decoder(encoder(input_img)), name="autoencoder")
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.summary()
# Train on the noisy images with the clean images as targets,
# so the network learns to remove the noise.
autoencoder.fit(x_train_noisy, x_train,
                epochs=20,
                batch_size=256,
                shuffle=True,
                validation_data=(x_test_noisy, x_test))
In this example the model was trained for 20 epochs. A better result can be achieved by increasing this value to 100.
- Get predictions from the noisy data
x_test_result = autoencoder.predict(x_test_noisy, batch_size=128)
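Anomaly detection was mentioned earlier as one of the main applications of autoencoders. As a sketch of that idea, you can score each image by its reconstruction error; the three-sigma threshold below is purely illustrative:
# Per-image mean squared reconstruction error; unusual inputs tend to
# reconstruct poorly and therefore score higher.
errors = np.mean((x_test_result - x_test) ** 2, axis=(1, 2, 3))
threshold = errors.mean() + 3 * errors.std()  # illustrative threshold
anomalies = np.where(errors > threshold)[0]
print(len(anomalies), 'images flagged as potential anomalies')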
- Visualize the reconstructed images
n=5
plt.figure(figsize=(20,2))
for i in range(n):
    ax=plt.subplot(1,n,i+1)
    plt.imshow(x_test_result[i].reshape(28,28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
As you can see, the autoencoder is able to reconstruct the images and get rid of the noise. You can get better results by increasing the number of epochs.
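If you want the longer training run suggested above without guessing the right number of epochs, one option is Keras’s EarlyStopping callback. A sketch, assuming the same model and data as above:
from keras.callbacks import EarlyStopping

# Stop once the validation loss has not improved for 5 epochs in a row,
# and keep the best weights seen during training.
early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
autoencoder.fit(x_train_noisy, x_train,
                epochs=100,
                batch_size=256,
                shuffle=True,
                validation_data=(x_test_noisy, x_test),
                callbacks=[early_stop])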
This article walked through an example deep learning implementation in Keras. Now you know what autoencoders are, why they are needed, and how they work. In the example, we built a neural network that removes noise from data.