In this tutorial, you’ll learn how to use the TensorFlow layers module to build a convolutional neural network (CNN) model that recognizes the handwritten digits in the MNIST dataset.

The MNIST dataset comprises 60,000 training examples and 10,000 test examples of the handwritten digits 0–9, formatted as 28×28-pixel monochrome images.

The tf.layers module provides a high-level API that makes it easy to construct a neural network. It provides methods for creating dense (fully connected) and convolutional layers, adding activation functions, and applying dropout regularization.
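For a quick taste of the API, the snippet below chains a few of these calls (a minimal sketch assuming TensorFlow 1.x; the full model for this tutorial is built in Step 2):

import tensorflow as tf

# Placeholder for a batch of 28x28 monochrome MNIST images.
input_layer = tf.placeholder(tf.float32, [None, 28, 28, 1])

# A convolutional layer: 32 5x5 filters with ReLU activation.
conv1 = tf.layers.conv2d(inputs=input_layer, filters=32, kernel_size=[5, 5],
                         padding="same", activation=tf.nn.relu)

# A 2x2 max pooling layer that halves the spatial dimensions.
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)

# A dense (fully connected) layer followed by dropout regularization.
dense = tf.layers.dense(inputs=tf.layers.flatten(pool1), units=1024,
                        activation=tf.nn.relu)
dropout = tf.layers.dropout(inputs=dense, rate=0.4, training=True)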

Step 1 -

Create a folder in your home directory, e.g. MNIST, and download the MNIST dataset files listed below into this directory (a small download script is sketched after the list):

train-images-idx3-ubyte.gz:  training set images (9912422 bytes)
train-labels-idx1-ubyte.gz:  training set labels (28881 bytes)
t10k-images-idx3-ubyte.gz:   test set images (1648877 bytes)
t10k-labels-idx1-ubyte.gz:   test set labels (4542 bytes)
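The tutorial does not name the download location, so the sketch below is only one way to fetch the files; it assumes they are hosted at the canonical MNIST page (http://yann.lecun.com/exdb/mnist/) and that the ~/MNIST folder already exists:

import os
import urllib.request

# Assumed hosting location for the four archive files.
BASE_URL = "http://yann.lecun.com/exdb/mnist/"
FILES = [
    "train-images-idx3-ubyte.gz",
    "train-labels-idx1-ubyte.gz",
    "t10k-images-idx3-ubyte.gz",
    "t10k-labels-idx1-ubyte.gz",
]

# Download each archive into the MNIST folder, skipping files already present.
target_dir = os.path.expanduser("~/MNIST")
for name in FILES:
    path = os.path.join(target_dir, name)
    if not os.path.exists(path):
        urllib.request.urlretrieve(BASE_URL + name, path)
        print("Downloaded", name)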

Step 2 -

Set up the skeleton of the TensorFlow program by creating a file called cnn_mnist.py under the MNIST directory, e.g. /home/<username>/MNIST.
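A sketch of what cnn_mnist.py can look like is shown below. It follows the structure of the official tf.layers tutorial (two conv/pool blocks, a dense layer with dropout, and an Estimator trained for 20,000 steps using the /tmp/mnist_convnet_model directory that appears in the Step 4 logs) and assumes TensorFlow 1.x; reading the archives from /MNIST with input_data.read_data_sets is an assumption that matches the mount point used in Step 3:

# cnn_mnist.py - MNIST convnet skeleton (a sketch, assuming TensorFlow 1.x).
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

tf.logging.set_verbosity(tf.logging.INFO)


def cnn_model_fn(features, labels, mode):
    """Model function: conv/pool/dense layers built with tf.layers."""
    input_layer = tf.reshape(features["x"], [-1, 28, 28, 1])
    conv1 = tf.layers.conv2d(input_layer, 32, [5, 5], padding="same",
                             activation=tf.nn.relu)
    pool1 = tf.layers.max_pooling2d(conv1, [2, 2], 2)
    conv2 = tf.layers.conv2d(pool1, 64, [5, 5], padding="same",
                             activation=tf.nn.relu)
    pool2 = tf.layers.max_pooling2d(conv2, [2, 2], 2)
    flat = tf.reshape(pool2, [-1, 7 * 7 * 64])
    dense = tf.layers.dense(flat, 1024, activation=tf.nn.relu)
    dropout = tf.layers.dropout(dense, rate=0.4,
                                training=(mode == tf.estimator.ModeKeys.TRAIN))
    logits = tf.layers.dense(dropout, 10)

    predictions = {"classes": tf.argmax(logits, axis=1),
                   "probabilities": tf.nn.softmax(logits, name="softmax_tensor")}
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)

    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    if mode == tf.estimator.ModeKeys.TRAIN:
        optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
        train_op = optimizer.minimize(loss,
                                      global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)

    eval_metric_ops = {"accuracy": tf.metrics.accuracy(labels,
                                                       predictions["classes"])}
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss,
                                      eval_metric_ops=eval_metric_ops)


def main(unused_argv):
    # Read the archives downloaded in Step 1; /MNIST assumes the folder is
    # mounted at that path inside the container (adjust as needed).
    mnist = input_data.read_data_sets("/MNIST")
    train_data = mnist.train.images
    train_labels = np.asarray(mnist.train.labels, dtype=np.int32)
    eval_data = mnist.test.images
    eval_labels = np.asarray(mnist.test.labels, dtype=np.int32)

    classifier = tf.estimator.Estimator(
        model_fn=cnn_model_fn, model_dir="/tmp/mnist_convnet_model")

    # Train for 20,000 steps, then evaluate on the 10,000 test images.
    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x": train_data}, y=train_labels,
        batch_size=100, num_epochs=None, shuffle=True)
    classifier.train(input_fn=train_input_fn, steps=20000)

    eval_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x": eval_data}, y=eval_labels, num_epochs=1, shuffle=False)
    print(classifier.evaluate(input_fn=eval_input_fn))


if __name__ == "__main__":
    tf.app.run()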

Step 3 -

Mount the MNIST folder inside the Docker container using the docker run command.
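For example (a sketch only: the tensorflow/tensorflow image name and the /MNIST mount point inside the container are assumptions, so adjust them to your setup):

# Start a container with the host folder /home/<username>/MNIST mounted at /MNIST.
docker run -it -v /home/<username>/MNIST:/MNIST tensorflow/tensorflow bash

# Then, from the shell inside the container (Step 4):
python /MNIST/cnn_mnist.py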

Step 4 -

Execute cnn_mnist.py; you should see training and evaluation output similar to the following:

INFO:tensorflow:loss = 2.36026, step = 1
INFO:tensorflow:probabilities = [[ 0.07722801  0.08618255  0.09256398, ...]]
...
INFO:tensorflow:loss = 2.13119, step = 101
INFO:tensorflow:global_step/sec: 5.44132
...
INFO:tensorflow:Loss for final step: 0.553216.

INFO:tensorflow:Restored model from /tmp/mnist_convnet_model
INFO:tensorflow:Eval steps [0,inf) for training step 20000.
INFO:tensorflow:Input iterator is exhausted.
INFO:tensorflow:Saving evaluation summary for step 20000: accuracy = 0.9733, loss = 0.0902271
{'loss': 0.090227105, 'global_step': 20000, 'accuracy': 0.97329998}


In the run above, the model achieved an accuracy of about 97.3% on the test dataset.