Well, that’s not too bad, is it? Want to try out an example to see the neural network in action?
First, get the libraries and the dataset ready.
import tensorflow as tf
from tensorflow import keras

# import the MNIST dataset from keras datasets
(X_train, y_train), (X_test, y_test) = keras.datasets.mnist.load_data()

# we have 10 different labels to classify,
# so convert the ground truth y from shape (None, 1) to shape (None, 10)
y_train = tf.keras.utils.to_categorical(y_train, 10)
y_test = tf.keras.utils.to_categorical(y_test, 10)

# build the input pipeline using tf.data
BATCH_SIZE = 64
train_dataset = tf.data.Dataset.from_tensor_slices((X_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(BATCH_SIZE)
val_dataset = tf.data.Dataset.from_tensor_slices((X_test, y_test))
val_dataset = val_dataset.batch(BATCH_SIZE)
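Before building the model, it can help to sanity-check the pipeline. The snippet below is a small extra sketch (not part of the original walkthrough): it pulls one batch from train_dataset and prints its shapes, which should come out as (64, 28, 28) for the images and (64, 10) for the one-hot labels.
# pull a single batch from the pipeline to verify shapes (sanity check only)
x_batch, y_batch = next(iter(train_dataset))
print(x_batch.shape)  # (64, 28, 28) — a batch of raw 28x28 grayscale images
print(y_batch.shape)  # (64, 10) — the matching one-hot encoded labels
print(y_batch[0])     # a vector with a single 1 at the true digit's index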
Second, build a simple neural network model with just one hidden layer, as we discussed.
model = keras.Sequential([
    keras.layers.Reshape(target_shape=(28 * 28,), input_shape=(28, 28)),
    keras.layers.Dense(units=128, activation='relu'),
    keras.layers.Dense(units=10, activation='softmax')
])

# compile
# the last layer already applies softmax, so the loss receives probabilities
model.compile(optimizer='adam',
              loss=tf.losses.CategoricalCrossentropy(from_logits=False),
              metrics=['accuracy'])
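As a quick check of the architecture (a small addition, not part of the original code), model.summary() lists each layer and its parameter count: the hidden layer contributes 28*28*128 + 128 = 100,480 weights and the output layer 128*10 + 10 = 1,290, for about 101,770 trainable parameters in total.
# print the layer-by-layer structure and parameter counts
model.summary()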
Finally, let’s train it on the dataset we prepared.
history = model.fit(train_dataset,
                    epochs=10,
                    validation_data=val_dataset)
Here is the output we got; our model performs quite well on the validation dataset, reaching 95% accuracy!
Epoch 1/20
938/938 [==============================] - 3s 3ms/step - loss: 2.9051 - accuracy: 0.8427 - val_loss: 0.5956 - val_accuracy: 0.8842
Epoch 2/20
938/938 [==============================] - 3s 3ms/step - loss: 0.4199 - accuracy: 0.9037 - val_loss: 0.4256 - val_accuracy: 0.9183
Epoch 3/20
938/938 [==============================] - 3s 3ms/step - loss: 0.2895 - accuracy: 0.9273 - val_loss: 0.3570 - val_accuracy: 0.9284
Epoch 4/20
938/938 [==============================] - 3s 3ms/step - loss: 0.2358 - accuracy: 0.9393 - val_loss: 0.3097 - val_accuracy: 0.9368
Epoch 5/20
938/938 [==============================] - 3s 3ms/step - loss: 0.2033 - accuracy: 0.9470 - val_loss: 0.2820 - val_accuracy: 0.9448
Epoch 6/20
938/938 [==============================] - 3s 3ms/step - loss: 0.1930 - accuracy: 0.9493 - val_loss: 0.2577 - val_accuracy: 0.9449
Epoch 7/20
938/938 [==============================] - 3s 3ms/step - loss: 0.1735 - accuracy: 0.9549 - val_loss: 0.2351 - val_accuracy: 0.9463
Epoch 8/20
938/938 [==============================] - 3s 3ms/step - loss: 0.1689 - accuracy: 0.9558 - val_loss: 0.3071 - val_accuracy: 0.9348
Epoch 9/20
938/938 [==============================] - 3s 3ms/step - loss: 0.1552 - accuracy: 0.9592 - val_loss: 0.2607 - val_accuracy: 0.9455
Epoch 10/20
938/938 [==============================] - 3s 3ms/step - loss: 0.1476 - accuracy: 0.9615 - val_loss: 0.2687 - val_accuracy: 0.9500
We can see that after each epoch, the neural network reduces the loss a little further, and both the training accuracy and the validation accuracy climb slowly over the course of training.
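If you want to look at these trends more closely, the history object returned by fit() keeps the per-epoch metrics. The sketch below (assuming matplotlib is installed; it isn't used elsewhere in this post) plots the training and validation accuracy curves.
import matplotlib.pyplot as plt

# history.history maps each metric name to a list with one value per epoch
plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()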
Well, that’s it. Hope you have enjoyed reading it! If so, please give me a thumbs-up! Thanks!