Well, that's not so bad, is it? Want to try an example to see the neural network in action?
First, get the libraries and the dataset ready.
import tensorflow as tf
from tensorflow import keras

# load the MNIST dataset from the keras datasets module
(X_train, y_train), (X_test, y_test) = keras.datasets.mnist.load_data()

# we have 10 different labels to classify, so convert the
# ground-truth y from shape (None,) to one-hot shape (None, 10)
y_train = tf.keras.utils.to_categorical(y_train, 10)
y_test = tf.keras.utils.to_categorical(y_test, 10)
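# optional sanity check (not part of the original walkthrough):
# each label is now a length-10 one-hot vector, e.g. the first
# training label (the digit 5) prints as [0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
print(y_train[0])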
# build the input pipeline using tf.data
BATCH_SIZE = 64
train_dataset = tf.data.Dataset.from_tensor_slices((X_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(BATCH_SIZE)
val_dataset = tf.data.Dataset.from_tensor_slices((X_test, y_test))
val_dataset = val_dataset.batch(BATCH_SIZE)
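If you want to double-check the pipeline before moving on, a quick optional check (not from the original walkthrough) is to pull a single batch and inspect its shapes:

# peek at one batch to verify the pipeline
images, labels = next(iter(train_dataset))
print(images.shape)  # (64, 28, 28): a batch of raw 28x28 images
print(labels.shape)  # (64, 10): the matching one-hot labels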
Second, build a simple neural network model with only one hidden layer, as we discussed.
model = keras.Sequential([
    # flatten each 28x28 image into a 784-dimensional vector
    keras.layers.Reshape(target_shape=(28 * 28,), input_shape=(28, 28)),
    keras.layers.Dense(units=128, activation='relu'),
    # output layer: one probability per digit class
    keras.layers.Dense(units=10, activation='softmax')
])

# compile
model.compile(optimizer='adam',
              # the final layer applies softmax, so the loss receives
              # probabilities rather than logits
              loss=tf.losses.CategoricalCrossentropy(from_logits=False),
              metrics=['accuracy'])
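Before training, it is worth a quick look at the model's size using Keras's built-in model.summary(). For this architecture the hidden layer holds 784 × 128 + 128 = 100,480 parameters and the output layer 128 × 10 + 10 = 1,290, so roughly 102K trainable parameters in total:

model.summary()  # prints each layer's output shape and parameter count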
Finally, let's train it on the dataset we prepared.
history = model.fit(train_dataset,
                    epochs=10,
                    validation_data=val_dataset)
Here is the output we got; our model performs quite well on the validation dataset, reaching 95% accuracy!
Epoch 1/10
938/938 [==============================] - 3s 3ms/step - loss: 2.9051 - accuracy: 0.8427 - val_loss: 0.5956 - val_accuracy: 0.8842
Epoch 2/10
938/938 [==============================] - 3s 3ms/step - loss: 0.4199 - accuracy: 0.9037 - val_loss: 0.4256 - val_accuracy: 0.9183
Epoch 3/10
938/938 [==============================] - 3s 3ms/step - loss: 0.2895 - accuracy: 0.9273 - val_loss: 0.3570 - val_accuracy: 0.9284
Epoch 4/10
938/938 [==============================] - 3s 3ms/step - loss: 0.2358 - accuracy: 0.9393 - val_loss: 0.3097 - val_accuracy: 0.9368
Epoch 5/10
938/938 [==============================] - 3s 3ms/step - loss: 0.2033 - accuracy: 0.9470 - val_loss: 0.2820 - val_accuracy: 0.9448
Epoch 6/10
938/938 [==============================] - 3s 3ms/step - loss: 0.1930 - accuracy: 0.9493 - val_loss: 0.2577 - val_accuracy: 0.9449
Epoch 7/10
938/938 [==============================] - 3s 3ms/step - loss: 0.1735 - accuracy: 0.9549 - val_loss: 0.2351 - val_accuracy: 0.9463
Epoch 8/10
938/938 [==============================] - 3s 3ms/step - loss: 0.1689 - accuracy: 0.9558 - val_loss: 0.3071 - val_accuracy: 0.9348
Epoch 9/10
938/938 [==============================] - 3s 3ms/step - loss: 0.1552 - accuracy: 0.9592 - val_loss: 0.2607 - val_accuracy: 0.9455
Epoch 10/10
938/938 [==============================] - 3s 3ms/step - loss: 0.1476 - accuracy: 0.9615 - val_loss: 0.2687 - val_accuracy: 0.9500
We can see that after each epoch, the neural network reduces the loss value a bit, and both the training accuracy and validation accuracy slowly increase over the course of training.
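To see those trends at a glance, you can plot the per-epoch metrics that model.fit records in the returned history object (a minimal sketch, assuming matplotlib is installed; the dictionary keys are the standard ones Keras uses for the metrics above):

import matplotlib.pyplot as plt

# history.history maps each metric name to a list of per-epoch values
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='val accuracy')
plt.xlabel('epoch')
plt.legend()
plt.show()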
Well, that's it. I hope you enjoyed reading it! If so, please give it a thumbs-up! Thanks!