TP - Machine Learning¶
1. The datasets¶
import numpy as np
import matplotlib.pyplot as plt
nb_sample = 127  # index of the first sample image to display
X = np.load("CIFAR_3/X_cifar.npy")
y = np.load("CIFAR_3/y_cifar.npy")
X_gray = np.load("CIFAR_3/X_cifar_grayscale.npy")
print(np.shape(X))
(18000, 32, 32, 3)
# Display 5 grayscale samples with their labels
for i in range(5):
    plt.subplot(1, 5, i + 1)
    plt.imshow(X_gray[nb_sample + i], cmap="gray")
    plt.title(f"Class {y[nb_sample + i]}")
plt.show()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_gray,y,test_size=0.20)
figure_height = 3
# Count the number of images per class
def count_labels(y):
    nb_labels = np.max(y) + 1
    nb_data = np.zeros(nb_labels)
    for i in range(nb_labels):
        nb_data[i] = np.sum(y == i)
    return nb_data
print(count_labels(y_train))
print(count_labels(y_test))
plt.figure(figsize=(7, figure_height))

# Histogram for the training set
plt.subplot(1, 2, 1)
plt.hist(y_train, bins=np.arange(np.max(y_train) + 2) - 0.5, edgecolor='black')  # the -0.5 offset centres each bar on its label
plt.xlabel('Labels')
plt.ylabel('Number of data')
plt.title('Training set')
plt.margins(y=0.10)  # leave some space between the bars and the top of the axes
# Annotate each bar with the number of images in the class
for i in range(3):
    plt.text(i, count_labels(y_train)[i], int(count_labels(y_train)[i]), ha='center', va='bottom')

# Histogram for the test set
plt.subplot(1, 2, 2)
plt.hist(y_test, bins=np.arange(np.max(y_test) + 2) - 0.5, edgecolor='black')  # same centring trick as above
plt.xlabel('Labels')
plt.ylabel('Number of data')
plt.title('Test set')
plt.margins(y=0.10)
# Annotate each bar with the number of images in the class
for i in range(3):
    plt.text(i, count_labels(y_test)[i], int(count_labels(y_test)[i]), ha='center', va='bottom')

# Add horizontal space between the two subplots and display them
plt.subplots_adjust(wspace=0.5)
plt.show()
[4780. 4826. 4794.]
[1220. 1174. 1206.]
- What is the shape of the data? What are the min and max values of the pixels? Is it better to normalize the data to work in [0, 1]? Display samples from the dataset.
Shape of the color dataset: (18000, 32, 32, 3) = (nb_images, height, width, channels (RGB)); the grayscale version X_gray has shape (18000, 32, 32). The pixels are stored as values between 0 and 255, so yes, it is better to normalize the data to [0, 1] (e.g. by dividing by 255) so that all features share the same scale.
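As a quick sanity check, here is a minimal sketch (reusing the arrays already loaded above, and assuming the images are stored as 8-bit values in [0, 255]) that inspects the pixel range and normalizes the data:

# Inspect the raw pixel range (expected to be [0, 255] for 8-bit images)
print("min:", X.min(), "max:", X.max())

# Normalize to [0, 1]; astype avoids integer-division surprises
X_norm = X.astype("float32") / 255.0
X_gray_norm = X_gray.astype("float32") / 255.0
print("normalized range:", X_norm.min(), "-", X_norm.max())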
- Use the sklearn method train_test_split to split the dataset into one train set and one test set. Why is this split important in Machine Learning?
This split is important because the model must be evaluated on data it has never seen during training: the held-out test set gives an unbiased estimate of the generalization performance and makes overfitting visible, while keeping enough data to both train and test the model.
- Are the train and test sets well balanced (distribution of labels)? Why is it important for supervised Machine Learning?
Yes, they are well balanced: each class has approximately the same number of images in both the training and the test set (see the histograms above). This matters because an imbalanced training set biases the classifier towards the majority class and makes plain accuracy a misleading metric. A stratified split (see the sketch below) guarantees this balance explicitly.
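If a random split ever turned out unbalanced, a hedged alternative is to pass stratify=y to train_test_split so that each class keeps the same proportion in both subsets (a fixed random_state also makes the split reproducible):

from sklearn.model_selection import train_test_split

# Stratified, reproducible split: class proportions are preserved in train and test
X_train, X_test, y_train, y_test = train_test_split(
    X_gray, y, test_size=0.20, stratify=y, random_state=0
)
print(count_labels(y_train), count_labels(y_test))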
2. Dimensionality reduction with the PCA¶
from sklearn.decomposition import PCA
k_list = [10, 50, 100, 200, 400]
for i, k in enumerate(k_list):
    pca = PCA(n_components=k)
    X_pca = pca.fit_transform(np.reshape(X_gray, (18000, 32*32)))
    X_reconstructed = pca.inverse_transform(X_pca)
    plt.subplot(1, 5, i + 1)
    plt.imshow(np.reshape(X_reconstructed[nb_sample], (32, 32)), cmap="gray")
    plt.title(f"n = {k}")
plt.show()
pca.explained_variance_ratio_
print("original shape: ", X_gray.shape)
print("transformed shape: ",X_pca.shape)
original shape: (18000, 32, 32)
transformed shape: (18000, 400)
How to find the number of components?¶
According to jakevdp - In Depth: Principal Component Analysis
The main quantity to analyse is the cumulative explained variance ratio. It measures how much of the variance of the data is retained by the first components. Looking at the plot below, keeping about 175 components still preserves 95% of the cumulative variance.
pca = PCA().fit(X_gray.reshape(18000, 32*32))
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.axhline(y=0.95, color='r', linestyle='-')
# Smallest number of components reaching 95% of the cumulative variance
x = np.argmax(np.cumsum(pca.explained_variance_ratio_) >= 0.95) + 1
plt.axvline(x=x, color='g', linestyle='-')
plt.plot(x, 0.95, 'go')
plt.text(x, 0.90, f"x={x}", color='g')
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance')
plt.grid()
plt.show()
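As a side note, scikit-learn can pick this number automatically: passing a float between 0 and 1 to n_components keeps just enough components to explain that fraction of the variance. A minimal sketch:

# Keep the smallest number of components explaining 95% of the variance
pca_95 = PCA(n_components=0.95)
X_pca_95 = pca_95.fit_transform(X_gray.reshape(18000, 32 * 32))
print("components kept:", pca_95.n_components_)  # should be close to the value found graphically
print("compressed shape:", X_pca_95.shape)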
3. Supervised Machine Learning¶
3.1 Logistic Regression & Gaussian Naïve Bayes Classifier¶
Objectives
The main objective is to perform a supervised classification task using two classical methods:
- (multiclass) Logistic Regression,
- Gaussian Naïve Bayes Classifier.
Apply these two methods on CIFAR-3-GRAY and on the compressed version of CIFAR-3-GRAY obtained with the PCA in the previous section.
Naïve Bayes: It is a probabilistic classifier based on Bayes' theorem. It assumes that the features are conditionally independent given the class label, which is why it is called "naïve."
Logistic Regression: despite its name, logistic regression is a linear model for classification. It models the probability that an instance belongs to a class with the logistic (sigmoid) function, and handles the multiclass case through the multinomial (softmax) formulation or a one-vs-rest scheme.
The major difference when predicting objects in images lies in the independence assumption: Naïve Bayes treats every pixel as independent given the class, which clashes with the strong spatial correlations between neighbouring pixels, whereas Logistic Regression learns one weight per pixel jointly and therefore does not rely on that assumption. Neither model captures non-linear pixel interactions, but the discriminative, jointly trained weights of Logistic Regression generally cope better with correlated features such as pixels.
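To make the "naïve" assumption concrete, here is a small hedged sketch (not part of the lab code, assuming scikit-learn ≥ 1.0 where the fitted variances are exposed as var_) that reproduces GaussianNB's class posterior by hand: each feature is scored with an independent one-dimensional Gaussian per class and the log-likelihoods are simply summed.

import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Tiny synthetic problem: 2 classes, 3 features, class 1 shifted by +1.5
X_toy = rng.normal(size=(200, 3)) + np.repeat([[0.0], [1.5]], 100, axis=0)
y_toy = np.repeat([0, 1], 100)

gnb = GaussianNB().fit(X_toy, y_toy)

def manual_log_posterior(x):
    # Sum of independent per-feature Gaussian log-densities + log prior (the naïve assumption)
    log_post = []
    for c in range(2):
        mean, var = gnb.theta_[c], gnb.var_[c]
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)
        log_post.append(log_lik + np.log(gnb.class_prior_[c]))
    return np.array(log_post)

x0 = X_toy[0]
print("manual prediction :", np.argmax(manual_log_posterior(x0)))
print("sklearn prediction:", gnb.predict(x0.reshape(1, -1))[0])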
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix
X_train, X_test, y_train, y_test = train_test_split(X_gray, y, test_size=0.20)

# Compressed version: keep the 175 components found in the previous section
pca = PCA(n_components=175)
X_pca = pca.fit_transform(np.reshape(X_gray, (18000, 32*32)))
X_train_pca, X_test_pca, y_train_pca, y_test_pca = train_test_split(X_pca, y, test_size=0.20)

# Flatten and normalize the raw grayscale images to [0, 1]
X_train = np.reshape(X_train, (14400, 32*32)) / 255
X_test = np.reshape(X_test, (3600, 32*32)) / 255
# Logistic Regression on the raw grayscale images, for 11 values of C
clf = [LogisticRegression(C=10**(-i)) for i in range(11)]
for i in range(11):
    clf[i].fit(X_train, y_train)

# Gaussian Naïve Bayes on the raw grayscale images, for 11 values of var_smoothing
clf2 = [GaussianNB(var_smoothing=10**(-i)) for i in range(11)]
for i in range(11):
    clf2[i].fit(X_train, y_train)

# Same two families of models on the PCA-compressed data
clf_pca = [LogisticRegression(C=10**(-i)) for i in range(11)]
for i in range(11):
    clf_pca[i].fit(X_train_pca, y_train_pca)

clf2_pca = [GaussianNB(var_smoothing=10**(-i)) for i in range(11)]
for i in range(11):
    clf2_pca[i].fit(X_train_pca, y_train_pca)
print(clf2_pca)
[GaussianNB(var_smoothing=1), GaussianNB(var_smoothing=0.1), GaussianNB(var_smoothing=0.01), GaussianNB(var_smoothing=0.001), GaussianNB(var_smoothing=0.0001), GaussianNB(var_smoothing=1e-05), GaussianNB(var_smoothing=1e-06), GaussianNB(var_smoothing=1e-07), GaussianNB(var_smoothing=1e-08), GaussianNB(), GaussianNB(var_smoothing=1e-10)]
for i in range(11):
    lr_predict = clf[i].predict(X_test)
    print(f"Logistic Regression's score with C = {10**(-i)}: {clf[i].score(X_test, y_test)}")

for i in range(11):
    nb_predict = clf2[i].predict(X_test)
    print(f"GaussianNB with var_smooth = {10**(-i)} : {clf2[i].score(X_test, y_test)}")

for i in range(11):
    lr_predict_pca = clf_pca[i].predict(X_test_pca)
    print(f"PCA Logistic Regression's score with C = {10**(-i)}: {clf_pca[i].score(X_test_pca, y_test_pca)}")

for i in range(11):
    clf2_pca[i].predict(X_test_pca)
    print(f"GaussianNB PCA with var_smooth = {10**(-i)} : {clf2_pca[i].score(X_test_pca, y_test_pca)}")
# Note: lr_predict and nb_predict come from the last loop iterations above,
# i.e. the most heavily regularized models (C = 1e-10 and var_smoothing = 1e-10)
lr_confusion_matrix = confusion_matrix(y_test, lr_predict)
nb_confusion_matrix = confusion_matrix(y_test, nb_predict)
print(lr_confusion_matrix)
print(nb_confusion_matrix)
Logistic Regression's score with C = 1: 0.595
Logistic Regression's score with C = 0.1: 0.6019444444444444
Logistic Regression's score with C = 0.01: 0.6127777777777778
Logistic Regression's score with C = 0.001: 0.61
Logistic Regression's score with C = 0.0001: 0.5813888888888888
Logistic Regression's score with C = 1e-05: 0.5297222222222222
Logistic Regression's score with C = 1e-06: 0.47694444444444445
Logistic Regression's score with C = 1e-07: 0.39861111111111114
Logistic Regression's score with C = 1e-08: 0.32472222222222225
Logistic Regression's score with C = 1e-09: 0.32472222222222225
Logistic Regression's score with C = 1e-10: 0.32472222222222225
GaussianNB with var_smooth = 1 : 0.4313888888888889
GaussianNB with var_smooth = 0.1 : 0.5486111111111112
GaussianNB with var_smooth = 0.01 : 0.5797222222222222
GaussianNB with var_smooth = 0.001 : 0.5825
GaussianNB with var_smooth = 0.0001 : 0.5830555555555555
GaussianNB with var_smooth = 1e-05 : 0.5830555555555555
GaussianNB with var_smooth = 1e-06 : 0.5830555555555555
GaussianNB with var_smooth = 1e-07 : 0.5830555555555555
GaussianNB with var_smooth = 1e-08 : 0.5830555555555555
GaussianNB with var_smooth = 1e-09 : 0.5830555555555555
GaussianNB with var_smooth = 1e-10 : 0.5830555555555555
PCA Logistic Regression's score with C = 1: 0.6055555555555555
PCA Logistic Regression's score with C = 0.1: 0.6055555555555555
PCA Logistic Regression's score with C = 0.01: 0.6058333333333333
PCA Logistic Regression's score with C = 0.001: 0.6055555555555555
PCA Logistic Regression's score with C = 0.0001: 0.6055555555555555
PCA Logistic Regression's score with C = 1e-05: 0.6055555555555555
PCA Logistic Regression's score with C = 1e-06: 0.6052777777777778
PCA Logistic Regression's score with C = 1e-07: 0.6077777777777778
PCA Logistic Regression's score with C = 1e-08: 0.6
PCA Logistic Regression's score with C = 1e-09: 0.5669444444444445
PCA Logistic Regression's score with C = 1e-10: 0.5102777777777778
GaussianNB PCA with var_smooth = 1 : 0.37444444444444447
GaussianNB PCA with var_smooth = 0.1 : 0.36083333333333334
GaussianNB PCA with var_smooth = 0.01 : 0.40305555555555556
GaussianNB PCA with var_smooth = 0.001 : 0.5677777777777778
GaussianNB PCA with var_smooth = 0.0001 : 0.6119444444444444
GaussianNB PCA with var_smooth = 1e-05 : 0.6147222222222222
GaussianNB PCA with var_smooth = 1e-06 : 0.6158333333333333
GaussianNB PCA with var_smooth = 1e-07 : 0.6158333333333333
GaussianNB PCA with var_smooth = 1e-08 : 0.6158333333333333
GaussianNB PCA with var_smooth = 1e-09 : 0.6158333333333333
GaussianNB PCA with var_smooth = 1e-10 : 0.6158333333333333
[[1169    0    0]
 [1179    0    0]
 [1252    0    0]]
[[748 300 121]
 [137 862 180]
 [271 492 489]]
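For a more readable comparison, scikit-learn (≥ 1.0) can plot confusion matrices directly from a fitted estimator. A hedged sketch using two of the better-scoring models found above (clf[2], i.e. C = 0.01 on raw pixels, and clf2_pca[6], i.e. var_smoothing = 1e-06 on the PCA features):

from sklearn.metrics import ConfusionMatrixDisplay

fig, axes = plt.subplots(1, 2, figsize=(10, 4))

# Best raw-pixel Logistic Regression found above
ConfusionMatrixDisplay.from_estimator(clf[2], X_test, y_test, ax=axes[0])
axes[0].set_title("Logistic Regression (C = 0.01)")

# One of the best GaussianNB models on the PCA features
ConfusionMatrixDisplay.from_estimator(clf2_pca[6], X_test_pca, y_test_pca, ax=axes[1])
axes[1].set_title("GaussianNB on PCA features")

plt.tight_layout()
plt.show()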
3.2 Deep Learning / Multilayer Perceptron (MLP)¶
What is the size of the input tensor? What is the size of the output layer?
How many epochs do you use? What does it mean? What is the batch_size? What does it mean?
Why do we need to define a validation set?
Pick the most important hyper-parameters you have to set to run the training process (e.g., optimizer…). Briefly explain why they are important (i.e., their influence).
Comment the training results.
Is there any overfitting? Why? If yes, what could be the causes? How to fix this issue? If you do not observe overfitting, how can you make your model overfit? Try and demonstrate the overfitting (see the sketch after the training curves below for a few common fixes).
According to this first performance, change the architecture of the MLP (change parameters, add/remove layers…) as well as the hyper-parameters; explain why, and what their influence is on the results.
# Split the color and grayscale images together so that the color samples stay
# aligned with the same labels (two independent splits would shuffle them differently)
X_train_colored, X_test_colored, X_train, X_test, y_train, y_test = train_test_split(
    X, X_gray, y, test_size=0.20
)
X_train = np.reshape(X_train, (14400, 32*32)) / 255
X_test = np.reshape(X_test, (3600, 32*32)) / 255
import tensorflow as tf
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(123, activation='relu', input_shape=(1024,)),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Dense(150, activation='relu'),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Dense(3, activation='softmax'),
])
print(model.summary())
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_4 (Dense) (None, 123) 126075
dense_5 (Dense) (None, 3) 372
=================================================================
Total params: 126447 (493.93 KB)
Trainable params: 126447 (493.93 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
None
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
modele_hist = model.fit(X_train, y_train, epochs=30, batch_size=32, verbose=1, validation_data=(X_test, y_test))
Epoch 1/30 - loss: 0.8872 - accuracy: 0.5892 - val_loss: 0.8209 - val_accuracy: 0.6322
Epoch 2/30 - loss: 0.7752 - accuracy: 0.6603 - val_loss: 0.7644 - val_accuracy: 0.6778
Epoch 3/30 - loss: 0.7462 - accuracy: 0.6812 - val_loss: 0.7570 - val_accuracy: 0.6733
Epoch 4/30 - loss: 0.7215 - accuracy: 0.6927 - val_loss: 0.7319 - val_accuracy: 0.6847
Epoch 5/30 - loss: 0.6983 - accuracy: 0.7074 - val_loss: 0.7140 - val_accuracy: 0.6939
Epoch 6/30 - loss: 0.6776 - accuracy: 0.7166 - val_loss: 0.6996 - val_accuracy: 0.7050
Epoch 7/30 - loss: 0.6632 - accuracy: 0.7269 - val_loss: 0.6904 - val_accuracy: 0.7128
Epoch 8/30 - loss: 0.6517 - accuracy: 0.7306 - val_loss: 0.6700 - val_accuracy: 0.7200
Epoch 9/30 - loss: 0.6337 - accuracy: 0.7378 - val_loss: 0.6635 - val_accuracy: 0.7258
Epoch 10/30 - loss: 0.6168 - accuracy: 0.7460 - val_loss: 0.6818 - val_accuracy: 0.7064
Epoch 11/30 - loss: 0.6084 - accuracy: 0.7485 - val_loss: 0.6525 - val_accuracy: 0.7206
Epoch 12/30 - loss: 0.5913 - accuracy: 0.7558 - val_loss: 0.6670 - val_accuracy: 0.7247
Epoch 13/30 - loss: 0.5851 - accuracy: 0.7604 - val_loss: 0.6440 - val_accuracy: 0.7286
Epoch 14/30 - loss: 0.5747 - accuracy: 0.7585 - val_loss: 0.6509 - val_accuracy: 0.7278
Epoch 15/30 - loss: 0.5634 - accuracy: 0.7674 - val_loss: 0.6472 - val_accuracy: 0.7336
Epoch 16/30 - loss: 0.5584 - accuracy: 0.7686 - val_loss: 0.6321 - val_accuracy: 0.7361
Epoch 17/30 - loss: 0.5435 - accuracy: 0.7771 - val_loss: 0.6297 - val_accuracy: 0.7414
Epoch 18/30 - loss: 0.5390 - accuracy: 0.7774 - val_loss: 0.6249 - val_accuracy: 0.7383
Epoch 19/30 - loss: 0.5295 - accuracy: 0.7829 - val_loss: 0.6409 - val_accuracy: 0.7325
Epoch 20/30 - loss: 0.5307 - accuracy: 0.7810 - val_loss: 0.6929 - val_accuracy: 0.7247
Epoch 21/30 - loss: 0.5198 - accuracy: 0.7878 - val_loss: 0.6549 - val_accuracy: 0.7381
Epoch 22/30 - loss: 0.5158 - accuracy: 0.7876 - val_loss: 0.6441 - val_accuracy: 0.7361
Epoch 23/30 - loss: 0.5011 - accuracy: 0.7973 - val_loss: 0.6675 - val_accuracy: 0.7242
Epoch 24/30 - loss: 0.4997 - accuracy: 0.7955 - val_loss: 0.6660 - val_accuracy: 0.7208
Epoch 25/30 - loss: 0.4956 - accuracy: 0.7978 - val_loss: 0.6907 - val_accuracy: 0.7203
Epoch 26/30 - loss: 0.4882 - accuracy: 0.7981 - val_loss: 0.6974 - val_accuracy: 0.7194
Epoch 27/30 - loss: 0.4758 - accuracy: 0.8048 - val_loss: 0.6360 - val_accuracy: 0.7464
Epoch 28/30 - loss: 0.4755 - accuracy: 0.8069 - val_loss: 0.6567 - val_accuracy: 0.7361
Epoch 29/30 - loss: 0.4677 - accuracy: 0.8078 - val_loss: 0.6497 - val_accuracy: 0.7369
Epoch 30/30 - loss: 0.4757 - accuracy: 0.8046 - val_loss: 0.6387 - val_accuracy: 0.7517
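Note that the cell above passes the test set as validation_data, so the test data is being watched during training. A hedged alternative that keeps the test set untouched is to carve a validation set out of the training data with validation_split (in practice the model should be rebuilt first, otherwise fit() continues from the already-trained weights):

# Hold out 10% of the training data as the validation set;
# (X_test, y_test) is then used only once, for the final evaluation
hist_val = model.fit(X_train, y_train, epochs=30, batch_size=32,
                     verbose=1, validation_split=0.1)
test_loss, test_acc = model.evaluate(X_test, y_test, verbose=0)
print(f"Final test accuracy: {test_acc:.4f}")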
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.plot(modele_hist.history['accuracy'], label='Training Accuracy')
plt.plot(modele_hist.history['val_accuracy'], label='Validation Accuracy')
plt.title('Model Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.grid()
# Plotting loss
plt.subplot(1, 2, 2)
plt.plot(modele_hist.history['loss'], label='Training Loss')
plt.plot(modele_hist.history['val_loss'], label='Validation Loss')
plt.title('Model Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.grid()
plt.tight_layout() # Adjust layout for better spacing
plt.show()
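The curves above show the training accuracy pulling away from the validation accuracy after roughly epoch 15-20, which is the usual signature of overfitting. Below is a hedged sketch of the common remedies (early stopping, stronger dropout, L2 weight decay); it is one possible variant, not the lab's imposed solution:

# An MLP variant with dropout and L2 weight decay, trained with early stopping
regularized_mlp = tf.keras.models.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(1024,),
                          kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(64, activation='relu',
                          kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation='softmax'),
])
regularized_mlp.compile(optimizer='adam',
                        loss='sparse_categorical_crossentropy',
                        metrics=['accuracy'])

# Stop when the validation loss has not improved for 5 epochs and restore the best weights
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5,
                                              restore_best_weights=True)
history_reg = regularized_mlp.fit(X_train, y_train, epochs=100, batch_size=32,
                                  validation_split=0.1, callbacks=[early_stop])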
3.3. Deep Learning / CONVOLUTIONAL NEURAL NETWORK (CNN)¶
Same as before, but now we use the color images and train a Convolutional Neural Network (CNN).
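For consistency with the grayscale pipeline, the color images could also be scaled to [0, 1] before training; this is a hedged suggestion, as the fit recorded below appears to have been run on the raw 0-255 values:

# Scale the color images to [0, 1], mirroring the grayscale preprocessing
X_train_colored = X_train_colored.astype("float32") / 255.0
X_test_colored = X_test_colored.astype("float32") / 255.0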
model_CNN = tf.keras.models.Sequential()
model_CNN.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model_CNN.add(tf.keras.layers.MaxPooling2D((2, 2)))
model_CNN.add(tf.keras.layers.Conv2D(64, (3, 3), activation='relu'))
model_CNN.add(tf.keras.layers.MaxPooling2D((2, 2)))
model_CNN.add(tf.keras.layers.Conv2D(64, (3, 3), activation='relu'))
model_CNN.add(tf.keras.layers.Flatten())
model_CNN.add(tf.keras.layers.Dense(64, activation='relu'))
model_CNN.add(tf.keras.layers.Dense(3))  # raw logits; softmax is applied inside the loss (from_logits=True)
model_CNN.summary()
Model: "sequential_6"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_3 (Conv2D) (None, 30, 30, 32) 896
max_pooling2d_2 (MaxPoolin (None, 15, 15, 32) 0
g2D)
conv2d_4 (Conv2D) (None, 13, 13, 64) 18496
max_pooling2d_3 (MaxPoolin (None, 6, 6, 64) 0
g2D)
conv2d_5 (Conv2D) (None, 4, 4, 64) 36928
flatten_1 (Flatten) (None, 1024) 0
dense_18 (Dense) (None, 64) 65600
dense_19 (Dense) (None, 3) 195
=================================================================
Total params: 122115 (477.01 KB)
Trainable params: 122115 (477.01 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
model_CNN.compile(optimizer='adam',
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
history = model_CNN.fit(X_train_colored, y_train, epochs=10,
                        validation_data=(X_test_colored, y_test))
Epoch 1/10 - loss: 0.5739 - accuracy: 0.7493 - val_loss: 0.4261 - val_accuracy: 0.8189
Epoch 2/10 - loss: 0.3693 - accuracy: 0.8489 - val_loss: 0.3541 - val_accuracy: 0.8556
Epoch 3/10 - loss: 0.2990 - accuracy: 0.8784 - val_loss: 0.3186 - val_accuracy: 0.8733
Epoch 4/10 - loss: 0.2658 - accuracy: 0.8949 - val_loss: 0.3169 - val_accuracy: 0.8761
Epoch 5/10 - loss: 0.2291 - accuracy: 0.9101 - val_loss: 0.3712 - val_accuracy: 0.8578
Epoch 6/10 - loss: 0.2012 - accuracy: 0.9218 - val_loss: 0.3093 - val_accuracy: 0.8844
Epoch 7/10 - loss: 0.1817 - accuracy: 0.9306 - val_loss: 0.2979 - val_accuracy: 0.8850
Epoch 8/10 - loss: 0.1602 - accuracy: 0.9377 - val_loss: 0.2852 - val_accuracy: 0.8956
Epoch 9/10 - loss: 0.1322 - accuracy: 0.9499 - val_loss: 0.3194 - val_accuracy: 0.8794
Epoch 10/10 - loss: 0.1232 - accuracy: 0.9527 - val_loss: 0.2997 - val_accuracy: 0.8947
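Since the last Dense layer outputs raw logits (the loss is built with from_logits=True), class probabilities at prediction time need an explicit softmax. A minimal sketch:

# Convert logits to probabilities before interpreting the predictions
logits = model_CNN.predict(X_test_colored[:5])
probs = tf.nn.softmax(logits, axis=-1).numpy()
print("predicted classes:", np.argmax(probs, axis=1))
print("true classes     :", y_test[:5])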
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.title('Model Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.grid()
# Plotting loss
plt.subplot(1, 2, 2)
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Model Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.grid()
plt.tight_layout() # Adjust layout for better spacing
plt.show()