VGG networks form the basis of one of the most popular image-recognition techniques, and they are worth learning because they open up many possibilities. To understand VGGNet, you need to be familiar with convolutional neural networks (CNNs).
In this article we will focus only on the implementation of VGGNet, so we will move through the background quickly.
About the VGG Network
VGGNet is a kind of convolutional neural network (CNN) that extracts features more successfully. In VGGNet, we stack multiple convolutional layers. VGGNets can be shallow or deep.
In a shallow VGGNet, usually only two sets of convolutional layers are added (four convolutional layers in total), as we will see shortly. In a deep VGGNet, many more convolutional layers are stacked. Two commonly used deep VGGNets are VGG16, which uses 16 layers, and VGG19, which uses 19 layers. We can add batch normalization layers or leave them out; in this tutorial I will use them.
You can learn more about the architecture at this link:
https://viso.ai/deep-learning/vgg-very-deep-convolutional-networks
Today we will work on a mini VGGNet. It is simpler and easier to run, but still powerful enough for many use cases.
An important characteristic of a mini VGGNet is that it uses 3×3 filters throughout; this is part of why it generalizes so well. Let's start building a mini VGGNet in Keras and TensorFlow.
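Before we do, a quick aside on why stacking 3×3 filters is attractive (my own illustration, not part of the original tutorial): two stacked 3×3 convolutions cover the same 5×5 receptive field as a single 5×5 convolution, but with fewer parameters. A minimal sketch:
from tensorflow.keras import layers, models

# Two stacked 3x3 convolutions: effective receptive field is 5x5.
stacked = models.Sequential([
    layers.Input(shape=(32, 32, 64)),
    layers.Conv2D(64, (3, 3), padding="same"),
    layers.Conv2D(64, (3, 3), padding="same"),
])
# One 5x5 convolution: same receptive field, more parameters.
single = models.Sequential([
    layers.Input(shape=(32, 32, 64)),
    layers.Conv2D(64, (5, 5), padding="same"),
])
print(stacked.count_params())  # 2 * (3*3*64*64 + 64) = 73,856
print(single.count_params())   # 5*5*64*64 + 64 = 102,464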
I used a Google Colaboratory notebook here with the GPU enabled; otherwise training would be very slow.
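If you want to confirm that TensorFlow can actually see the GPU (a quick sanity check of my own, not part of the original article), you can run:
import tensorflow as tf
# Prints a non-empty list if a GPU is available to TensorFlow.
print(tf.config.list_physical_devices('GPU'))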
Developing, Training, and Evaluating the Mini VGG Network
Time to get to work. We will demonstrate how to use the network through a couple of experiments.
These are the necessary imports:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Activation, BatchNormalization, Conv2D,
                                     Dense, Dropout, Flatten, MaxPooling2D)
from tensorflow.keras import backend as K
from tensorflow.keras.datasets import cifar10
from sklearn.preprocessing import LabelBinarizer
from sklearn.metrics import classification_report
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
That's a lot of imports!
We will use the CIFAR-10 dataset, a public dataset that ships with the TensorFlow/Keras library.
I tried two different networks, just as an experiment. The first one is fairly popular; I call it popular because I found this architecture on Kaggle and in several other tutorials.
class MiniVGGNet:
    @staticmethod
    def build(width, height, depth, classes):
        # initialize the model along with the input shape to be
        # "channels last" and the channels dimension itself
        model = Sequential()
        inputShape = (height, width, depth)
        chanDim = -1
        if K.image_data_format() == "channels_first":
            inputShape = (depth, height, width)
            chanDim = 1
        # first CONV => Activation => CONV => Activation => POOL layer set
        model.add(Conv2D(32, (3, 3), padding="same",
                         input_shape=inputShape))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(Conv2D(32, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))
        # second CONV => Activation => CONV => Activation => POOL layer set
        model.add(Conv2D(64, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(Conv2D(64, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))
        # Dense Layer
        model.add(Flatten())
        model.add(Dense(512))
        model.add(Activation("relu"))
        model.add(BatchNormalization())
        model.add(Dropout(0.5))
        # softmax classifier
        model.add(Dense(classes))
        model.add(Activation("softmax"))
        # return the constructed network architecture
        return model
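With the class defined, a quick sanity check (my own usage sketch, not in the original article) is to build it for CIFAR-10-sized inputs and print the summary:
# Build the network for 32x32 RGB images and 10 classes, then inspect it.
model = MiniVGGNet.build(width=32, height=32, depth=3, classes=10)
model.summary()  # prints the layer stack and parameter counts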
Let's load and prepare our CIFAR-10 dataset.
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train = x_train.astype("float") / 255.0
x_test = x_test.astype("float") / 255.0
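A quick shape check (my addition): CIFAR-10 ships as 50,000 training images and 10,000 test images of size 32×32×3, with integer labels of shape (n, 1).
print(x_train.shape, y_train.shape)  # (50000, 32, 32, 3) (50000, 1)
print(x_test.shape, y_test.shape)    # (10000, 32, 32, 3) (10000, 1)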
The CIFAR-10 dataset has 10 labels. These are the labels in the CIFAR-10 dataset:
labelNames = ["airplane", "automobile", "bird", "cat", "deer",
"dog", "frog", "horse", "ship", "truck"]
Binarize the labels using LabelBinarizer:
lb = LabelBinarizer()
y_train = lb.fit_transform(y_train)
y_test = lb.transform(y_test)
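After binarization, each integer label becomes a one-hot row vector (a small check of my own):
print(y_train.shape)  # (50000, 10)
print(y_train[0])     # one-hot vector, e.g. [0 0 0 0 0 0 1 0 0 0] for a 'frog' image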
Compile the model here. The evaluation metric is 'accuracy' and we will run 10 epochs.
optimizer = tf.keras.optimizers.legacy.SGD(learning_rate=0.01, decay=0.01/40, momentum=0.9,
nesterov=True)
model = MiniVGGNet.build(width = 32, height = 32, depth = 3, classes=10)
model.compile(loss='categorical_crossentropy', optimizer = optimizer,
metrics=['accuracy'])
h = model.fit(x_train, y_train, validation_data=(x_test, y_test),
batch_size = 64, epochs=10, verbose=1)
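A note on the decay argument (my explanation, based on how the legacy Keras SGD applies it): the learning rate shrinks with an inverse-time schedule at every update step.
# Legacy Keras SGD decays the learning rate per update step t:
#   lr_t = learning_rate / (1 + decay * t)
# With learning_rate=0.01 and decay=0.01/40 = 0.00025, the rate halves
# after 4,000 updates (roughly 5 epochs at 782 steps per epoch).
def decayed_lr(initial_lr, decay, step):
    return initial_lr / (1.0 + decay * step)

print(decayed_lr(0.01, 0.01 / 40, 4000))  # -> 0.005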
Here are the results:
Epoch 1/10
782/782 [==============================] - 424s 539ms/step - loss: 1.6196 - accuracy: 0.4592 - val_loss: 1.4083 - val_accuracy: 0.5159
Epoch 2/10
782/782 [==============================] - 430s 550ms/step - loss: 1.1437 - accuracy: 0.6039 - val_loss: 1.0213 - val_accuracy: 0.6505
Epoch 3/10
782/782 [==============================] - 430s 550ms/step - loss: 0.9634 - accuracy: 0.6618 - val_loss: 0.8495 - val_accuracy: 0.7013
Epoch 4/10
782/782 [==============================] - 427s 546ms/step - loss: 0.8532 - accuracy: 0.6998 - val_loss: 0.7881 - val_accuracy: 0.7215
Epoch 5/10
782/782 [==============================] - 425s 543ms/step - loss: 0.7773 - accuracy: 0.7280 - val_loss: 0.8064 - val_accuracy: 0.7228
Epoch 6/10
782/782 [==============================] - 421s 538ms/step - loss: 0.7240 - accuracy: 0.7451 - val_loss: 0.6757 - val_accuracy: 0.7619
Epoch 7/10
782/782 [==============================] - 420s 537ms/step - loss: 0.6843 - accuracy: 0.7579 - val_loss: 0.6564 - val_accuracy: 0.7715
Epoch 8/10
782/782 [==============================] - 420s 537ms/step - loss: 0.6405 - accuracy: 0.7743 - val_loss: 0.6833 - val_accuracy: 0.7706
Epoch 9/10
782/782 [==============================] - 422s 540ms/step - loss: 0.6114 - accuracy: 0.7828 - val_loss: 0.6188 - val_accuracy: 0.7848
Epoch 10/10
782/782 [==============================] - 421s 538ms/step - loss: 0.5799 - accuracy: 0.7946 - val_loss: 0.6166 - val_accuracy: 0.7898
After 10 epochs, the accuracy is 79.46% on the training data and 78.98% on the validation data.
With this in mind, I wanted to change a few things in this network and see the results. Let's redefine the network above. This time I used 64 filters throughout the network, 300 neurons in the dense layer, and a 40% dropout rate in the last dropout layer.
Here is the new mini VGG network:
class MiniVGGNet:
    @staticmethod
    def build(width, height, depth, classes):
        model = Sequential()
        inputShape = (height, width, depth)
        chanDim = -1
        if K.image_data_format() == "channels_first":
            inputShape = (depth, height, width)
            chanDim = 1
        # first Conv => Activation => Conv => Activation => Pool layer set
        model.add(Conv2D(64, (3, 3), padding="same",
                         input_shape=inputShape))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(Conv2D(64, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))
        # second Conv => Activation => Conv => Activation => Pool layer set
        model.add(Conv2D(64, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(Conv2D(64, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))
        # Dense Layer
        model.add(Flatten())
        model.add(Dense(300))
        model.add(Activation("relu"))
        model.add(BatchNormalization())
        model.add(Dropout(0.4))
        model.add(Dense(classes))
        model.add(Activation("softmax"))
        return model
We will run this model with the same optimization parameters, but this time for 20 epochs.
optimizer = tf.keras.optimizers.legacy.SGD(learning_rate=0.01, decay=0.01/40, momentum=0.9,
nesterov=True)
model = MiniVGGNet.build(width = 32, height = 32, depth = 3, classes=10)
model.compile(loss='categorical_crossentropy', optimizer = optimizer,
metrics=['accuracy'])
h = model.fit(x_train, y_train, validation_data=(x_test, y_test),
batch_size = 64, epochs=20, verbose=1)
Here are the results:
Epoch 1/20
782/782 [==============================] - 22s 18ms/step - loss: 1.5210 - accuracy: 0.4697 - val_loss: 1.1626 - val_accuracy: 0.5854
Epoch 2/20
782/782 [==============================] - 14s 18ms/step - loss: 1.0706 - accuracy: 0.6219 - val_loss: 0.9913 - val_accuracy: 0.6586
Epoch 3/20
782/782 [==============================] - 14s 18ms/step - loss: 0.8947 - accuracy: 0.6826 - val_loss: 0.8697 - val_accuracy: 0.6941
Epoch 4/20
782/782 [==============================] - 14s 18ms/step - loss: 0.7926 - accuracy: 0.7208 - val_loss: 0.7649 - val_accuracy: 0.7294
Epoch 5/20
782/782 [==============================] - 14s 18ms/step - loss: 0.7192 - accuracy: 0.7470 - val_loss: 0.6937 - val_accuracy: 0.7593
Epoch 6/20
782/782 [==============================] - 13s 17ms/step - loss: 0.6641 - accuracy: 0.7640 - val_loss: 0.6899 - val_accuracy: 0.7639
Epoch 7/20
782/782 [==============================] - 13s 17ms/step - loss: 0.6141 - accuracy: 0.7805 - val_loss: 0.6589 - val_accuracy: 0.7742
Epoch 8/20
782/782 [==============================] - 13s 17ms/step - loss: 0.5774 - accuracy: 0.7960 - val_loss: 0.6565 - val_accuracy: 0.7734
Epoch 9/20
782/782 [==============================] - 14s 17ms/step - loss: 0.5430 - accuracy: 0.8077 - val_loss: 0.6092 - val_accuracy: 0.7921
Epoch 10/20
782/782 [==============================] - 14s 18ms/step - loss: 0.5145 - accuracy: 0.8177 - val_loss: 0.5904 - val_accuracy: 0.7944
Epoch 11/20
782/782 [==============================] - 13s 17ms/step - loss: 0.4922 - accuracy: 0.8256 - val_loss: 0.6041 - val_accuracy: 0.7975
Epoch 12/20
782/782 [==============================] - 14s 18ms/step - loss: 0.4614 - accuracy: 0.8381 - val_loss: 0.5889 - val_accuracy: 0.7981
Epoch 13/20
782/782 [==============================] - 14s 18ms/step - loss: 0.4358 - accuracy: 0.8457 - val_loss: 0.5590 - val_accuracy: 0.8120
Epoch 14/20
782/782 [==============================] - 13s 17ms/step - loss: 0.4186 - accuracy: 0.8508 - val_loss: 0.5555 - val_accuracy: 0.8092
Epoch 15/20
782/782 [==============================] - 13s 17ms/step - loss: 0.4019 - accuracy: 0.8582 - val_loss: 0.5739 - val_accuracy: 0.8108
Epoch 16/20
782/782 [==============================] - 14s 17ms/step - loss: 0.3804 - accuracy: 0.8658 - val_loss: 0.5577 - val_accuracy: 0.8136
Epoch 17/20
782/782 [==============================] - 13s 17ms/step - loss: 0.3687 - accuracy: 0.8672 - val_loss: 0.5544 - val_accuracy: 0.8170
Epoch 18/20
782/782 [==============================] - 13s 17ms/step - loss: 0.3541 - accuracy: 0.8744 - val_loss: 0.5435 - val_accuracy: 0.8199
Epoch 19/20
782/782 [==============================] - 13s 17ms/step - loss: 0.3438 - accuracy: 0.8758 - val_loss: 0.5533 - val_accuracy: 0.8167
Epoch 20/20
782/782 [==============================] - 13s 17ms/step - loss: 0.3292 - accuracy: 0.8845 - val_loss: 0.5491 - val_accuracy: 0.8199
Notice that after 10 epochs the accuracy is already slightly higher than the previous network's, and after 20 epochs it is very good: 88.45% on the training data and 81.99% on the validation data.
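The classification_report we imported at the top was never used above; a natural follow-up (my own sketch) is to look at per-class precision and recall on the test set:
# Per-class evaluation on the test set, using the label names defined earlier.
predictions = model.predict(x_test, batch_size=64)
print(classification_report(y_test.argmax(axis=1),
                            predictions.argmax(axis=1),
                            target_names=labelNames))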
Let's plot the training and validation accuracy together with the training and validation loss in the same figure:
%matplotlib inline
plt.close('all')
plt.style.use("ggplot")
plt.figure(figsize=(8, 6))
plt.plot(np.arange(0, 20), h.history["loss"], label="train_loss")
plt.plot(np.arange(0, 20), h.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, 20), h.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, 20), h.history["val_accuracy"], label="val_acc")
plt.title("Training Loss and Accuracy")
plt.xlabel("No of Epochs")
plt.ylabel("Loss/Accuracy")
plt.legend()
plt.show()
The resulting plot shows the training loss decreasing smoothly, while the validation loss fluctuates a bit.
Conclusion
Feel free to experiment. Try different parameters depending on your project and see how well they work for you. We will dig into deeper networks in a future post.
Author: 磐怼怼
Source: 深度学习与计算机视觉
Link: https://mp.weixin.qq.com/s/zmA1FVOmR6cNq4Rtacw_8Q