1. Flower Image Classification – TensorFlow

TensorFlow is a symbolic math system based on dataflow programming. It is widely used to implement all kinds of machine learning algorithms, and its predecessor is Google's neural network library DistBelief.

This chapter walks through a simple flower image classification task in TensorFlow: a model is built with tf.keras.Sequential, then converted to an RKNN model and deployed on a LubanCat RK-series board.

Tip

Test environment: the LubanCat board runs Debian 10, the PC runs Ubuntu 20.04, with TensorFlow 2.6.2 and rknn-Toolkit2 1.4.0.

1.1. Installing the TensorFlow Environment

Install TensorFlow on a PC running Ubuntu or Windows. The example below uses Ubuntu 20.04:

# Install Python and update pip; Python 3.6-3.9 and pip 19.0 or later are required
sudo apt update
sudo apt install python3-dev python3-pip python3-venv
sudo python3 -m pip install pip --upgrade

# It is common to work inside a virtual environment
python3 -m venv .tensorflow_venv
source .tensorflow_venv/bin/activate

# Install the latest stable tensorflow (the GPU and CPU packages are merged); a specific version can also be pinned
pip3 install tensorflow

For detailed installation steps and requirements, refer to the TensorFlow installation guide.

Tip

On Windows (64-bit), you need to install Python, VC++, Anaconda, and so on. For NVIDIA GPU support you also need the GPU driver, CUDA, and cuDNN; see here for the version compatibility between them, and here for the mapping between driver and CUDA versions. There are many installation tutorials online; search for a recent one that matches your hardware.
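Whichever platform you use, a quick way to confirm that the installation works (and whether a GPU is visible) is the following minimal check:

import tensorflow as tf

print(tf.__version__)                          # 2.6.2 in this test environment
print(tf.config.list_physical_devices('GPU'))  # empty list on CPU-only setups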

1.2. Image Classification

The following simple example classifies flower images with a tf.keras.Sequential model. For details, see here.

1.2.1. Preparing the Dataset

Download the dataset, which contains about 3,700 images, and use tf.keras.utils.image_dataset_from_directory to split it: 80% for the training set and the remaining 20% for the validation set.

tensorflow_classification.py (see the accompanying example code)
# Imports needed by this script
import pathlib
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential

# Download the dataset
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, untar=True)
data_dir = pathlib.Path(data_dir)

# Set parameters
batch_size = 32
img_height = 180
img_width = 180

# Split the dataset into a training set and a validation set with tf.keras.utils
train_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

val_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

class_names = train_ds.class_names
#print(class_names)

# Normalize pixel values to [0, 1]
normalization_layer = layers.Rescaling(1./255)
train_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
val_ds = val_ds.map(lambda x, y: (normalization_layer(x), y))
num_classes = len(class_names)
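Before training, it can help to confirm the split; a minimal sanity check using the objects defined above (the printed shapes assume the default batch size of 32):

# List the classes and look at the shape of one batch
print(class_names)       # ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']
for images, labels in train_ds.take(1):
    print(images.shape)  # (32, 180, 180, 3)
    print(labels.shape)  # (32,)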

Tip

For descriptions and usage of the TensorFlow APIs, see https://tensorflow.google.cn/api_docs/python/tf

1.2.2. Building and Training the Model

The Keras Sequential model consists of three convolution blocks (tf.keras.layers.Conv2D), each followed by a max-pooling layer (tf.keras.layers.MaxPooling2D), with a 128-unit fully connected layer (tf.keras.layers.Dense) on top. The convolution blocks and this Dense layer all use ReLU as the activation function, while the final Dense layer outputs raw logits, one per class.

tensorflow_classification.py (see the accompanying example code)
# Build the network
model = Sequential([
    layers.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(num_classes)
])

# Configure the optimizer ('adam'), the loss function, and the evaluation metric
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# Print the structure of each layer
model.summary()

# Train the model
epochs = 10
model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs,
)
The layer structure and training output:
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
rescaling (Rescaling)        (None, 180, 180, 3)       0
_________________________________________________________________
conv2d (Conv2D)              (None, 180, 180, 16)      448
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 90, 90, 16)        0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 90, 90, 32)        4640
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 45, 45, 32)        0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 45, 45, 64)        18496
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 22, 22, 64)        0
_________________________________________________________________
flatten (Flatten)            (None, 30976)             0
_________________________________________________________________
dense (Dense)                (None, 128)               3965056
_________________________________________________________________
dense_1 (Dense)              (None, 5)                 645
=================================================================
Total params: 3,989,285
Trainable params: 3,989,285
Non-trainable params: 0
_________________________________________________________________
Epoch 1/10
92/92 [==============================] - 65s 644ms/step - loss: 1.4799 - accuracy: 0.3689 - val_loss: 1.1546 - val_accuracy: 0.4877
Epoch 2/10
92/92 [==============================] - 52s 560ms/step - loss: 1.0406 - accuracy: 0.5777 - val_loss: 1.0439 - val_accuracy: 0.5913
Epoch 3/10
92/92 [==============================] - 51s 555ms/step - loss: 0.8372 - accuracy: 0.6764 - val_loss: 1.0274 - val_accuracy: 0.6022
Epoch 4/10
92/92 [==============================] - 51s 552ms/step - loss: 0.6344 - accuracy: 0.7698 - val_loss: 1.0563 - val_accuracy: 0.5831
Epoch 5/10
92/92 [==============================] - 51s 553ms/step - loss: 0.4266 - accuracy: 0.8539 - val_loss: 1.2350 - val_accuracy: 0.6104
Epoch 6/10
92/92 [==============================] - 51s 553ms/step - loss: 0.2695 - accuracy: 0.9087 - val_loss: 1.4331 - val_accuracy: 0.6213
Epoch 7/10
92/92 [==============================] - 51s 554ms/step - loss: 0.1561 - accuracy: 0.9533 - val_loss: 1.6564 - val_accuracy: 0.5981
Epoch 8/10
92/92 [==============================] - 51s 554ms/step - loss: 0.0725 - accuracy: 0.9785 - val_loss: 2.0034 - val_accuracy: 0.6281
Epoch 9/10
92/92 [==============================] - 51s 556ms/step - loss: 0.0440 - accuracy: 0.9888 - val_loss: 2.1280 - val_accuracy: 0.6076
Epoch 10/10
92/92 [==============================] - 51s 555ms/step - loss: 0.0194 - accuracy: 0.9973 - val_loss: 2.4841 - val_accuracy: 0.5886

From the training results above, training accuracy (accuracy) increases roughly linearly over time and reaches about 99%, while validation accuracy (val_accuracy) stalls around 60%. Such a large gap between training and validation accuracy is a sign of overfitting: the model fits the training data far better than data it has never seen, which means it will generalize poorly to new datasets.

Overfitting typically arises when the dataset has too few samples, the samples are too noisy, or the model is overly complex. Here the training set is probably too small, so the model memorizes noise instead of the real input-output relationship, leading to overfitting.
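The gap is easy to visualize by keeping the History object that model.fit returns and plotting the two accuracy curves; a minimal sketch, assuming matplotlib is installed and train_ds, val_ds, and epochs are defined as above:

import matplotlib.pyplot as plt

# Keep the training history instead of discarding the return value of model.fit
history = model.fit(train_ds, validation_data=val_ds, epochs=epochs)

plt.plot(range(epochs), history.history['accuracy'], label='Training Accuracy')
plt.plot(range(epochs), history.history['val_accuracy'], label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.show()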

Here we augment the dataset: Keras preprocessing layers randomly flip, rotate, and zoom the images, using the tf.keras.layers.RandomFlip, tf.keras.layers.RandomRotation, and tf.keras.layers.RandomZoom interfaces. In addition, a Dropout layer is added, which temporarily drops network units with a given probability during training. Example code:

tensorflow_classification.py (see the accompanying example code)
# Build the network, adding data_augmentation preprocessing layers
data_augmentation = keras.Sequential(
    [
        layers.RandomFlip("horizontal",
                          input_shape=(img_height,
                                       img_width,
                                       3)),
        layers.RandomRotation(0.1),
        layers.RandomZoom(0.1),
    ]
)

model = Sequential([
    data_augmentation,    # data augmentation layers
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Dropout(0.2),   # Dropout layer
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(num_classes, name="outputs")
])

Training output after these changes:

Epoch 1/15
2023-02-20 19:39:40.074247: I tensorflow/stream_executor/cuda/cuda_dnn.cc:369] Loaded cuDNN version 8202
92/92 [==============================] - 115s 759ms/step - loss: 1.3321 - accuracy: 0.4179 - val_loss: 1.0552 - val_accuracy: 0.5872
Epoch 2/15
92/92 [==============================] - 58s 635ms/step - loss: 1.0306 - accuracy: 0.5978 - val_loss: 0.9923 - val_accuracy: 0.6267
Epoch 3/15
92/92 [==============================] - 58s 635ms/step - loss: 0.9465 - accuracy: 0.6345 - val_loss: 0.9616 - val_accuracy: 0.6240
Epoch 4/15
92/92 [==============================] - 61s 661ms/step - loss: 0.8700 - accuracy: 0.6604 - val_loss: 0.8555 - val_accuracy: 0.6717
Epoch 5/15
92/92 [==============================] - 61s 665ms/step - loss: 0.7899 - accuracy: 0.6993 - val_loss: 0.8514 - val_accuracy: 0.6717
Epoch 6/15
92/92 [==============================] - 57s 620ms/step - loss: 0.7789 - accuracy: 0.6979 - val_loss: 0.7658 - val_accuracy: 0.6907
Epoch 7/15
92/92 [==============================] - 59s 640ms/step - loss: 0.7197 - accuracy: 0.7159 - val_loss: 0.7940 - val_accuracy: 0.6826
Epoch 8/15
92/92 [==============================] - 59s 644ms/step - loss: 0.6956 - accuracy: 0.7367 - val_loss: 0.7710 - val_accuracy: 0.6907
Epoch 9/15
92/92 [==============================] - 60s 657ms/step - loss: 0.6532 - accuracy: 0.7510 - val_loss: 0.7250 - val_accuracy: 0.7112
Epoch 10/15
92/92 [==============================] - 62s 672ms/step - loss: 0.6189 - accuracy: 0.7657 - val_loss: 0.7893 - val_accuracy: 0.7044
Epoch 11/15
92/92 [==============================] - 60s 652ms/step - loss: 0.6221 - accuracy: 0.7674 - val_loss: 0.6982 - val_accuracy: 0.7262
Epoch 12/15
92/92 [==============================] - 60s 651ms/step - loss: 0.5598 - accuracy: 0.7888 - val_loss: 0.6821 - val_accuracy: 0.7357
Epoch 13/15
92/92 [==============================] - 60s 649ms/step - loss: 0.5519 - accuracy: 0.7885 - val_loss: 0.7939 - val_accuracy: 0.7084
Epoch 14/15
92/92 [==============================] - 61s 667ms/step - loss: 0.5387 - accuracy: 0.7997 - val_loss: 0.8331 - val_accuracy: 0.6880
Epoch 15/15
92/92 [==============================] - 60s 653ms/step - loss: 0.5068 - accuracy: 0.8151 - val_loss: 0.6627 - val_accuracy: 0.7466

After adding the Keras preprocessing layers and the Dropout layer, training and validation accuracy track each other much more closely: validation accuracy climbs to roughly 74%, so the overfitting is clearly reduced.

1.2.3. Testing the Model

Download a sunflower image that is not part of the training set and run a prediction on it:

tensorflow_classification.py (see the accompanying example code)
# Download and load a test image
sunflower_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/592px-Red_sunflower.jpg"
sunflower_path = tf.keras.utils.get_file('Red_sunflower', origin=sunflower_url)

img = tf.keras.utils.load_img(
    sunflower_path, target_size=(img_height, img_width)
)
img_array = tf.keras.utils.img_to_array(img)
img_array = tf.expand_dims(img_array, 0)  # add a batch dimension

# Run inference
predictions = model.predict(img_array)
score = tf.nn.softmax(predictions[0])

# Print the result
print(
    "This image most likely belongs to {} with a {:.2f} percent confidence."
    .format(class_names[np.argmax(score)], 100 * np.max(score))
)

1.2.4. TensorFlow Lite Model

The trained Keras Sequential model is then converted to a TensorFlow Lite model with tf.lite.TFLiteConverter.from_keras_model:

tensorflow_classification.py (see the accompanying example code)
# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save the model.
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)

For an introduction to the TensorFlow Lite converter workflow, see the tutorial.
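Before moving on to RKNN, the converted file can be sanity-checked on the PC with the TFLite interpreter; a minimal sketch, assuming model.tflite was saved as above:

import numpy as np
import tensorflow as tf

# Load the converted model and allocate its tensors
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run one dummy image of the expected shape through the model
dummy = np.random.rand(1, 180, 180, 3).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']).shape)  # (1, 5)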

1.2.5. Model Conversion and Simulator Testing

Export the RKNN model with rknn-Toolkit2:

rknn_transfer.py (see the accompanying example code)
import cv2
import numpy as np
import tensorflow as tf
from rknn.api import RKNN

img_height = 180
img_width = 180
IMG_PATH = 'test.jpg'
class_names = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']

if __name__ == '__main__':

    # Create RKNN object
    #rknn = RKNN(verbose='Debug')
    rknn = RKNN()

    # Pre-process config
    print('--> Config model')
    rknn.config(mean_values=[0, 0, 0], std_values=[255, 255, 255], target_platform='rk3568')
    print('done')

    # Load model
    print('--> Loading model')
    ret = rknn.load_tflite(model='model.tflite')
    if ret != 0:
        print('Load model failed!')
        exit(ret)
    print('done')

    # Build model
    print('--> Building model')
    ret = rknn.build(do_quantization=False)
    #ret = rknn.build(do_quantization=True, dataset='./dataset.txt')
    if ret != 0:
        print('Build model failed!')
        exit(ret)
    print('done')

    # Export rknn model
    print('--> Export rknn model')
    ret = rknn.export_rknn('./model.rknn')
    if ret != 0:
        print('Export rknn model failed!')
        exit(ret)
    print('done')

    # Init runtime environment
    print('--> Init runtime environment')
    ret = rknn.init_runtime()
    if ret != 0:
        print('Init runtime environment failed!')
        exit(ret)
    print('done')

    # Load and preprocess the test image
    img = cv2.imread(IMG_PATH)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (180, 180))
    img = np.expand_dims(img, 0)

    #print('--> Accuracy analysis')
    #rknn.accuracy_analysis(inputs=['./test.jpg'])
    #print('done')

    # Run inference on the simulator
    print('--> Running model')
    outputs = rknn.inference(inputs=[img])
    print(outputs)
    outputs = tf.nn.softmax(outputs)
    print(outputs)

    print(
        "This image most likely belongs to {} with a {:.2f} percent confidence."
        .format(class_names[np.argmax(outputs)], 100 * np.max(outputs))
    )
    #print("Predicted class:", class_names[np.argmax(outputs)])
    print('done')

    rknn.release()

The test output:

W __init__: rknn-toolkit2 version: 1.4.0-22dcfef4
--> Config model
done
--> Loading model
done
--> Building model
done
--> Export rknn model
done
--> Init runtime environment
W init_runtime: Target is None, use simulator!
done
--> Running model
Analysing : 100%|███████████████████████████████████████████████████| 18/18 [00:00<00:00, 90.12it/s]
Preparing : 100%|███████████████████████████████████████████████████| 18/18 [00:01<00:00, 13.98it/s]
[array([[-4.5390625 , -1.2275391 , -0.47338867,  4.75      ,  0.34350586]],
  dtype=float32)]
tf.Tensor([[[9.0598274e-05 2.4848275e-03 5.2822586e-03 9.8018610e-01 1.1956180e-02]]], shape=(1, 1, 5), dtype=float32)
This image most likely belongs to sunflowers with a 98.02 percent confidence.
done
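The rknn.build call above runs without quantization; the commented-out variant enables quantization and reads calibration images from dataset.txt, which is simply a text file listing one image path per line. For example (a hypothetical file, assuming test.jpg sits next to the script):

./test.jpg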

1.3. Deployment and Inference Testing

On the LubanCat board, run the inference with rknn-toolkit-lite2, taking RK356X as an example:

rknnlite_inference.py (see the accompanying example code)
import cv2
import numpy as np
from rknnlite.api import RKNNLite

IMG_PATH = 'test1.jpg'
RKNN_MODEL = 'model.rknn'
img_height = 180
img_width = 180
class_names = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']

# Create RKNNLite object
rknn_lite = RKNNLite()

# Load the RKNN model
print('--> Load RKNN model')
ret = rknn_lite.load_rknn(RKNN_MODEL)
if ret != 0:
    print('Load RKNN model failed')
    exit(ret)
print('done')

# Init runtime environment
print('--> Init runtime environment')
ret = rknn_lite.init_runtime()
if ret != 0:
    print('Init runtime environment failed!')
    exit(ret)
print('done')

# Load and preprocess the image
img = cv2.imread(IMG_PATH)
img = cv2.resize(img, (180, 180))
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = np.expand_dims(img, 0)

# Run the model
print('--> Running model')
outputs = rknn_lite.inference(inputs=[img])
print("result: ", outputs)
print(
    "This image most likely belongs to {}."
    .format(class_names[np.argmax(outputs)])
)

rknn_lite.release()

Sample test output on the board:

--> Load RKNN model
done
--> Init runtime environment
I RKNN: [17:28:01.594] RKNN Runtime Information: librknnrt version: 1.4.0 (a10f100eb@2022-09-09T09:07:14)
I RKNN: [17:28:01.594] RKNN Driver Information: version: 0.7.2
I RKNN: [17:28:01.595] RKNN Model Information: version: 1, toolkit version: 1.4.0-22dcfef4(compiler version: 1.4.0 (3b4520e4f@2022-09-05T20:52:35)), target: RKNPU lite, target platform: rk3568, framework name: TFLite, framework layout: NHWC
done
--> Running model
result:  [array([[-3.7226562, -1.2607422, -0.5805664,  3.5585938,  0.296875 ]],
    dtype=float32)]
This image most likely belongs to sunflowers.
done
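The board-side script prints raw logits, and TensorFlow is usually not installed on the board; if a confidence score is wanted, softmax can be computed with NumPy instead (a minimal sketch using the outputs variable from the script above):

import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / np.sum(e, axis=-1, keepdims=True)

probs = softmax(outputs[0])
print("confidence: {:.2f}%".format(100 * np.max(probs)))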

1.4. Summary

This chapter used TensorFlow to build a simple flower image classifier, converted the model to an RKNN model, and deployed it on a LubanCat board. The final validation accuracy is just over 70%, which is adequate for learning purposes; the example is based on the image classification tutorial on the official TensorFlow website.