Training a classifier on your own dataset with TensorFlow 2.3

Preface: Keras and TensorFlow are both deep learning frameworks, but they differ in a few important ways. Keras is a high-level API built on top of TensorFlow, written in pure Python, which makes its code more concise and beginner-friendly. However, because Keras adds another layer of wrapping over TensorFlow, it can run somewhat slower. (This article actually uses the Keras API; on small datasets the speed difference is negligible.)

Note: the experimental results in this article are not meant as a reference; the trained models suffer from fairly severe overfitting. If you have made progress on the overfitting problem, I would be glad to discuss it with you or hear your advice. Also, some of the code in this article is taken from other posts; it will be removed on request.

I. Basics

1. Distinguishing the training, validation, and test sets in facial expression recognition

In facial expression recognition, splitting the data into training, validation, and test sets is an important step. A dataset is usually divided into three parts:

  1. Training set: the data used to train the model. In this example, the 28,708 images of the FER2013 facial expression dataset are used as the training set.
  2. Validation set: the data used to tune model parameters and select the best model. In this example, the 3,589 images of the public validation split (PublicTest) are used as the validation set.
  3. Test set: the data used to evaluate the final model's performance. In this example, the 3,589 images of the private validation split (PrivateTest) are used as the test set.

The purpose of this split is to ensure the model can generalize to unseen data and to evaluate its performance on different datasets. To further improve accuracy and robustness, different network architectures such as deep convolutional networks or ResNet can also be used. In practice, the ratio between training, validation, and test data can be adjusted as needed to balance training against evaluation. (In this article the validation set and test set are shared, with val_scale=0.2.) A minimal sketch of a two-stage split is shown below.
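As a hedged illustration only (the split actually used later in this article instead copies image files into train/val/test folders), here is how a 0.8/0.1/0.1 split could be done on in-memory arrays with scikit-learn's train_test_split; X and y are placeholder arrays, not data from this article:

import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(100, 48 * 48)   # placeholder features
y = np.random.randint(0, 7, 100)   # placeholder labels for 7 emotion classes

# First set aside 20% for validation + test, then split that portion in half.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.2, random_state=123)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=123)
print(len(X_train), len(X_val), len(X_test))  # 80 10 10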

2. Why datasets are so often distributed as CSV files

The CSV format is widely used for storing and exchanging datasets, mainly for the following reasons:

  1. CSV is plain text, which makes exchanging data easy across different programs, programming languages, and operating systems.
  2. CSV files can easily be imported into spreadsheets and databases such as Excel or SQL Server. In Python, for example, pandas' pd.read_csv() function reads a CSV file directly.
  3. The structure of a CSV file is clear and easy to understand and manipulate: each row is one data point and each column is one feature, which keeps reading and processing simple.
  4. Large datasets stored as CSV are easy to split, for example with the train_test_split function from Python's sklearn library.

CSV is therefore the format of choice for storing and exchanging datasets, thanks to its flexibility, ease of use, and broad compatibility. A short loading sketch follows.
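A minimal sketch of loading such a file (assuming a FER2013-style file named fer2013.csv with emotion, pixels, and Usage columns; the path is hypothetical):

import pandas as pd

df = pd.read_csv("fer2013.csv")               # hypothetical FER2013-style file
print(df.columns.tolist())                    # ['emotion', 'pixels', 'Usage']
train_df = df[df['Usage'] == 'Training']      # 28708 rows in FER2013
val_df = df[df['Usage'] == 'PublicTest']      # 3589 rows
test_df = df[df['Usage'] == 'PrivateTest']    # 3589 rows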

3. Standard deviation

The standard deviation is a number that describes how spread out the values are.

For example, suppose we have recorded the speeds of 7 cars:

speed = [86,87,88,86,87,85,86]

The standard deviation here is 0.9, which means that most of the values lie within 0.9 of the mean, and the mean is 86.4.

Fuzziness about basic concepts like these can leave you not knowing how to tune parameters later when training models, so make sure you have a clear grasp of them.
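The numbers above can be checked with NumPy (a quick sketch, not part of the original tutorial; np.std returns the population standard deviation by default):

import numpy as np

speed = [86, 87, 88, 86, 87, 85, 86]
print(np.mean(speed))  # 86.428... ≈ 86.4
print(np.std(speed))   # 0.903... ≈ 0.9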

4. Linear regression

A linear regression model can be used to predict future values, and prediction is central to machine learning.

Machine Learning - Linear Regression (w3school.com.cn)

Plotting the regression line:

import matplotlib.pyplot as plt  # pyplot module for plotting
from scipy import stats  # stats module for statistical analysis

x = [5,7,8,7,2,17,2,9,4,11,12,9,6]  # x-axis data
y = [99,86,87,88,111,86,103,87,94,78,77,85,86]  # y-axis data

slope, intercept, r, p, std_err = stats.linregress(x, y)  # slope, intercept, correlation coefficient, p-value, and standard error of the fit

def myfunc(x):  # evaluate the fitted line at x
  return slope * x + intercept  # y = k*x + b

mymodel = list(map(myfunc, x))  # apply myfunc to every x value to get the fitted y values

plt.scatter(x, y)  # scatter plot of the raw data points
plt.plot(x, mymodel)  # the fitted regression line
plt.show()  # display the figure
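linregress also returns the correlation coefficient r, which the script above computes but never uses; printing it is a quick check of how well the line fits (for this dataset the W3Schools tutorial reports r ≈ -0.76, a fairly strong negative relationship):

print(slope, intercept, r)  # r close to -1 or 1 indicates a strong linear relationship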

[Figure: scatter plot of the data points with the fitted regression line]

II. Overall training workflow

1. Collecting a dataset
# -*- coding: utf-8 -*-
# @Time    : 2023/11/28 22:00
# @Author  : ####Jzh##
# @Email   : [email protected]
# @File    : get_data.py
# @Software: PyCharm
# @Brief   : Crawl images from Baidu Image search

import requests
import re
import os

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Safari/537.36'}
name = input('Enter the image category to crawl: ')
num = 0    # images downloaded successfully
num_1 = 0  # duplicates skipped
num_2 = 0  # failed downloads
x = input('How many images to crawl? (1 = 60 images, 2 = 120 images): ')
list_1 = []  # URLs seen so far, used to skip duplicates
for i in range(int(x)):
    name_1 = os.getcwd()
    name_2 = os.path.join(name_1, 'data/' + name)
    url = 'https://image.baidu.com/search/flip?tn=baiduimage&ie=utf-8&word=' + name + '&pn=' + str(i * 30)
    res = requests.get(url, headers=headers)
    html_1 = res.content.decode()
    a = re.findall('"objURL":"(.*?)",', html_1)
    if not os.path.exists(name_2):
        os.makedirs(name_2)
    for b in a:
        try:
            b_1 = re.findall('https:(.*?)&', b)
            b_2 = ''.join(b_1)
            if b_2 not in list_1:
                num = num + 1
                img = requests.get(b)
                f = open(os.path.join(name_1, 'data/' + name, name + str(num) + '.jpg'), 'ab')
                print('--------- Downloading image ' + str(num) + ' ----------')
                f.write(img.content)
                f.close()
                list_1.append(b_2)
            elif b_2 in list_1:
                num_1 = num_1 + 1
                continue
        except Exception as e:
            print('--------- Image ' + str(num) + ' could not be downloaded ----------')
            num_2 = num_2 + 1
            continue

print('Done: {} images processed in total, {} downloaded, {} duplicates skipped, {} failed'.format(num + num_1 + num_2, num, num_1, num_2))

2. Organizing the dataset

(1) Splitting the image data

# -*- coding: utf-8 -*-
# @Time    : 2023/11/28 22:39
# @Author  : ####Jzh##
# @Email   : [email protected]
# @File    : data_split.py
# @Software: PyCharm
# @Brief   : Split an (image) dataset into training, validation, and test sets
import os
import random
from shutil import copy2


def data_set_split(src_data_folder, target_data_folder, train_scale=0.8, val_scale=0.2, test_scale=0.0):
    '''
    Read the source data folder and generate the split folders train, val, and test
    :param src_data_folder: source folder, e.g. D:/code/CNN/data
    :param target_data_folder: target folder, e.g. D:/code/CNN/data
    :param train_scale: training set fraction
    :param val_scale: validation set fraction
    :param test_scale: test set fraction
    :return:
    '''
    print("Starting dataset split")
    class_names = os.listdir(src_data_folder)
    # Create the split folders under the target directory
    split_names = ['train', 'val', 'test']
    for split_name in split_names:
        split_path = os.path.join(target_data_folder, split_name)
        if not os.path.isdir(split_path):
            os.mkdir(split_path)
        # Then create one folder per class under split_path
        for class_name in class_names:
            class_split_path = os.path.join(split_path, class_name)
            if not os.path.isdir(class_split_path):
                os.mkdir(class_split_path)

    # Split the dataset according to the given ratios and copy the images over,
    # iterating class by class
    for class_name in class_names:
        current_class_data_path = os.path.join(src_data_folder, class_name)
        current_all_data = os.listdir(current_class_data_path)
        current_data_length = len(current_all_data)
        current_data_index_list = list(range(current_data_length))
        random.shuffle(current_data_index_list)

        train_folder = os.path.join(os.path.join(target_data_folder, 'train'), class_name)
        val_folder = os.path.join(os.path.join(target_data_folder, 'val'), class_name)
        test_folder = os.path.join(os.path.join(target_data_folder, 'test'), class_name)
        train_stop_flag = current_data_length * train_scale
        val_stop_flag = current_data_length * (train_scale + val_scale)
        current_idx = 0
        train_num = 0
        val_num = 0
        test_num = 0
        for i in current_data_index_list:
            src_img_path = os.path.join(current_class_data_path, current_all_data[i])
            if current_idx <= train_stop_flag:
                copy2(src_img_path, train_folder)
                # print("{} copied to {}".format(src_img_path, train_folder))
                train_num = train_num + 1
            elif (current_idx > train_stop_flag) and (current_idx <= val_stop_flag):
                copy2(src_img_path, val_folder)
                # print("{} copied to {}".format(src_img_path, val_folder))
                val_num = val_num + 1
            else:
                copy2(src_img_path, test_folder)
                # print("{} copied to {}".format(src_img_path, test_folder))
                test_num = test_num + 1

            current_idx = current_idx + 1

        print("*********************************{}*************************************".format(class_name))
        print(
            "Class {} split at ratio {}:{}:{} complete, {} images in total".format(class_name, train_scale, val_scale, test_scale, current_data_length))
        print("Training set {}: {} images".format(train_folder, train_num))
        print("Validation set {}: {} images".format(val_folder, val_num))
        print("Test set {}: {} images".format(test_folder, test_num))


if __name__ == '__main__':
    src_data_folder = "D:/code/CNN/data"   # todo change to your source dataset path
    target_data_folder = "D:/code/CNN/data"  # todo change to where the split should be written
    data_set_split(src_data_folder, target_data_folder)
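One caveat that shows up in the training log later (a stray 注意.txt file ends up listed as an eighth class): os.listdir returns files as well as folders, and because src and target here point at the same directory, a second run would also treat the generated train/val/test folders as classes. A hedged fix is to replace the class_names line inside data_set_split with a filtered version:

    # Hedged variant of the class_names line: keep only real class subfolders
    class_names = [d for d in os.listdir(src_data_folder)
                   if os.path.isdir(os.path.join(src_data_folder, d))
                   and d not in ('train', 'val', 'test')]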
    

(2) Splitting CSV data

import pandas as pd  # pandas for data handling
import numpy as np  # numpy for numerical work
import keras  # keras for the to_categorical utility


# path: path to the CSV dataset; rows data_size_begin..data_size_end are loaded
def dataRead(path, x_data, y_data, data_size_begin, data_size_end):
    # Load the data
    train_data = pd.read_csv(path)  # read the CSV file with pandas
    num_of_instances = len(train_data)  # total number of rows (unused below)
    min_data = train_data.iloc[data_size_begin:data_size_end]  # take the requested slice
    pixels = min_data['pixels']  # the pixel column
    emotions = min_data['emotion']  # the emotion label column
    print("Dataset loaded, size:")
    print(len(pixels))  # number of pixel strings

    # Number of emotion classes
    num_classes = 7  # 7 classes: anger, disgust, happy, sadness, fear, surprise, and neutral

    for emotion, img in zip(emotions, pixels):  # iterate over labels and pixel strings
        try:
            emotion = keras.utils.to_categorical(emotion, num_classes)  # one-hot encode the label
            val = img.split(" ")  # split the pixel string on spaces
            pixel_array = np.array(val, 'float32')  # convert to a float32 NumPy array
            x_data.append(pixel_array)  # store the processed pixels
            y_data.append(emotion)  # store the one-hot encoded label
        except Exception as e:
            print("Skipping a malformed row:", e)

    print("Emotion labelling finished")
    print(len(x_data))  # number of processed samples

    x_data = np.array(x_data)  # convert the x_data list to a NumPy array
    y_data = np.array(y_data)  # convert the y_data list to a NumPy array
    x_data = x_data.reshape(-1, 48, 48, 1)  # reshape to (samples, height, width, channels)
    print("Dataset format conversion finished")
    print(len(x_data))
    return [x_data, y_data]  # return both arrays
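A hedged usage sketch (fer2013.csv is an assumed path to a FER2013-style file; in FER2013 the first 28,708 rows are the training split):

x_train, y_train = dataRead("fer2013.csv", [], [], 0, 28708)
print(x_train.shape, y_train.shape)  # (28708, 48, 48, 1) (28708, 7) if every row parses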

3. Training the model
# -*- coding: utf-8 -*-
# @Time    : 2023/11/29 19:13
# @Author  : ####Jzh##
# @Email   : [email protected]
# @File    : train_cnn.py
# @Software: PyCharm
# @Brief   : CNN training script; trained models are saved in the models directory, the training curves in the results directory

import tensorflow as tf
import matplotlib.pyplot as plt
from time import time


# Dataset loading: point at the dataset folders, resize every image to img_height*img_width, and batch them
# data_dir: training set, test_data_dir: validation set
def data_load(data_dir, test_data_dir, img_height, img_width, batch_size):
    # Load the training set
    train_ds = tf.keras.preprocessing.image_dataset_from_directory(
        data_dir,
        label_mode='categorical',
        seed=123,
        image_size=(img_height, img_width),
        batch_size=batch_size)
    # Load the validation set
    val_ds = tf.keras.preprocessing.image_dataset_from_directory(
        test_data_dir,
        label_mode='categorical',
        seed=123,
        image_size=(img_height, img_width),
        batch_size=batch_size)
    class_names = train_ds.class_names
    # Return the processed training set, validation set, and class names
    return train_ds, val_ds, class_names


# Build the CNN model
def model_load(IMG_SHAPE=(224, 224, 3), class_num=12):
    # Assemble the model
    model = tf.keras.models.Sequential([
        # Normalize the input: rescale pixel values from the 0-255 range into 0-1
        tf.keras.layers.experimental.preprocessing.Rescaling(1. / 255, input_shape=IMG_SHAPE),
        # Convolutional layer: 32 output channels, 3*3 kernels, ReLU activation
        tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
        # Pooling layer with a 2*2 kernel
        tf.keras.layers.MaxPooling2D(2, 2),
        # Add another convolution
        # Convolutional layer: 64 output channels, 3*3 kernels, ReLU activation
        tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
        # Max-pooling layer over 2*2 regions
        tf.keras.layers.MaxPooling2D(2, 2),
        # Flatten the 2-D output into 1-D; the fully connected layers of a CNN need one-dimensional input.
        tf.keras.layers.Flatten(),
        # A fully connected layer with 128 neurons and ReLU (Rectified Linear Unit) activation.
        # ReLU is a common activation function; it adds non-linearity so the model can learn more complex patterns.
        tf.keras.layers.Dense(128, activation='relu'),
        # A fully connected layer with class_num neurons and softmax activation.
        # Softmax turns the outputs into a probability distribution whose class probabilities sum to 1,
        # which is exactly what a multi-class classifier needs: a predicted probability per class.
        tf.keras.layers.Dense(class_num, activation='softmax')
    ])
    # Print the model summary
    model.summary()
    # Configure training: SGD optimizer and categorical cross-entropy loss
    model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
    # Return the model
    return model


# Plot the training curves
def show_loss_acc(history):
    # Pull training/validation accuracy and loss out of the history object
    acc = history.history['accuracy']
    val_acc = history.history['val_accuracy']
    loss = history.history['loss']
    val_loss = history.history['val_loss']

    # Draw the two plots stacked vertically
    plt.figure(figsize=(8, 8))
    plt.subplot(2, 1, 1)
    plt.plot(acc, label='Training Accuracy')
    plt.plot(val_acc, label='Validation Accuracy')
    plt.legend(loc='lower right')
    plt.ylabel('Accuracy')
    plt.ylim([min(plt.ylim()), 1])
    plt.title('Training and Validation Accuracy')

    plt.subplot(2, 1, 2)
    plt.plot(loss, label='Training Loss')
    plt.plot(val_loss, label='Validation Loss')
    plt.legend(loc='upper right')
    plt.ylabel('Cross Entropy')
    plt.title('Training and Validation Loss')
    plt.xlabel('epoch')
    plt.savefig('results/results_cnn.png', dpi=100)


def train(epochs):
    # Record the start time
    begin_time = time()
    # todo load the data; change to your dataset paths
    train_ds, val_ds, class_names = data_load("D:/code/CNN/data/CK+48/split_data/train",
                                              "D:/code/CNN/data/CK+48/split_data/val", 224, 224, 16)
    print(class_names)
    # Build the model
    model = model_load(class_num=len(class_names))
    # Train for the given number of epochs
    history = model.fit(train_ds, validation_data=val_ds, epochs=epochs)
    # todo save the model; change to the model name you want
    model.save("models/cnn_ck+48.h5")
    # Record the end time
    end_time = time()
    run_time = end_time - begin_time
    print('Training run time:', run_time, "s")
    # Plot the training curves
    show_loss_acc(history)


if __name__ == '__main__':
    train(epochs=30)
    
4. Testing (validating) the model
# -*- coding: utf-8 -*-
# @Time    : 2023/11/29 19:15
# @Author  : ####Jzh##
# @Email   : [email protected]
# @File    : test_model.py
# @Software: PyCharm
# @Brief   : Model test script; testing produces a confusion-matrix heatmap saved in the results directory

import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np

plt.rcParams['font.family'] = ['sans-serif']
plt.rcParams['font.sans-serif'] = ['SimHei']


# Data loading: load the training set and the validation set from their respective folders
def data_load(data_dir, test_data_dir, img_height, img_width, batch_size):
    # Load the training set
    train_ds = tf.keras.preprocessing.image_dataset_from_directory(
        data_dir,
        label_mode='categorical',
        seed=123,
        image_size=(img_height, img_width),
        batch_size=batch_size)
    # Load the validation set
    val_ds = tf.keras.preprocessing.image_dataset_from_directory(
        test_data_dir,
        label_mode='categorical',
        seed=123,
        image_size=(img_height, img_width),
        batch_size=batch_size)
    class_names = train_ds.class_names
    # Return the processed training set, validation set, and class names
    return train_ds, val_ds, class_names


# Measure the MobileNet model's accuracy
def test_mobilenet():
    # todo load the data; change to your own dataset paths
    train_ds, test_ds, class_names = data_load("D:/code/CNN/data/CK+48/split_data/train",
                                               "D:/code/CNN/data/CK+48/split_data/val", 224, 224, 16)
    # todo load the model; change to your model name
    model = tf.keras.models.load_model("models/mobilenet_fv.h5")
    # model.summary()
    # Evaluate
    loss, accuracy = model.evaluate(test_ds)
    # Print the result
    print('Mobilenet test accuracy :', accuracy)

    test_real_labels = []
    test_pre_labels = []
    for test_batch_images, test_batch_labels in test_ds:
        test_batch_labels = test_batch_labels.numpy()
        test_batch_pres = model.predict(test_batch_images)
        # print(test_batch_pres)

        test_batch_labels_max = np.argmax(test_batch_labels, axis=1)
        test_batch_pres_max = np.argmax(test_batch_pres, axis=1)
        # print(test_batch_labels_max)
        # print(test_batch_pres_max)
        # Collect the true labels and the predicted labels
        for i in test_batch_labels_max:
            test_real_labels.append(i)

        for i in test_batch_pres_max:
            test_pre_labels.append(i)
        # break

    # print(test_real_labels)
    # print(test_pre_labels)
    class_names_length = len(class_names)
    heat_maps = np.zeros((class_names_length, class_names_length))
    for test_real_label, test_pre_label in zip(test_real_labels, test_pre_labels):
        heat_maps[test_real_label][test_pre_label] = heat_maps[test_real_label][test_pre_label] + 1

    print(heat_maps)
    heat_maps_sum = np.sum(heat_maps, axis=1).reshape(-1, 1)
    # print(heat_maps_sum)
    print()
    heat_maps_float = heat_maps / heat_maps_sum
    print(heat_maps_float)
    # title, x_labels, y_labels, harvest
    show_heatmaps(title="heatmap", x_labels=class_names, y_labels=class_names, harvest=heat_maps_float,
                  save_name="results/heatmap_mobilenet.png")


# Measure the CNN model's accuracy
def test_cnn():
    # todo load the data; change to your own dataset paths
    train_ds, test_ds, class_names = data_load("D:/code/CNN/data/CK+48/split_data/train",
                                               "D:/code/CNN/data/CK+48/split_data/val", 224, 224, 16)
    # todo load the model; change to your model name
    model = tf.keras.models.load_model("models/cnn_ck+48.h5")
    # model.summary()
    # Evaluate
    loss, accuracy = model.evaluate(test_ds)
    # Print the result
    print('CNN test accuracy :', accuracy)

    # Run inference batch by batch
    test_real_labels = []
    test_pre_labels = []
    for test_batch_images, test_batch_labels in test_ds:
        test_batch_labels = test_batch_labels.numpy()
        test_batch_pres = model.predict(test_batch_images)
        # print(test_batch_pres)

        test_batch_labels_max = np.argmax(test_batch_labels, axis=1)
        test_batch_pres_max = np.argmax(test_batch_pres, axis=1)
        # print(test_batch_labels_max)
        # print(test_batch_pres_max)
        # Collect the true labels and the predicted labels
        for i in test_batch_labels_max:
            test_real_labels.append(i)

        for i in test_batch_pres_max:
            test_pre_labels.append(i)
        # break

    # print(test_real_labels)
    # print(test_pre_labels)
    class_names_length = len(class_names)
    heat_maps = np.zeros((class_names_length, class_names_length))
    for test_real_label, test_pre_label in zip(test_real_labels, test_pre_labels):
        heat_maps[test_real_label][test_pre_label] = heat_maps[test_real_label][test_pre_label] + 1

    print(heat_maps)
    heat_maps_sum = np.sum(heat_maps, axis=1).reshape(-1, 1)
    # print(heat_maps_sum)
    print()
    heat_maps_float = heat_maps / heat_maps_sum
    print(heat_maps_float)
    # title, x_labels, y_labels, harvest
    show_heatmaps(title="heatmap", x_labels=class_names, y_labels=class_names, harvest=heat_maps_float,
                  save_name="results/heatmap_cnn.png")


def show_heatmaps(title, x_labels, y_labels, harvest, save_name):
    # Create the figure
    fig, ax = plt.subplots()
    # cmap https://blog.csdn.net/ztf312/article/details/102474190
    im = ax.imshow(harvest, cmap="OrRd")
    # Set the tick labels
    # We want to show all ticks...
    ax.set_xticks(np.arange(len(y_labels)))
    ax.set_yticks(np.arange(len(x_labels)))
    # ... and label them with the respective list entries
    ax.set_xticklabels(y_labels)
    ax.set_yticklabels(x_labels)

    # The x-axis labels are long, so rotate them for readability
    # Rotate the tick labels and set their alignment.
    plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
             rotation_mode="anchor")

    # Write the value of each cell onto the heatmap
    # Loop over data dimensions and create text annotations.
    for i in range(len(x_labels)):
        for j in range(len(y_labels)):
            text = ax.text(j, i, round(harvest[i, j], 2),
                           ha="center", va="center", color="black")
    ax.set_xlabel("Predict label")
    ax.set_ylabel("Actual label")
    ax.set_title(title)
    fig.tight_layout()
    plt.colorbar(im)
    plt.savefig(save_name, dpi=100)
    # plt.show()


if __name__ == '__main__':
    # test_mobilenet()
    test_cnn()
    
5. Code walkthrough
1. Convolution:
tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),

This line creates a convolutional layer with TensorFlow's Keras API. In detail:
tf.keras.layers.Conv2D: the TensorFlow class for creating convolutional layers.
32: the number of convolution kernels, i.e. the dimensionality of the output space.
(3, 3): the kernel size; the window slid over the input is 3 pixels wide and 3 pixels high.
activation='relu': the activation function, here ReLU (Rectified Linear Unit).


2. Pooling:
tf.keras.layers.MaxPooling2D(2, 2),

This line creates a max-pooling layer. In detail:
tf.keras.layers.MaxPooling2D: the TensorFlow class for creating max-pooling layers.
2, 2: the pooling window size; the window slid over the input is 2 wide and 2 high.
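A quick way to see what these two layers do to tensor shapes (a minimal sketch on a dummy batch; the shapes match the model summary printed during training):

import tensorflow as tf

x = tf.zeros((1, 224, 224, 3))  # a dummy batch of one RGB image
x = tf.keras.layers.Conv2D(32, (3, 3), activation='relu')(x)
print(x.shape)  # (1, 222, 222, 32): each 3*3 valid convolution trims one pixel off each side
x = tf.keras.layers.MaxPooling2D(2, 2)(x)
print(x.shape)  # (1, 111, 111, 32): 2*2 pooling halves height and width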


3. Loss function:
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])

This line compiles the Keras model with the stochastic gradient descent (SGD) optimizer, the categorical cross-entropy loss, and accuracy as the evaluation metric. SGD is an iterative method for minimizing the loss function (its noisy, batch-by-batch updates also keep the model from chasing the training-loss minimum too single-mindedly, which can help against overfitting).
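The string 'sgd' uses Keras defaults; passing an optimizer object makes the hyperparameters explicit and tunable (a hedged, equivalent form; 0.01 and 0.0 are simply Keras's documented defaults for SGD, not values tuned in this article):

opt = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.0)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])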
4. Activation function:
ReLU is a widely used activation function; it adds non-linearity to the model, allowing it to learn more complex patterns.
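Both activations used in the model are easy to check by hand (a minimal NumPy sketch):

import numpy as np

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
relu = np.maximum(0, x)  # negative inputs clamp to 0: [0. 0. 0. 1. 3.]
softmax = np.exp(x) / np.sum(np.exp(x))  # non-negative values that sum to 1
print(relu)
print(softmax, softmax.sum())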
6. Improving the model

When the training loss keeps falling while the validation loss keeps rising, the likely explanation is:

Overfitting: a very common problem. When a model performs well on the training set but poorly on the test set, it has probably overfitted the training data: it has learned the noise and incidental detail in the training set and lost the ability to generalize to new data. Regularization techniques such as L1 or L2 regularization, which make the weights decay, can reduce the risk of overfitting (this method has not yet been verified in this experiment).

L1: cost = (Wx - real_y)^2 + abs(W)

L2: cost = (Wx - real_y)^2 + (W)^2

L3 and higher orders follow the same pattern, raising the penalty term to higher powers: (W)^3, (W)^4, ...

This is a penalty mechanism. With y = Wx, W stands for the parameters the model has learned, and these tend to grow large; to keep the model from becoming over-confident and swinging too hard, W itself is folded into the error calculation, as the formulas above show. A hedged sketch of applying such a penalty in Keras follows.
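Here is a minimal sketch of what this could look like for the model from section II.3 (not verified in this article's experiments, as noted above; kernel_regularizer adds the (W)^2 penalty to the loss, and the 0.001 factor and the Dropout rate of 0.5 are illustrative guesses, not tuned values):

import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.models.Sequential([
    layers.experimental.preprocessing.Rescaling(1. / 255, input_shape=(224, 224, 3)),
    layers.Conv2D(32, (3, 3), activation='relu',
                  kernel_regularizer=regularizers.l2(0.001)),  # L2 weight penalty
    layers.MaxPooling2D(2, 2),
    layers.Conv2D(64, (3, 3), activation='relu',
                  kernel_regularizer=regularizers.l2(0.001)),
    layers.MaxPooling2D(2, 2),
    layers.Flatten(),
    layers.Dropout(0.5),  # a complementary defence: randomly drop activations during training
    layers.Dense(128, activation='relu',
                 kernel_regularizer=regularizers.l2(0.001)),
    layers.Dense(7, activation='softmax')
])
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])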

My intuition for overfitting: an overfitted model is like a conceited person living inside their own small circle. It recognizes the expressions of the people in that circle very accurately, but like a frog at the bottom of a well, it makes a mess of recognizing emotions from outside the circle in real use (it cannot represent data beyond the training data well!).


7. Using the model
# -*- coding: utf-8 -*-
# @Time    : 2023/11/29 19:29
# @Author  : ####Jzh##
# @Email   : [email protected]
# @File    : window.py
# @Software: PyCharm
# @Brief   : Graphical user interface

import tensorflow as tf
from PyQt5.QtGui import *
from PyQt5.QtCore import *
from PyQt5.QtWidgets import *
import sys
import cv2
from PIL import Image
import numpy as np
import shutil


class MainWindow(QTabWidget):
    # Initialization
    def __init__(self):
        super().__init__()
        self.setWindowIcon(QIcon('images/logo.png'))
        self.setWindowTitle('Emotion Recognition System')  # todo change the system name
        # Load the model
        self.model = tf.keras.models.load_model("models/cnn_ck+48.h5")  # todo change the model name
        self.to_predict_name = "images/Jzh.png"  # todo change the initial image; it must live in the images directory
        self.class_names = ['anger', 'contempt', 'disgust', 'fear', 'happy', 'sadness', 'surprise']  # todo change the class names; this list is printed at the start of training
        self.resize(900, 700)
        self.initUI()

    # Build the interface and arrange the widgets in the layout
    def initUI(self):
        main_widget = QWidget()
        main_layout = QHBoxLayout()
        font = QFont('楷体', 15)

        # Main page: create the widgets and place them in the layout
        left_widget = QWidget()
        left_layout = QVBoxLayout()
        img_title = QLabel("Sample")
        img_title.setFont(font)
        img_title.setAlignment(Qt.AlignCenter)
        self.img_label = QLabel()
        img_init = cv2.imread(self.to_predict_name)
        h, w, c = img_init.shape
        scale = 400 / h
        img_show = cv2.resize(img_init, (0, 0), fx=scale, fy=scale)
        cv2.imwrite("images/show.png", img_show)
        img_init = cv2.resize(img_init, (224, 224))
        cv2.imwrite('images/target.png', img_init)
        self.img_label.setPixmap(QPixmap("images/show.png"))
        left_layout.addWidget(img_title)
        left_layout.addWidget(self.img_label, 1, Qt.AlignCenter)
        left_widget.setLayout(left_layout)
        right_widget = QWidget()
        right_layout = QVBoxLayout()
        btn_change = QPushButton(" Upload Pictures ")
        btn_change.clicked.connect(self.change_img)
        btn_change.setFont(font)
        btn_predict = QPushButton(" Enable it ")
        btn_predict.setFont(font)
        btn_predict.clicked.connect(self.predict_img)
        label_result = QLabel(' Emotional categories ')
        self.result = QLabel("Waiting for recognition")
        label_result.setFont(QFont('Arial', 16))
        self.result.setFont(QFont('Arial', 24))
        right_layout.addStretch()
        right_layout.addWidget(label_result, 0, Qt.AlignCenter)
        right_layout.addStretch()
        right_layout.addWidget(self.result, 0, Qt.AlignCenter)
        right_layout.addStretch()
        right_layout.addStretch()
        right_layout.addWidget(btn_change)
        right_layout.addWidget(btn_predict)
        right_layout.addStretch()
        right_widget.setLayout(right_layout)
        main_layout.addWidget(left_widget)
        main_layout.addWidget(right_widget)
        main_widget.setLayout(main_layout)

        # About page: create the widgets and place them in the layout
        about_widget = QWidget()
        about_layout = QVBoxLayout()
        about_title = QLabel('Welcome to the emotion recognition system')  # todo change the welcome message
        about_title.setFont(QFont('Arial', 18))
        about_title.setAlignment(Qt.AlignCenter)
        about_img = QLabel()
        about_img.setPixmap(QPixmap('images/welcome.png'))
        about_img.setAlignment(Qt.AlignCenter)
        label_super = QLabel("Author:Jzh")  # todo change the author information
        label_super.setFont(QFont('Arial', 12))
        # label_super.setOpenExternalLinks(True)
        label_super.setAlignment(Qt.AlignRight)
        about_layout.addWidget(about_title)
        about_layout.addStretch()
        about_layout.addWidget(about_img)
        about_layout.addStretch()
        about_layout.addWidget(label_super)
        about_widget.setLayout(about_layout)

        # Register the tabs
        self.addTab(main_widget, 'homepage')
        self.addTab(about_widget, 'about')
        self.setTabIcon(0, QIcon('images/主页面.png'))
        self.setTabIcon(1, QIcon('images/关于.png'))

    # Upload and display an image
    def change_img(self):
        openfile_name = QFileDialog.getOpenFileName(self, 'Choose file', '',
                                                    'Image files(*.jpg *.png *.jpeg)')  # open a file-selection dialog
        img_name = openfile_name[0]  # selected file path
        if img_name == '':
            pass
        else:
            target_image_name = "images/tmp_up." + img_name.split(".")[-1]  # copy the image into the working directory
            shutil.copy(img_name, target_image_name)
            self.to_predict_name = target_image_name
            img_init = cv2.imread(self.to_predict_name)  # open the image
            h, w, c = img_init.shape
            scale = 400 / h
            img_show = cv2.resize(img_init, (0, 0), fx=scale, fy=scale)  # resize to a height of 400 for display
            cv2.imwrite("images/show.png", img_show)
            img_init = cv2.resize(img_init, (224, 224))  # resize to 224*224 for model inference
            cv2.imwrite('images/target.png', img_init)
            self.img_label.setPixmap(QPixmap("images/show.png"))
            self.result.setText("Waiting for recognition")

    # Run prediction on the current image
    def predict_img(self):
        img = Image.open('images/target.png')  # read the image
        img = np.asarray(img)  # convert it to a numpy array
        outputs = self.model.predict(img.reshape(1, 224, 224, 3))  # feed the image to the model
        result_index = int(np.argmax(outputs))
        result = self.class_names[result_index]  # look up the emotion class name
        self.result.setText(result)  # show it in the interface

    # Window-close event: ask the user to confirm
    def closeEvent(self, event):
        reply = QMessageBox.question(self,
                                     'Quit',
                                     "Do you want to quit the program?",
                                     QMessageBox.Yes | QMessageBox.No,
                                     QMessageBox.No)
        if reply == QMessageBox.Yes:
            self.close()
            event.accept()
        else:
            event.ignore()


if __name__ == "__main__":
    app = QApplication(sys.argv)
    x = MainWindow()
    x.show()
    sys.exit(app.exec_())
    


III. Data collected during training

1. CK+ dataset split

(1) CK+48 dataset

D:\software\anaconda\Ana\envs\tfjzh\python.exe D:\code\CNN\Tensorflow\vegetables_tf2.3-master\data_split.py
Starting dataset split
*********************************anger*************************************
Class anger split at ratio 0.8:0.2:0.0 complete, 135 images in total
Training set D:/code/CNN/data\train\anger: 109 images
Validation set D:/code/CNN/data\val\anger: 26 images
Test set D:/code/CNN/data\test\anger: 0 images
*********************************contempt*************************************
Class contempt split at ratio 0.8:0.2:0.0 complete, 54 images in total
Training set D:/code/CNN/data\train\contempt: 44 images
Validation set D:/code/CNN/data\val\contempt: 10 images
Test set D:/code/CNN/data\test\contempt: 0 images
*********************************disgust*************************************
Class disgust split at ratio 0.8:0.2:0.0 complete, 177 images in total
Training set D:/code/CNN/data\train\disgust: 142 images
Validation set D:/code/CNN/data\val\disgust: 35 images
Test set D:/code/CNN/data\test\disgust: 0 images
*********************************fear*************************************
Class fear split at ratio 0.8:0.2:0.0 complete, 75 images in total
Training set D:/code/CNN/data\train\fear: 61 images
Validation set D:/code/CNN/data\val\fear: 14 images
Test set D:/code/CNN/data\test\fear: 0 images
*********************************happy*************************************
Class happy split at ratio 0.8:0.2:0.0 complete, 207 images in total
Training set D:/code/CNN/data\train\happy: 166 images
Validation set D:/code/CNN/data\val\happy: 41 images
Test set D:/code/CNN/data\test\happy: 0 images
*********************************sadness*************************************
Class sadness split at ratio 0.8:0.2:0.0 complete, 84 images in total
Training set D:/code/CNN/data\train\sadness: 68 images
Validation set D:/code/CNN/data\val\sadness: 16 images
Test set D:/code/CNN/data\test\sadness: 0 images
*********************************surprise*************************************
Class surprise split at ratio 0.8:0.2:0.0 complete, 249 images in total
Training set D:/code/CNN/data\train\surprise: 200 images
Validation set D:/code/CNN/data\val\surprise: 49 images
Test set D:/code/CNN/data\test\surprise: 0 images


(2) emotion-domestic dataset

The emotion-domestic dataset comes already divided into train and test sets.


2. CNN model training

(1) CK+48 dataset

D:\software\anaconda\Ana\envs\tfjzh\python.exe D:\code\CNN\Tensorflow\vegetables_tf2.3-master\train_cnn.py
Found 790 files belonging to 8 classes.
2023-11-29 16:16:25.640883: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-11-29 16:16:25.657356: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x22148d76660 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2023-11-29 16:16:25.657489: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
Found 191 files belonging to 8 classes.
['anger', 'contempt', 'disgust', 'fear', 'happy', 'sadness', 'surprise', '注意.txt']
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
rescaling (Rescaling)        (None, 224, 224, 3)       0
_________________________________________________________________
conv2d (Conv2D)              (None, 222, 222, 32)      896
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 111, 111, 32)      0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 109, 109, 64)      18496
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 54, 54, 64)        0
_________________________________________________________________
flatten (Flatten)            (None, 186624)            0
_________________________________________________________________
dense (Dense)                (None, 128)               23888000
_________________________________________________________________
dense_1 (Dense)              (None, 8)                 1032
=================================================================
Total params: 23,908,424
Trainable params: 23,908,424
Non-trainable params: 0
_________________________________________________________________
Epoch 1/30
50/50 [==============================] - 22s 438ms/step - loss: 1.9303 - accuracy: 0.2911 - val_loss: 1.8007 - val_accuracy: 0.4660
Epoch 2/30
50/50 [==============================] - 23s 457ms/step - loss: 1.6601 - accuracy: 0.4114 - val_loss: 2.2456 - val_accuracy: 0.2565
Epoch 3/30
50/50 [==============================] - 22s 442ms/step - loss: 1.5892 - accuracy: 0.4165 - val_loss: 1.3964 - val_accuracy: 0.4660
Epoch 4/30
50/50 [==============================] - 26s 514ms/step - loss: 1.2503 - accuracy: 0.5582 - val_loss: 1.2911 - val_accuracy: 0.5707
Epoch 5/30
50/50 [==============================] - 25s 491ms/step - loss: 1.0965 - accuracy: 0.6000 - val_loss: 1.2392 - val_accuracy: 0.5707
Epoch 6/30
50/50 [==============================] - 24s 475ms/step - loss: 1.0049 - accuracy: 0.6532 - val_loss: 2.0597 - val_accuracy: 0.3770
Epoch 7/30
50/50 [==============================] - 23s 456ms/step - loss: 0.9175 - accuracy: 0.6810 - val_loss: 0.9478 - val_accuracy: 0.6440
Epoch 8/30
50/50 [==============================] - 23s 468ms/step - loss: 0.7228 - accuracy: 0.7481 - val_loss: 1.0766 - val_accuracy: 0.6178
Epoch 9/30
50/50 [==============================] - 24s 470ms/step - loss: 0.7094 - accuracy: 0.7620 - val_loss: 0.5544 - val_accuracy: 0.8534
Epoch 10/30
50/50 [==============================] - 25s 507ms/step - loss: 0.5451 - accuracy: 0.8051 - val_loss: 0.7986 - val_accuracy: 0.7330
Epoch 11/30
50/50 [==============================] - 25s 504ms/step - loss: 0.5583 - accuracy: 0.8291 - val_loss: 0.6646 - val_accuracy: 0.7958
Epoch 12/30
50/50 [==============================] - 25s 502ms/step - loss: 0.5343 - accuracy: 0.8468 - val_loss: 0.5115 - val_accuracy: 0.8010
Epoch 13/30
50/50 [==============================] - 25s 494ms/step - loss: 0.4917 - accuracy: 0.8519 - val_loss: 0.5664 - val_accuracy: 0.8063
Epoch 14/30
50/50 [==============================] - 25s 493ms/step - loss: 0.3818 - accuracy: 0.8684 - val_loss: 0.4623 - val_accuracy: 0.8325
Epoch 15/30
50/50 [==============================] - 25s 495ms/step - loss: 0.4484 - accuracy: 0.8684 - val_loss: 0.4187 - val_accuracy: 0.8639
Epoch 16/30
50/50 [==============================] - 24s 486ms/step - loss: 0.2996 - accuracy: 0.9177 - val_loss: 0.2799 - val_accuracy: 0.8901
Epoch 17/30
50/50 [==============================] - 23s 453ms/step - loss: 0.2518 - accuracy: 0.9089 - val_loss: 1.6127 - val_accuracy: 0.6754
Epoch 18/30
50/50 [==============================] - 23s 456ms/step - loss: 0.3216 - accuracy: 0.9013 - val_loss: 1.3081 - val_accuracy: 0.6911
Epoch 19/30
50/50 [==============================] - 23s 453ms/step - loss: 0.2166 - accuracy: 0.9380 - val_loss: 0.2863 - val_accuracy: 0.8743
Epoch 20/30
50/50 [==============================] - 23s 459ms/step - loss: 0.1956 - accuracy: 0.9430 - val_loss: 0.1953 - val_accuracy: 0.9319
Epoch 21/30
50/50 [==============================] - 22s 438ms/step - loss: 0.3971 - accuracy: 0.9038 - val_loss: 0.2676 - val_accuracy: 0.9424
Epoch 22/30
50/50 [==============================] - 22s 434ms/step - loss: 0.2543 - accuracy: 0.9152 - val_loss: 0.2268 - val_accuracy: 0.9372
Epoch 23/30
50/50 [==============================] - 23s 460ms/step - loss: 0.2061 - accuracy: 0.9342 - val_loss: 0.5493 - val_accuracy: 0.7906
Epoch 24/30
50/50 [==============================] - 22s 439ms/step - loss: 0.1750 - accuracy: 0.9494 - val_loss: 0.1761 - val_accuracy: 0.9424
Epoch 25/30
50/50 [==============================] - 22s 432ms/step - loss: 0.1815 - accuracy: 0.9481 - val_loss: 0.2034 - val_accuracy: 0.9529
Epoch 26/30
50/50 [==============================] - 22s 435ms/step - loss: 0.1607 - accuracy: 0.9620 - val_loss: 0.1583 - val_accuracy: 0.9529
Epoch 27/30
50/50 [==============================] - 22s 432ms/step - loss: 0.0971 - accuracy: 0.9759 - val_loss: 0.2410 - val_accuracy: 0.9005
Epoch 28/30
50/50 [==============================] - 22s 432ms/step - loss: 0.1715 - accuracy: 0.9671 - val_loss: 0.2937 - val_accuracy: 0.9215
Epoch 29/30
50/50 [==============================] - 22s 434ms/step - loss: 0.0970 - accuracy: 0.9747 - val_loss: 0.1175 - val_accuracy: 0.9791
Epoch 30/30
50/50 [==============================] - 22s 434ms/step - loss: 0.0977 - accuracy: 0.9785 - val_loss: 0.1905 - val_accuracy: 0.9372
Training run time: 713.7100174427032 s

Process finished with exit code 0


(2) emotion-domestic dataset (30 epochs)

D:\software\anaconda\Ana\envs\tfjzh\python.exe D:\code\CNN\Tensorflow\vegetables_tf2.3-master\train_cnn.py
Found 49601 files belonging to 7 classes.
2023-11-29 20:48:25.502482: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-11-29 20:48:25.518757: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x28ac6bcea00 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2023-11-29 20:48:25.518874: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
Found 5000 files belonging to 7 classes.
['anger', 'disgust', 'fear', 'happy', 'nature', 'sadness', 'surprise']
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
rescaling (Rescaling)        (None, 224, 224, 3)       0
_________________________________________________________________
conv2d (Conv2D)              (None, 222, 222, 32)      896
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 111, 111, 32)      0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 109, 109, 64)      18496
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 54, 54, 64)        0
_________________________________________________________________
flatten (Flatten)            (None, 186624)            0
_________________________________________________________________
dense (Dense)                (None, 128)               23888000
_________________________________________________________________
dense_1 (Dense)              (None, 7)                 903
=================================================================
Total params: 23,908,295
Trainable params: 23,908,295
Non-trainable params: 0
_________________________________________________________________
Epoch 1/30
3101/3101 [==============================] - 1391s 449ms/step - loss: 1.2374 - accuracy: 0.5693 - val_loss: 1.0715 - val_accuracy: 0.6042
Epoch 2/30
3101/3101 [==============================] - 1353s 436ms/step - loss: 0.9286 - accuracy: 0.6806 - val_loss: 1.8194 - val_accuracy: 0.5418
Epoch 3/30
3101/3101 [==============================] - 1306s 421ms/step - loss: 0.7953 - accuracy: 0.7252 - val_loss: 0.8003 - val_accuracy: 0.7296
Epoch 4/30
3101/3101 [==============================] - 1305s 421ms/step - loss: 0.6741 - accuracy: 0.7667 - val_loss: 1.3812 - val_accuracy: 0.6020
Epoch 5/30
3101/3101 [==============================] - 1318s 425ms/step - loss: 0.5527 - accuracy: 0.8075 - val_loss: 0.7863 - val_accuracy: 0.7538
Epoch 6/30
3101/3101 [==============================] - 1296s 418ms/step - loss: 0.4364 - accuracy: 0.8484 - val_loss: 0.9826 - val_accuracy: 0.7176
Epoch 7/30
3101/3101 [==============================] - 1298s 419ms/step - loss: 0.3332 - accuracy: 0.8836 - val_loss: 1.3544 - val_accuracy: 0.6356
Epoch 8/30
3101/3101 [==============================] - 1295s 418ms/step - loss: 0.2529 - accuracy: 0.9116 - val_loss: 1.0070 - val_accuracy: 0.7502
Epoch 9/30
3101/3101 [==============================] - 1297s 418ms/step - loss: 0.1875 - accuracy: 0.9333 - val_loss: 1.1358 - val_accuracy: 0.7506
Epoch 10/30
3101/3101 [==============================] - 1297s 418ms/step - loss: 0.1447 - accuracy: 0.9495 - val_loss: 1.1523 - val_accuracy: 0.7606
Epoch 11/30
3101/3101 [==============================] - 1296s 418ms/step - loss: 0.1139 - accuracy: 0.9609 - val_loss: 1.3589 - val_accuracy: 0.7508
Epoch 12/30
3101/3101 [==============================] - 1296s 418ms/step - loss: 0.1004 - accuracy: 0.9656 - val_loss: 1.3640 - val_accuracy: 0.7558
Epoch 13/30
3101/3101 [==============================] - 1296s 418ms/step - loss: 0.0838 - accuracy: 0.9718 - val_loss: 1.7149 - val_accuracy: 0.7462
Epoch 14/30
3101/3101 [==============================] - 1294s 417ms/step - loss: 0.0677 - accuracy: 0.9774 - val_loss: 1.4375 - val_accuracy: 0.7628
Epoch 15/30
3101/3101 [==============================] - 1296s 418ms/step - loss: 0.0508 - accuracy: 0.9834 - val_loss: 1.5858 - val_accuracy: 0.7548
Epoch 16/30
3101/3101 [==============================] - 1296s 418ms/step - loss: 0.0434 - accuracy: 0.9858 - val_loss: 1.6809 - val_accuracy: 0.7548
Epoch 17/30
3101/3101 [==============================] - 1294s 417ms/step - loss: 0.0500 - accuracy: 0.9840 - val_loss: 1.5653 - val_accuracy: 0.7766
Epoch 18/30
3101/3101 [==============================] - 1295s 418ms/step - loss: 0.0348 - accuracy: 0.9884 - val_loss: 1.8096 - val_accuracy: 0.7544
Epoch 19/30
3101/3101 [==============================] - 1293s 417ms/step - loss: 0.0347 - accuracy: 0.9885 - val_loss: 1.6763 - val_accuracy: 0.7708
Epoch 20/30
3101/3101 [==============================] - 1295s 418ms/step - loss: 0.0263 - accuracy: 0.9919 - val_loss: 1.8553 - val_accuracy: 0.7662
Epoch 21/30
3101/3101 [==============================] - 1297s 418ms/step - loss: 0.0237 - accuracy: 0.9922 - val_loss: 1.9383 - val_accuracy: 0.7790
Epoch 22/30
3101/3101 [==============================] - 1295s 418ms/step - loss: 0.0170 - accuracy: 0.9946 - val_loss: 1.9647 - val_accuracy: 0.7724
Epoch 23/30
3101/3101 [==============================] - 1297s 418ms/step - loss: 0.0198 - accuracy: 0.9939 - val_loss: 1.9236 - val_accuracy: 0.7760
Epoch 24/30
3101/3101 [==============================] - 1296s 418ms/step - loss: 0.0216 - accuracy: 0.9933 - val_loss: 1.9833 - val_accuracy: 0.7760
Epoch 25/30
3101/3101 [==============================] - 1295s 418ms/step - loss: 0.0199 - accuracy: 0.9941 - val_loss: 1.9605 - val_accuracy: 0.7718
Epoch 26/30
3101/3101 [==============================] - 1297s 418ms/step - loss: 0.0192 - accuracy: 0.9939 - val_loss: 2.0310 - val_accuracy: 0.7664
Epoch 27/30
3101/3101 [==============================] - 1298s 418ms/step - loss: 0.0215 - accuracy: 0.9938 - val_loss: 2.0891 - val_accuracy: 0.7704
Epoch 28/30
3101/3101 [==============================] - 1296s 418ms/step - loss: 0.0178 - accuracy: 0.9951 - val_loss: 1.9493 - val_accuracy: 0.7774
Epoch 29/30
3101/3101 [==============================] - 1297s 418ms/step - loss: 0.0163 - accuracy: 0.9949 - val_loss: 2.0296 - val_accuracy: 0.7698
Epoch 30/30
3101/3101 [==============================] - 1300s 419ms/step - loss: 0.0106 - accuracy: 0.9968 - val_loss: 2.0050 - val_accuracy: 0.7874
Training run time: 39095.540670871735 s

Process finished with exit code 0

(3) emotion-domestic dataset (10 epochs)

D:\software\anaconda\Ana\envs\tfjzh\python.exe D:\code\CNN\Tensorflow\vegetables_tf2.3-master\train_cnn.py
Found 49601 files belonging to 7 classes.
2023-12-17 23:57:20.627275: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-12-17 23:57:20.664475: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1cca0c749b0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2023-12-17 23:57:20.664678: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
Found 5000 files belonging to 7 classes.
['anger', 'disgust', 'fear', 'happy', 'nature', 'sadness', 'surprise']
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
rescaling (Rescaling)        (None, 224, 224, 3)       0
_________________________________________________________________
conv2d (Conv2D)              (None, 222, 222, 32)      896
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 111, 111, 32)      0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 109, 109, 64)      18496
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 54, 54, 64)        0
_________________________________________________________________
flatten (Flatten)            (None, 186624)            0
_________________________________________________________________
dense (Dense)                (None, 128)               23888000
_________________________________________________________________
dense_1 (Dense)              (None, 7)                 903
=================================================================
Total params: 23,908,295
Trainable params: 23,908,295
Non-trainable params: 0
_________________________________________________________________
Epoch 1/10
3101/3101 [==============================] - 1555s 501ms/step - loss: 1.2040 - accuracy: 0.5820 - val_loss: 1.2013 - val_accuracy: 0.5476
Epoch 2/10
3101/3101 [==============================] - 1298s 418ms/step - loss: 0.9007 - accuracy: 0.6896 - val_loss: 1.4946 - val_accuracy: 0.5620
Epoch 3/10
3101/3101 [==============================] - 1291s 416ms/step - loss: 0.7639 - accuracy: 0.7357 - val_loss: 0.7703 - val_accuracy: 0.7412
Epoch 4/10
3101/3101 [==============================] - 1289s 416ms/step - loss: 0.6323 - accuracy: 0.7803 - val_loss: 0.8768 - val_accuracy: 0.7074
Epoch 5/10
3101/3101 [==============================] - 1289s 416ms/step - loss: 0.5101 - accuracy: 0.8245 - val_loss: 0.7505 - val_accuracy: 0.7566
Epoch 6/10
3101/3101 [==============================] - 1290s 416ms/step - loss: 0.3943 - accuracy: 0.8632 - val_loss: 0.8923 - val_accuracy: 0.7406
Epoch 7/10
3101/3101 [==============================] - 1289s 416ms/step - loss: 0.2891 - accuracy: 0.8983 - val_loss: 3.0855 - val_accuracy: 0.3902
Epoch 8/10
3101/3101 [==============================] - 1288s 415ms/step - loss: 0.2198 - accuracy: 0.9219 - val_loss: 1.0140 - val_accuracy: 0.7594
Epoch 9/10
3101/3101 [==============================] - 1288s 415ms/step - loss: 0.1618 - accuracy: 0.9442 - val_loss: 1.1667 - val_accuracy: 0.7512
Epoch 10/10
3101/3101 [==============================] - 1288s 415ms/step - loss: 0.1216 - accuracy: 0.9568 - val_loss: 1.1995 - val_accuracy: 0.7618
Training run time: 13174.560583353043 s

Process finished with exit code 0

3. CNN training-process curves

(1) CK+48 dataset

[Figure: results_cnn.png training curves for the CK+48 dataset]

In the upper plot (Accuracy), the blue line is training accuracy and the orange line is validation (test) accuracy.

In the lower plot (Cross Entropy), the blue line is training loss and the orange line is validation (test) loss.

(2) emotion-domestic dataset (30 epochs)

[Figure: training curves for the emotion-domestic dataset, 30 epochs]

(3) emotion-domestic dataset (10 epochs)

[Figure: training curves for the emotion-domestic dataset, 10 epochs]

4. Confusion-matrix heatmaps from testing (validation)

(1) CK+48 dataset

[Figure: confusion-matrix heatmap for the CK+48 dataset]

(2) emotion-domestic dataset (30 epochs)

[Figure: confusion-matrix heatmap for the emotion-domestic dataset, 30 epochs]

(3) emotion-domestic dataset (10 epochs)

[Figure: confusion-matrix heatmap for the emotion-domestic dataset, 10 epochs]