Implementing the Error Backpropagation (BP) Algorithm in Code

This article focuses on the error backpropagation (BP) algorithm and walks through its workflow. Both the standard BP algorithm and the accumulated BP algorithm are implemented in Python, a single-hidden-layer network is trained on watermelon dataset 3.0 with each of them, the two training results are compared, and the prediction table is exported at the end.


Master the workflow of the error backpropagation (BP) algorithm.

Implement the standard BP algorithm and the accumulated BP algorithm, train a single-hidden-layer network on watermelon dataset 3.0 with each of them, and compare the two.
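
For reference, the chain-rule updates implemented by the code below (single hidden layer, sigmoid activations, squared-error loss; the symbols mirror the variable names u1/out1, u2/out2 and the learning rate eta used in the code) are:

$$
\begin{aligned}
u_1 &= x w_1 + b_1, \qquad \mathrm{out}_1 = \sigma(u_1), \qquad
u_2 = \mathrm{out}_1 w_2 + b_2, \qquad \mathrm{out}_2 = \sigma(u_2),\\
E &= \tfrac{1}{2}\,(y - \mathrm{out}_2)^2, \qquad
\frac{\partial E}{\partial u_2} = -(y - \mathrm{out}_2)\,\mathrm{out}_2(1 - \mathrm{out}_2),\\
\frac{\partial E}{\partial w_2} &= \mathrm{out}_1^{\top}\,\frac{\partial E}{\partial u_2}, \qquad
\frac{\partial E}{\partial u_1} = \Bigl(\frac{\partial E}{\partial u_2}\, w_2^{\top}\Bigr) \odot \mathrm{out}_1(1 - \mathrm{out}_1), \qquad
\frac{\partial E}{\partial w_1} = x^{\top}\,\frac{\partial E}{\partial u_1},\\
w &\leftarrow w - \eta\,\frac{\partial E}{\partial w} \quad\text{(and likewise for } b_1, b_2\text{)}.
\end{aligned}
$$

Standard BP applies these updates once per training example; accumulated BP sums the gradients over the whole training set and updates once per pass.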

import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
seed = 2020
import random
np.random.seed(seed)  # Numpy module.
random.seed(seed)  # Python random module.
plt.rcParams['font.sans-serif'] = ['SimHei'] #display Chinese labels correctly in plots
plt.rcParams['axes.unicode_minus'] = False #display minus signs correctly
plt.close('all')

#data preprocessing
def preprocess(data):
    #map non-numeric (categorical) columns to integer codes
    for title in data.columns:
        if data[title].dtype=='object':
            encoder = LabelEncoder()
            data[title] = encoder.fit_transform(data[title])         
    #standardize features to zero mean and unit variance
    ss = StandardScaler()
    X = data.drop('好瓜',axis=1)
    Y = data['好瓜']
    X = ss.fit_transform(X)
    x,y = np.array(X),np.array(Y).reshape(Y.shape[0],1)
    return x,y
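#Quick illustration (a sketch): preprocess label-encodes every object column
#(including the label 好瓜), standardizes the remaining feature columns, and
#returns the label as an (n,1) column vector. The two-row frame below is made
#up purely for illustration; call _demo_preprocess() from an interactive session.
def _demo_preprocess():
    demo = pd.DataFrame({'色泽': ['青绿', '乌黑'],
                         '密度': [0.697, 0.774],
                         '好瓜': ['是', '否']})
    x_demo, y_demo = preprocess(demo)
    print(x_demo.shape, y_demo.shape)   #(2, 2) (2, 1)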
#sigmoid activation
def sigmoid(x):
    return 1/(1+np.exp(-x))
#derivative of the sigmoid, expressed in terms of its output:
#if a = sigmoid(u), then dsigmoid/du = a*(1-a), so pass the activated value (out1/out2)
def d_sigmoid(x):
    return x*(1-x)
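#Sanity check (a sketch): d_sigmoid expects the already-activated value, since
#for a = sigmoid(u) the derivative with respect to u is a*(1-a). The helper
#below compares it against a central finite difference at an arbitrary point.
def _check_d_sigmoid(u=0.3, eps=1e-6):
    numeric = (sigmoid(u + eps) - sigmoid(u - eps)) / (2*eps)
    return abs(d_sigmoid(sigmoid(u)) - numeric) < 1e-8   #True when they agree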


#accumulated BP: parameters are updated once per pass over the whole training set

def accumulate_BP(x,y,dim=10,eta=0.8,max_iter=500):
    n_samples = x.shape[0]
    #all parameters start at zero; b1 and b2 hold one bias row per training sample
    w1 = np.zeros((x.shape[1],dim))
    b1 = np.zeros((n_samples,dim))
    w2 = np.zeros((dim,1))
    b2 = np.zeros((n_samples,1))
    losslist = []
    for ite in range(max_iter):
        ##forward pass
        u1 = np.dot(x,w1)+b1
        out1 = sigmoid(u1)
        u2 = np.dot(out1,w2)+b2
        out2 = sigmoid(u2)
        loss = np.mean(np.square(y - out2))/2
        losslist.append(loss)
        print('iter:%d  loss:%.4f'%(ite,loss))
        ##backward pass and parameter update
        ##(these gradients correspond to the summed squared error sum((y-out2)**2)/2,
        ##while the printed loss is the mean, so eta is effectively scaled by n_samples)
        d_out2 = -(y - out2)
        d_u2 = d_out2*d_sigmoid(out2)
        d_w2 = np.dot(np.transpose(out1),d_u2)
        d_b2 = d_u2
        d_out1 = np.dot(d_u2,np.transpose(w2))
        d_u1 = d_out1*d_sigmoid(out1)
        d_w1 = np.dot(np.transpose(x),d_u1)
        d_b1 = d_u1
      
        w1 = w1 - eta*d_w1
        w2 = w2 - eta*d_w2
        b1 = b1 - eta*d_b1
        b2 = b2 - eta*d_b2
    
    ##loss curve visualization
    plt.figure()
    plt.plot([i+1 for i in range(max_iter)],losslist)
    plt.legend(['accumulated BP'])
    plt.xlabel('iteration')
    plt.ylabel('loss')
    plt.show()
    return w1,w2,b1,b2
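#Optional check (a sketch): verify the analytic gradients above numerically on a
#small random problem (not the watermelon data). The gradient formulas correspond
#to the summed squared error sum((y-out2)**2)/2, so that is the loss differenced here.
def numeric_grad_check(dim=3, eps=1e-6):
    rng = np.random.RandomState(0)
    x = rng.rand(5, 4)
    y = rng.randint(0, 2, size=(5, 1)).astype(float)
    w1, b1 = rng.rand(4, dim), rng.rand(5, dim)
    w2, b2 = rng.rand(dim, 1), rng.rand(5, 1)
    def loss_sum(w2_):
        out1 = sigmoid(np.dot(x, w1) + b1)
        out2 = sigmoid(np.dot(out1, w2_) + b2)
        return np.sum(np.square(y - out2))/2
    #analytic gradient of loss_sum w.r.t. w2, same formulas as in accumulate_BP
    out1 = sigmoid(np.dot(x, w1) + b1)
    out2 = sigmoid(np.dot(out1, w2) + b2)
    d_u2 = -(y - out2)*d_sigmoid(out2)
    d_w2 = np.dot(np.transpose(out1), d_u2)
    #central difference for a single weight entry
    w2_plus, w2_minus = w2.copy(), w2.copy()
    w2_plus[0, 0] += eps
    w2_minus[0, 0] -= eps
    numeric = (loss_sum(w2_plus) - loss_sum(w2_minus))/(2*eps)
    print('analytic: %.6f  numeric: %.6f' % (d_w2[0, 0], numeric))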

#standard BP: parameters are updated after every single training example
def standard_BP(x,y,dim=10,eta=0.8,max_iter=500):
    n_samples = 1   #biases have shape (1,dim)/(1,1) and broadcast over all samples
    w1 = np.zeros((x.shape[1],dim))
    b1 = np.zeros((n_samples,dim))
    w2 = np.zeros((dim,1))
    b2 = np.zeros((n_samples,1))
    losslist = []
    #training loop: one parameter update per sample, one full pass over the data per iteration

    for ite in range(max_iter):
        loss_per_ite = []
        for m in range(x.shape[0]):
            xi,yi = x[m,:],y[m,:]
            xi,yi = xi.reshape(1,xi.shape[0]),yi.reshape(1,yi.shape[0])
            #forward pass
            u1 = np.dot(xi,w1)+b1
            out1 = sigmoid(u1)
            u2 = np.dot(out1,w2)+b2
            out2 = sigmoid(u2)
            loss = np.square(yi - out2).item()/2   #scalar loss for this single sample
            loss_per_ite.append(loss)
            print('iter:%d  loss:%.4f'%(ite,loss))
            #backward pass
            d_out2 = -(yi - out2)
            d_u2 = d_out2*d_sigmoid(out2)
            d_w2 = np.dot(np.transpose(out1),d_u2)
            d_b2 = d_u2
            d_out1 = np.dot(d_u2,np.transpose(w2))
            d_u1 = d_out1*d_sigmoid(out1)
            d_w1 = np.dot(np.transpose(xi),d_u1)
            d_b1 = d_u1
            
            w1 = w1 - eta*d_w1
            w2 = w2 - eta*d_w2
            b1 = b1 - eta*d_b1
            b2 = b2 - eta*d_b2 
        losslist.append(np.mean(loss_per_ite))
            
    #loss curve visualization
    plt.figure()
    plt.plot([i+1 for i in range(max_iter)],losslist)
    plt.legend(['standard BP'])
    plt.xlabel('iteration')
    plt.ylabel('loss')
    plt.show()

    return w1,w2,b1,b2
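#Side-by-side comparison (a sketch, reusing the two functions above): train both
#schemes on the same preprocessed x, y and report the training-set accuracy of each.
def compare_BP(x, y, dim=10):
    w1s, w2s, b1s, b2s = standard_BP(x, y, dim)
    w1a, w2a, b1a, b2a = accumulate_BP(x, y, dim)
    for name, (w1, w2, b1, b2) in [('standard BP', (w1s, w2s, b1s, b2s)),
                                   ('accumulated BP', (w1a, w2a, b1a, b2a))]:
        out2 = sigmoid(np.dot(sigmoid(np.dot(x, w1) + b1), w2) + b2)
        print('%s: training accuracy %.3f' % (name, np.mean(np.round(out2) == y)))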

#testing: train, predict on the training set, and export the prediction table

def main():
    data = pd.read_table('C:/Users/Desktop/watermelon30.txt',delimiter=',')#replace with the path to your own copy of the dataset
    data.drop('编号',axis=1,inplace=True)   #drop the ID column
    x,y = preprocess(data)
    dim = 10
    w1,w2,b1,b2 = standard_BP(x,y,dim)
    # w1,w2,b1,b2 = accumulate_BP(x,y,dim)   #uncomment to train with accumulated BP instead
    
    #forward pass with the trained parameters; round the sigmoid output to get 0/1 predictions
    u1 = np.dot(x,w1)+b1
    out1 = sigmoid(u1)
    u2 = np.dot(out1,w2)+b2
    out2 = sigmoid(u2) 
    y_pred = np.round(out2)
    
    result = pd.DataFrame(np.hstack((y,y_pred)),columns=['真值','预测'] )     
    result.to_excel('result_numpy.xlsx',index=False)

#test code: predict the class of each sample in the current x
if __name__=='__main__':
    main()
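
To quantify the exported table, the predictions can be read back and scored against the ground-truth column (a sketch; it assumes result_numpy.xlsx was just written by main and that pandas has an Excel engine such as openpyxl available):

result = pd.read_excel('result_numpy.xlsx')
print('training accuracy: %.3f' % (result['真值'] == result['预测']).mean())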

Exported prediction table (screenshot omitted).

Data records (screenshots omitted).
The watermelon dataset 3.0 used here (save as watermelon30.txt):

编号,色泽,根蒂,敲声,纹理,脐部,触感,密度,含糖率,好瓜
1,青绿,蜷缩,浊响,清晰,凹陷,硬滑,0.697,0.46,是
2,乌黑,蜷缩,沉闷,清晰,凹陷,硬滑,0.774,0.376,是
3,乌黑,蜷缩,浊响,清晰,凹陷,硬滑,0.634,0.264,是
4,青绿,蜷缩,沉闷,清晰,凹陷,硬滑,0.608,0.318,是
5,浅白,蜷缩,浊响,清晰,凹陷,硬滑,0.556,0.215,是
6,青绿,稍蜷,浊响,清晰,稍凹,软粘,0.403,0.237,是
7,乌黑,稍蜷,浊响,稍糊,稍凹,软粘,0.481,0.149,是
8,乌黑,稍蜷,浊响,清晰,稍凹,硬滑,0.437,0.211,是
9,乌黑,稍蜷,沉闷,稍糊,稍凹,硬滑,0.666,0.091,否
10,青绿,硬挺,清脆,清晰,平坦,软粘,0.243,0.267,否
11,浅白,硬挺,清脆,模糊,平坦,硬滑,0.245,0.057,否
12,浅白,蜷缩,浊响,模糊,平坦,软粘,0.343,0.099,否
13,青绿,稍蜷,浊响,稍糊,凹陷,硬滑,0.639,0.161,否
14,浅白,稍蜷,沉闷,稍糊,凹陷,硬滑,0.657,0.198,否
15,乌黑,稍蜷,浊响,清晰,稍凹,软粘,0.36,0.37,否
16,浅白,蜷缩,浊响,模糊,平坦,硬滑,0.593,0.042,否
17,青绿,蜷缩,沉闷,稍糊,稍凹,硬滑,0.719,0.103,否
