Optimizing Neural Networks with Particle Swarm Optimization



Preface

PSO-for-Neural-Nets

As most readers know, training a neural network with backpropagation is very effective. But when the initial network parameters are chosen badly, training can become painfully slow, as many of you have probably experienced.

This post describes a way to speed things up: instead of starting training with backpropagation and gradient descent directly, first use Particle Swarm Optimization (PSO) to initialize the network parameters, and then switch to ordinary backpropagation for the actual training.

1 Particle Swarm Optimization (PSO)

Particle swarm optimization is a meta-heuristic algorithm, belonging to the subclass of population-based meta-heuristics: a number of particles are placed in an n-dimensional solution space and repeatedly moved in search of the optimum. If you are not familiar with PSO, any of the many introductory articles online will get you up to speed quickly.
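As a quick illustration of the mechanics (a minimal sketch, not the library used below; the inertia weight w and the cognitive/social coefficients c1 and c2 are conventional names with example values, not values taken from this article), one PSO iteration updates each particle like this:

import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    # Canonical PSO update: pull each particle toward its own best
    # position so far (pbest) and the swarm's best position (gbest).
    r1 = np.random.rand(*x.shape)
    r2 = np.random.rand(*x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v  # new positions and velocities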


2 Neural Networks

Feedforward neural networks: a feedforward neural network is a stack of two operations (a linear map followed by a nonlinearity), applied several times in succession to learn a mapping from inputs to outputs.
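In code, each layer computes sigma(W x + b); here is a minimal NumPy sketch of a two-layer forward pass (the layer shapes and the tanh nonlinearity are arbitrary choices for illustration):

import numpy as np

def forward(x, W1, b1, W2, b2):
    # One hidden layer: linear map followed by a nonlinearity...
    h = np.tanh(W1 @ x + b1)
    # ...then a final linear map producing the output logits.
    return W2 @ h + b2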


3 Combining the Two

The combination is straightforward: each PSO particle encodes one candidate set of network parameters, and a particle's fitness is the network's loss on the training data. The demo below keeps things minimal, fitting a single sigmoid neuron (two weights plus a bias, so a 3-dimensional search space) to a synthetic two-class dataset.

import numpy as np
from swarm_intelligence.particle import Particle
from swarm_intelligence.pso import ParticleSwarmOptimizer
from matplotlib import pyplot as plt

# Two Gaussian clusters, one per class
mean_01 = np.array([1.0, 2.0])   # mean of the class-0 Gaussian
mean_02 = np.array([-1.0, 4.0])  # mean of the class-1 Gaussian

cov_01 = np.array([[1.0, 0.9], [0.9, 2.0]])
cov_02 = np.array([[2.0, 0.5], [0.5, 1.0]])

# 250 samples from each Gaussian
ds_01 = np.random.multivariate_normal(mean_01, cov_01, 250)
ds_02 = np.random.multivariate_normal(mean_02, cov_02, 250)

# Columns: x1, x2, label (0 for ds_01, 1 for ds_02)
all_data = np.zeros((500, 3))
all_data[:250, :2] = ds_01
all_data[250:, :2] = ds_02
all_data[250:, -1] = 1

np.random.shuffle(all_data)

# 80/20 train/test split
split = int(0.8 * all_data.shape[0])
x_train = all_data[:split, :2]
x_test = all_data[split:, :2]
y_train = all_data[:split, -1]
y_test = all_data[split:, -1]

def sigmoid(logit):
    return 1 / (1 + np.exp(-logit))

def fitness(w, X=x_train, y=y_train):
    # Particle fitness: binary cross-entropy of a single sigmoid neuron
    # with weights w[0], w[1] and bias w[2].
    logit = w[0] * X[:, 0] + w[1] * X[:, 1] + w[2]
    preds = sigmoid(logit)
    return binary_cross_entropy(y, preds)

def binary_cross_entropy(y, y_hat):
    # The 1e-7 terms guard against log(0)
    left = y * np.log(y_hat + 1e-7)
    right = (1 - y) * np.log((1 - y_hat) + 1e-7)
    return -np.mean(left + right)

pso = ParticleSwarmOptimizer(Particle, 0.1, 0.3, 30, fitness,
                             lambda x, y: x < y,  # minimize: lower fitness is better
                             n_iter=100, dims=3, random=True,
                             position_range=(0, 1), velocity_range=(0, 1))
pso.optimize()

print(pso.gbest, fitness(pso.gbest, x_test, y_test))
26%|██▌ | 26/100 [00:00<00:00, 125.34it/s]

1.1801928375606305
1.4209814927365876
1.6079804335787051
1.4045063665887232
1.6061883358646398
1.216230952537311
1.092492742843725
1.425740352398705
1.2316560685535152
0.9883386170699404
0.7872754467763685
1.2949776923674654
1.5335307808402896
1.4402299491203296
1.707301581201865
1.3663291698028996
0.810679674134304
0.902645267001228
...

0.6887999501032107
0.6888687686160592
0.688937050625055
0.6890767713425439
0.6892273324647994
0.6890875560305971
0.689124619992127
0.6898172338259064
0.6887098887333781
0.6887212601101861
0.688826316853767
0.6892384007287018
0.6844381943050638
0.689115638510458
0.6891453159612045
0.6901587770829556
0.6895998527186173
0.6890967086445332
0.689073485303836
0.6883588252450673
[0.00451265 0.21376644 0.22467216] 0.6875450245414241
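
To turn the optimized weights into class predictions, threshold the sigmoid output; a small follow-up sketch, assuming pso.gbest is the length-3 array printed above:

w_best = np.asarray(pso.gbest)
test_preds = sigmoid(x_test @ w_best[:2] + w_best[2]) > 0.5  # threshold at 0.5
print("test accuracy:", np.mean(test_preds == y_test))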

PSO is a meta-heuristic, and the search costs extra computation time. Meta-heuristics are also rather sensitive to problem dimensionality: computational cost grows quickly with the number of dimensions, which makes them a poor fit for optimizing large, complex neural networks. For example, even a modest fully connected 784-100-10 network has 784*100 + 100 + 100*10 + 10 = 79,510 parameters, i.e. a 79,510-dimensional search space, compared with the 3 dimensions of the demo above.

Compared with gradient descent, though, meta-heuristics have their own advantages. Gradient descent is sensitive to initial conditions: with a good starting point it converges quickly, while with a bad one it may fail to converge at all. So for architectures that are not overly complex, a meta-heuristic can supply a decent initialization.
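Continuing the script above, handing the PSO result to plain gradient descent might look like this (a minimal sketch: the gradient is the standard one for a sigmoid unit with binary cross-entropy, and the learning rate and epoch count are arbitrary):

w = np.asarray(pso.gbest, dtype=float)  # PSO result as the starting point
lr = 0.1
for epoch in range(200):
    preds = sigmoid(x_train @ w[:2] + w[2])
    err = preds - y_train                      # dL/dlogit for sigmoid + BCE
    grad = np.empty(3)
    grad[:2] = x_train.T @ err / len(y_train)  # gradient w.r.t. the two weights
    grad[2] = err.mean()                       # gradient w.r.t. the bias
    w -= lr * grad
print(w, fitness(w, x_test, y_test))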

4 Related Material

First, the README of a MATLAB add-in for training neural networks with PSO:

This add-in to the PSO Research Toolbox (Evers 2009) aims to allow an artificial neural network (ANN, or simply NN) to be trained using the Particle Swarm Optimization (PSO) technique (Kennedy, Eberhart et al. 2001). The add-in acts as a bridge or interface between MATLAB's NN toolbox and the PSO Research Toolbox: MATLAB's NN functions call the NN add-in, which in turn calls the PSO Research Toolbox for NN training. This approach treats each PSO particle as one candidate combination of weights and biases for the NN (Settles and Rylander; Rui Mendes 2002; Venayagamoorthy 2003); the particles therefore move about the search space aiming to minimise the output of the NN performance function. The author acknowledges that code for PSO training of a NN already exists (Birge 2005), but that code was found to work only with MATLAB version 2005 and older; this NN add-in works with newer versions of MATLAB, up to version 2010a.

Helpful links:

1. This NN add-in only works when used with the PSORT, found at http://www.mathworks.com/matlabcentral/fileexchange/28291-particle-swarm-optimization-research-toolbox.
2. The author acknowledges the modification of code from an old PSO toolbox for NN training, found at http://www.mathworks.com.au/matlabcentral/fileexchange/7506.
3. User support and contact information for the author of this NN add-in can be found at http://www.tricia-rambharose.com/

Acknowledgements: the author thanks the advisors and fellow researchers whose support in various ways improved her understanding of PSO and NN, which led to the creation of this add-in:

* Dr. Alexander Nikov, Senior Lecturer and Head of the Usability Lab, UWI, St. Augustine, Trinidad, W.I. http://www2.sta.uwi.edu/~anikov/
* Dr. Sabine Graf, Assistant Professor, Athabasca University, Alberta, Canada. http://scis.athabascau.ca/scis/staff/faculty.jsp?id=sabineg
* Dr. Kinshuk, Professor, Athabasca University, Alberta, Canada. http://scis.athabascau.ca/scis/staff/faculty.jsp?id=kinshuk
* Members of the iCore group at Athabasca University, Edmonton, Alberta, Canada.
Second, the abstract of a thesis on an improved PSO for training BP neural networks:

Particle swarm optimization is a novel bio-inspired, swarm-intelligence optimization algorithm. Its principle is simple, it has few parameters to tune, it converges quickly, and it is easy to implement, so in recent years it has drawn wide attention from researchers. So far, however, PSO is not yet mature in either theoretical analysis or practical application, and many problems remain open. This thesis addresses PSO's tendency toward "premature" convergence into local minima, improves the standard algorithm, and applies the improved algorithm to BP neural networks. The main work is as follows.

The thesis first reviews the state of PSO research at home and abroad, systematically analyzes the basic theory of the algorithm, and surveys common improved variants. It then introduces the Hooke-Jeeves pattern search method: its analysis, basic procedure, and application areas.

To counter the standard algorithm's premature convergence into local minima, the initial population is first split into two identical subpopulations; using a fitness-dominance criterion, each subpopulation is divided into two subsets, a Pareto subset and an N_Pareto subset, and the two better-fitness Pareto subsets are then merged into a new population. Because the new population's parameter settings differ from those of standard PSO, its particles follow different trajectories, the population explores a wider region, and the algorithm's global search ability improves. To balance global and local search and to improve solution accuracy and efficiency, the strongly convergent Hooke-Jeeves search is introduced into the new population's optimization process, yielding the IMPSO algorithm. IMPSO is evaluated on standard benchmark test functions (Griewank and Rastrigin), and comparison with standard PSO on those benchmarks confirms the effectiveness of the improvement.

Finally, the thesis studies the application of the improved algorithm to BP neural networks. It introduces the principles of artificial neural networks and the BP-trained multilayer feedforward network, trains a BP network with IMPSO, and gives the training flowchart. The IMPSO-trained BP network is applied to predicting hardened-layer depth in gear heat treatment and to fault diagnosis of diesel-engine cylinder heads and cylinder walls; compared with a plain BP network and a BP network trained by standard PSO, the experimental results show that the IMPSO-trained BP network has stronger optimization performance and learning ability.