1 Introduction



2 Partial Code
function [fbst, xbst, performance] = hho( objective, d, lmt, n, T, S )
% Harris hawks optimization (HHO) algorithm
% inputs:
%   objective - function handle, the objective function (vectorized over columns)
%   d         - scalar, dimension of the optimization problem
%   lmt       - d-by-2 matrix, lower and upper bounds of the decision variables
%   n         - scalar, swarm size
%   T         - scalar, maximum number of iterations
%   S         - scalar, number of independent runs
% date: 2021-05-09
% author: elkman, github.com/ElkmanY/

%% Levy flight (Mantegna's method)
beta = 1.5;
sigma = ( gamma(1+beta)*sin(pi*beta/2)/( gamma((1+beta)/2)*beta*2^((beta-1)/2) ) )^(1/beta);
Levy = @(x) 0.01*normrnd(0,1,d,x)*sigma./abs(normrnd(0,1,d,x)).^(1/beta);

%% algorithm procedure
f_rabbit = zeros(S,T);      % best fitness of each iteration in each run
x_rabbit = zeros(d,T,S);    % best position of each iteration in each run
E = zeros(T,n);             % escaping energy of each hawk
tic;
for s = 1:S
    %% initialization
    X = lmt(:,1) + (lmt(:,2) - lmt(:,1)).*rand(d,n);
    for t = 1:T
        F = objective(X);
        [f_rabbit(s,t), i_rabbit] = min(F);
        x_rabbit(:,t,s) = X(:,i_rabbit);
        xr = x_rabbit(:,t,s);               % current rabbit (best) position
        J = 2*(1-rand(d,1));                % random jump strength
        E0 = 2*rand(1,n)-1;
        E(t,:) = 2*E0*(1-t/T);              % escaping energy decreases over iterations
        absE = abs(E(t,:));
        p1 = absE>=1;                                   % exploration, eq(1)
        r = rand(1,n);
        p2 = (r>=0.5) & (absE>=0.5) & (absE<1);         % soft besiege, eq(4)
        p3 = (r>=0.5) & (absE<0.5);                     % hard besiege, eq(6)
        p4 = (r<0.5) & (absE>=0.5) & (absE<1);          % soft besiege with rapid dives, eq(10)
        p5 = (r<0.5) & (absE<0.5);                      % hard besiege with rapid dives, eq(11)
        %% update locations
        rh = randi([1,n],1,n);              % indices of randomly selected hawks
        flag1 = rand(1,n)>=0.5;
        Y = xr - E(t,:).*abs( J.*xr - X );
        Z = Y + rand(d,n).*Levy(n);
        flag2 = (objective(Y)<objective(Z)) & (objective(Y)<F);
        flag3 = (objective(Y)>objective(Z)) & (objective(Z)<F);
        flag4 = (~flag2) & (~flag3);
        X_ = p1.*( (X(:,rh) - rand(1,n).*abs( X(:,rh) - 2*rand(1,n).*X )).*flag1 + ...
                   ((X(:,rh) - mean(X,2)) - rand(1,n).*( lmt(:,1) + (lmt(:,2) - lmt(:,1)).*rand(d,n) )).*(~flag1) ) ...
            + p2.*( xr - X - E(t,:).*abs( J.*xr - X ) ) ...
            + p3.*( xr - E(t,:).*abs( xr - X ) ) ...
            + p4.*( Y.*flag2 + Z.*flag3 + ( lmt(:,1) + (lmt(:,2) - lmt(:,1)).*rand(d,n) ).*flag4 ) ...
            + p5.*( Y.*flag2 + Z.*flag3 + ( lmt(:,1) + (lmt(:,2) - lmt(:,1)).*rand(d,n) ).*flag4 );
        X_(:,i_rabbit) = xr;                % keep the rabbit in the swarm
        X = X_;
    end
end

%% outputs
performance = [min(f_rabbit(:,T)); mean(f_rabbit(:,T)); std(f_rabbit(:,T))];
timecost = toc;
[fbst, ibst] = min(f_rabbit(:,T));
xbst = x_rabbit(:,T,ibst);

%% plots
% convergence curve
figure('Name','Convergence Curve');
box on
semilogy(1:T,mean(f_rabbit,1),'b','LineWidth',1.5);
xlabel('Iteration','FontName','Arial');
ylabel('Fitness/Score','FontName','Arial');
title('Convergence Curve','FontName','Arial');
if d == 2
    % trajectory of the global optimum
    figure('Name','Trajectory of Global Optimal');
    x1 = linspace(lmt(1,1),lmt(1,2));
    x2 = linspace(lmt(2,1),lmt(2,2));
    [X1,X2] = meshgrid(x1,x2);
    V = reshape(objective([X1(:),X2(:)]'),size(X1));
    contour(X1,X2,log10(V),100);    % notice log10(V)
    hold on
    plot(x_rabbit(1,:,1),x_rabbit(2,:,1),'r-x','LineWidth',1);
    hold off
    xlabel('\it{x}_1','FontName','Times New Roman');
    ylabel('\it{x}_2','FontName','Times New Roman');
    title('Trajectory of Global Optimal','FontName','Arial');
end
end
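The post does not include a driver script, so the following is only a minimal usage sketch, not part of the original code: the objective handle f_sphere and all parameter values are illustrative assumptions. The handle is written to match the interface hho expects, i.e. it accepts a d-by-n matrix of column-vector candidates and returns a 1-by-n row of fitness values.

% Minimal usage sketch (assumed example, not from the original post)
f_sphere = @(x) sum(x.^2, 1);     % vectorized 2-D sphere: one fitness per column of x

d   = 2;                          % problem dimension
lmt = [-10 10; -10 10];           % d-by-2 lower/upper bounds
n   = 30;                         % swarm size
T   = 200;                        % iterations per run
S   = 10;                         % independent runs

[fbst, xbst, performance] = hho(f_sphere, d, lmt, n, T, S);
fprintf('best fitness: %g\n', fbst);
disp('best solution:');            disp(xbst);
disp('[min; mean; std] over runs:'); disp(performance);

Because hho evaluates objective(X) on the whole d-by-n swarm at once, any user-supplied objective must be vectorized in this column-wise way; with d == 2 the call above also produces the trajectory plot in addition to the convergence curve.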
3 Simulation Results

This post presents a MATLAB implementation of the Harris hawks optimization (HHO) algorithm, covering the function definition, the code workflow, simulation experiments, and the presentation of results. A two-dimensional example demonstrates the algorithm and the trajectory plot of the global optimum, and the post also touches on applications of the algorithm in intelligent optimization and the relevant technical background.
