Harris Hawk Optimization Algorithm Fused with Golden Sine and Random Walk

This article introduces a Harris Hawk Optimization algorithm that fuses the golden sine function with a random walk strategy. Built on the predatory behavior of a hawk flock and implemented in MATLAB, the fused algorithm is intended to strengthen global search ability and speed up convergence. In the code example, an optimization problem struct is defined, the hawk population is initialized, and the population is refined iteratively: individual positions are updated with golden sine weights and a random walk update strategy, and boundary handling keeps the solutions inside the search space to improve optimization quality.

The Harris Hawk Optimization (HHO) algorithm is a metaheuristic inspired by the cooperative hunting behavior of Harris hawk flocks. It models the cooperation and competition among individuals in the flock and uses both to search for the optimal solution. To strengthen the algorithm's global search ability and speed up its convergence, the golden sine function and a random walk strategy can be fused into HHO.
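For reference, the golden sine position update usually meant by this kind of fusion follows the standard Gold-SA formulation, written here for minimization; the exact variant used in this article may differ:

X_i^{t+1} = X_i^{t}\,\lvert \sin r_1 \rvert - r_2 \sin r_1 \,\bigl\lvert c_1 P^{t} - c_2 X_i^{t} \bigr\rvert

where r_1 \in [0, 2\pi] and r_2 \in [0, \pi] are random numbers, P^{t} is the current best (prey) position, and the coefficients c_1 = -\pi + (1-\tau)\cdot 2\pi and c_2 = -\pi + \tau\cdot 2\pi are derived from the golden ratio \tau = (\sqrt{5}-1)/2. In standard HHO, the switch between exploration and exploitation is governed by the escaping energy E = 2E_0(1 - t/T) with E_0 \in [-1, 1], so the random walk and the golden sine step can be attached to those two phases respectively.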

Below is a MATLAB code example of the Harris Hawk Optimization algorithm fused with golden sine and random walk:

function [bestSolution, bestFitness] = HHO_GoldenSine_RW(problem, maxIterations, populationSize)
    % Parameter initialization: unpack the search bounds from the problem struct
    lb = problem.lb;    % lower bound of the search space (field name assumed)
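The snippet above breaks off right after the parameter initialization, so what follows is a minimal self-contained sketch of how the complete function could look. It assumes the problem struct carries the fields lb, ub, dim and fobj, uses the commonly cited golden sine update for the exploitation phase and a Gaussian random walk for exploration, and clips positions back into the bounds; the original article's exact update rules may differ.

function [bestSolution, bestFitness] = HHO_GoldenSine_RW(problem, maxIterations, populationSize)
    % Unpack the problem definition (field names are assumptions)
    lb   = problem.lb .* ones(1, problem.dim);   % lower bound as a 1-by-dim vector
    ub   = problem.ub .* ones(1, problem.dim);   % upper bound as a 1-by-dim vector
    dim  = problem.dim;                          % number of decision variables
    fobj = problem.fobj;                         % objective function handle (minimized)

    % Golden-section coefficients used by the golden sine update
    tau = (sqrt(5) - 1) / 2;
    c1  = -pi + (1 - tau) * 2 * pi;
    c2  = -pi + tau * 2 * pi;

    % Initialize the hawk population uniformly inside the bounds
    X = repmat(lb, populationSize, 1) + rand(populationSize, dim) .* repmat(ub - lb, populationSize, 1);
    bestSolution = X(1, :);
    bestFitness  = inf;

    for t = 1:maxIterations
        % Evaluate the population and track the prey (best solution so far)
        for i = 1:populationSize
            f = fobj(X(i, :));
            if f < bestFitness
                bestFitness  = f;
                bestSolution = X(i, :);
            end
        end

        % Update every hawk: the escaping energy decides exploration vs. exploitation
        for i = 1:populationSize
            E0 = 2 * rand - 1;                       % initial escaping energy in [-1, 1]
            E  = 2 * E0 * (1 - t / maxIterations);   % energy decays as iterations proceed
            if abs(E) >= 1
                % Exploration: Gaussian random walk around the current best
                step = 0.1 * (ub - lb) * (1 - t / maxIterations);
                X(i, :) = bestSolution + step .* randn(1, dim);
            else
                % Exploitation: golden sine update pulling the hawk toward the prey
                r1 = 2 * pi * rand;
                r2 = pi * rand;
                X(i, :) = X(i, :) .* abs(sin(r1)) ...
                          - r2 * sin(r1) .* abs(c1 * bestSolution - c2 * X(i, :));
            end
            % Boundary handling: clip positions back into the feasible region
            X(i, :) = min(max(X(i, :), lb), ub);
        end
    end
end

Saved as HHO_GoldenSine_RW.m, the function could be called, for example, on the 30-dimensional sphere function over [-100, 100] (illustrative values only):

% Hypothetical test call
problem = struct('lb', -100, 'ub', 100, 'dim', 30, 'fobj', @(x) sum(x.^2));
[bestSolution, bestFitness] = HHO_GoldenSine_RW(problem, 500, 30);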