✅ About the author: a Matlab simulation developer who loves research, refining both mindset and technique in step; feel free to message me about Matlab project collaboration.
🍎 Homepage: Matlab科研工作室 (Matlab Research Studio)
🍊 Personal motto: 格物致知 — investigate things to attain knowledge.
⛄ Introduction
In machine learning, data classification is a central task, and researchers keep exploring new methods and techniques to improve classifier performance. The kernel extreme learning machine (KELM) is a powerful classification algorithm. This article introduces a KELM optimized with the Harris hawks optimization (HHO) algorithm, referred to as HHO-KELM, which aims at more accurate and efficient data classification.
First, a brief look at the kernel extreme learning machine. KELM is a classifier built on the extreme learning machine (ELM) that uses a kernel function to handle nonlinear classification problems. ELM is a single-hidden-layer feedforward neural network: the input-to-hidden weights are set randomly and left untrained, and the hidden-to-output weights are computed in closed form by least squares. By replacing the explicit random feature mapping with a kernel function, KELM extends this idea to nonlinear classification without having to choose the number of hidden neurons.
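To make the closed-form training concrete, here is a minimal KELM sketch in MATLAB with an RBF kernel. It is illustrative only and is not the code used later in this post; the function names (kelm_train, kelm_predict, rbf_kernel) and the parameters C (regularization coefficient) and gam (kernel width) are placeholders chosen for this example.

% Minimal KELM sketch (illustrative; not the exact code used in this post).
% Xtr: Ntr x d training inputs, Ytr: Ntr x m one-hot targets.
function beta = kelm_train(Xtr, Ytr, C, gam)
% Train a kernel ELM: solve (Omega + I/C) * beta = Ytr in closed form.
Ntr   = size(Xtr, 1);
Omega = rbf_kernel(Xtr, Xtr, gam);      % Ntr x Ntr kernel matrix
beta  = (Omega + eye(Ntr)/C) \ Ytr;     % output weights (Ntr x m)
end

function Yhat = kelm_predict(Xte, Xtr, beta, gam)
% Predict raw outputs for test points; take the row-wise max for the class label.
Kte  = rbf_kernel(Xte, Xtr, gam);       % Nte x Ntr kernel block
Yhat = Kte * beta;
end

function K = rbf_kernel(A, B, gam)
% Gaussian (RBF) kernel between the rows of A and B.
D2 = bsxfun(@plus, sum(A.^2, 2), sum(B.^2, 2)') - 2*(A*B');
K  = exp(-gam * max(D2, 0));            % clamp tiny negatives from round-off
end

Training reduces to solving one regularized linear system against the N×N kernel matrix, which is exactly the step whose cost and parameter sensitivity motivate the optimization discussed next.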
However, the standard KELM runs into trouble on large datasets. Training requires building, and solving a linear system against, an N×N kernel matrix over the N training samples, so the computational cost grows rapidly as the dataset grows, and the resulting accuracy is sensitive to how the kernel and regularization parameters are chosen. To address this, researchers proposed an optimization based on the Harris hawks algorithm, namely HHO-KELM.
Harris hawks optimization is a population-based metaheuristic that mimics the cooperative hunting behaviour of Harris' hawks: a group of hawks besieges a rabbit and adapts its attack (soft or hard besiege, rapid dives) to the prey's escaping energy. In HHO-KELM, HHO is used to search the parameters that govern KELM's weight computation. With this search in place, HHO-KELM can find good settings more quickly and copes better with large datasets.
The basic idea of HHO-KELM is to minimize the prediction error by iteratively updating the candidate solutions. At each iteration, every hawk is scored by a fitness function, here the classification error of the KELM it induces, and its position is updated according to HHO's exploration and exploitation rules. Over successive iterations the population converges towards settings that yield more accurate and efficient classification; a rough sketch of this coupling follows.
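The post does not spell out the exact encoding, so the sketch below makes a common assumption: each hawk encodes KELM's regularization coefficient C and kernel width gam, and the fitness HHO minimizes is the validation error of the KELM trained with those values. It reuses the hypothetical kelm_train/kelm_predict helpers from the sketch above and the HHO function listed in the Core Code section; Xtr, Ytr, Xval, Yval are assumed to be prepared elsewhere.

% Illustrative HHO-KELM wiring (assumed setup: each hawk = [C, gam]).
dim = 2;                        % search dimensions: [C, gam]
lb  = [1e-2, 1e-3];             % lower bounds (assumed search range)
ub  = [1e+3, 1e+1];             % upper bounds (assumed search range)
N   = 30;                       % number of hawks
T   = 100;                      % maximum iterations

% Fitness = misclassification rate on held-out data for a candidate [C, gam]
fobj = @(p) kelm_val_error(p(1), p(2), Xtr, Ytr, Xval, Yval);

[bestErr, bestParams, curve] = HHO(N, T, lb, ub, dim, fobj);
fprintf('Best validation error %.4f with C = %.3g, gam = %.3g\n', ...
        bestErr, bestParams(1), bestParams(2));

function err = kelm_val_error(C, gam, Xtr, Ytr, Xval, Yval)
% Train a KELM with the candidate (C, gam) and score it on held-out data.
% (In MATLAB releases before R2016b, save this function in its own .m file.)
beta       = kelm_train(Xtr, Ytr, C, gam);
Yhat       = kelm_predict(Xval, Xtr, beta, gam);
[~, pred]  = max(Yhat, [], 2);          % predicted class index
[~, truth] = max(Yval, [], 2);          % true class index (one-hot labels)
err        = mean(pred ~= truth);       % misclassification rate
end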
According to the reported experiments, HHO-KELM shows clear advantages on large datasets: compared with the standard KELM it converges faster and reaches higher accuracy, which makes it a capable tool for large-scale classification problems.
In short, HHO-KELM, a kernel extreme learning machine optimized by Harris hawks optimization, is a strong data classification algorithm: HHO tunes the quantities behind KELM's weight computation, yielding more accurate and efficient classification. Future work can further refine HHO-KELM for more complex and varied classification tasks.
⛄ Core Code
% Developed in MATLAB R2013b
% Source codes demo version 1.0
% _____________________________________________________
% Main paper:
% Harris hawks optimization: Algorithm and applications
% Ali Asghar Heidari, Seyedali Mirjalili, Hossam Faris, Ibrahim Aljarah, Majdi Mafarja, Huiling Chen
% Future Generation Computer Systems,
% DOI: https://doi.org/10.1016/j.future.2019.02.028
% https://www.sciencedirect.com/science/article/pii/S0167739X18313530
% _____________________________________________________
% You can run the HHO code online at codeocean.com https://doi.org/10.24433/CO.1455672.v1
% You can find the HHO code at https://github.com/aliasghar68/Harris-hawks-optimization-Algorithm-and-applications-.git
% _____________________________________________________
% Author, inventor and programmer: Ali Asghar Heidari,
% PhD research intern, Department of Computer Science, School of Computing, National University of Singapore, Singapore
% Exceptionally Talented Ph. DC funded by Iran's National Elites Foundation (INEF), University of Tehran
% 03-03-2019
% Researchgate: https://www.researchgate.net/profile/Ali_Asghar_Heidari
% e-Mail: as_heidari@ut.ac.ir, aliasghar68@gmail.com,
% e-Mail (Singapore): aliasgha@comp.nus.edu.sg, t0917038@u.nus.edu
% _____________________________________________________
% Co-author and Advisor: Seyedali Mirjalili
% e-Mail: ali.mirjalili@gmail.com, seyedali.mirjalili@griffithuni.edu.au
% Homepage: http://www.alimirjalili.com
% _____________________________________________________
% Co-authors: Hossam Faris, Ibrahim Aljarah, Majdi Mafarja, and Hui-Ling Chen
% Homepage: http://www.evo-ml.com/2019/03/02/hho/
% _____________________________________________________
%
% Harris' hawks optimizer: In this algorithm, Harris' hawks try to catch the rabbit.
% T: maximum iterations, N: population size, CNVG: convergence curve
% To run HHO: [Rabbit_Energy,Rabbit_Location,CNVG]=HHO(N,T,lb,ub,dim,fobj)

function [Rabbit_Energy,Rabbit_Location,CNVG]=HHO(N,T,lb,ub,dim,fobj)

disp('HHO is now tackling your problem')
tic

% Initialize the location and energy of the rabbit
Rabbit_Location=zeros(1,dim);
Rabbit_Energy=inf;

% Initialize the locations of Harris' hawks
X=initialization(N,dim,ub,lb);

CNVG=zeros(1,T);
t=0; % Loop counter

while t<T
    for i=1:size(X,1)
        % Check boundaries
        FU=X(i,:)>ub;
        FL=X(i,:)<lb;
        X(i,:)=(X(i,:).*(~(FU+FL)))+ub.*FU+lb.*FL;
        % Fitness of locations
        fitness=fobj(X(i,:));
        % Update the location of the rabbit (best solution so far)
        if fitness<Rabbit_Energy
            Rabbit_Energy=fitness;
            Rabbit_Location=X(i,:);
        end
    end

    E1=2*(1-(t/T)); % factor to show the decreasing energy of the rabbit

    % Update the location of Harris' hawks
    for i=1:size(X,1)
        E0=2*rand()-1;              % -1<E0<1
        Escaping_Energy=E1*(E0);    % escaping energy of the rabbit

        if abs(Escaping_Energy)>=1
            %% Exploration:
            % Harris' hawks perch randomly based on 2 strategies:
            q=rand();
            rand_Hawk_index=floor(N*rand()+1);
            X_rand=X(rand_Hawk_index,:);
            if q<0.5
                % Perch based on other family members
                X(i,:)=X_rand-rand()*abs(X_rand-2*rand()*X(i,:));
            elseif q>=0.5
                % Perch on a random tall tree (random site inside group's home range)
                X(i,:)=(Rabbit_Location(1,:)-mean(X))-rand()*((ub-lb)*rand+lb);
            end

        elseif abs(Escaping_Energy)<1
            %% Exploitation:
            % Attacking the rabbit using 4 strategies regarding the behavior of the rabbit

            %% Phase 1: surprise pounce (seven kills)
            % Surprise pounce (seven kills): multiple, short rapid dives by different hawks
            r=rand(); % probability of each event

            if r>=0.5 && abs(Escaping_Energy)<0.5 % Hard besiege
                X(i,:)=(Rabbit_Location)-Escaping_Energy*abs(Rabbit_Location-X(i,:));
            end

            if r>=0.5 && abs(Escaping_Energy)>=0.5 % Soft besiege
                Jump_strength=2*(1-rand()); % random jump strength of the rabbit
                X(i,:)=(Rabbit_Location-X(i,:))-Escaping_Energy*abs(Jump_strength*Rabbit_Location-X(i,:));
            end

            %% Phase 2: performing team rapid dives (leapfrog movements)
            if r<0.5 && abs(Escaping_Energy)>=0.5 % Soft besiege: rabbit tries to escape by many zigzag deceptive motions
                Jump_strength=2*(1-rand());
                X1=Rabbit_Location-Escaping_Energy*abs(Jump_strength*Rabbit_Location-X(i,:));
                if fobj(X1)<fobj(X(i,:)) % improved move?
                    X(i,:)=X1;
                else % hawks perform Levy-based short rapid dives around the rabbit
                    X2=Rabbit_Location-Escaping_Energy*abs(Jump_strength*Rabbit_Location-X(i,:))+rand(1,dim).*Levy(dim);
                    if fobj(X2)<fobj(X(i,:)) % improved move?
                        X(i,:)=X2;
                    end
                end
            end

            if r<0.5 && abs(Escaping_Energy)<0.5 % Hard besiege: rabbit tries to escape by many zigzag deceptive motions
                % Hawks try to decrease their average location with the rabbit
                Jump_strength=2*(1-rand());
                X1=Rabbit_Location-Escaping_Energy*abs(Jump_strength*Rabbit_Location-mean(X));
                if fobj(X1)<fobj(X(i,:)) % improved move?
                    X(i,:)=X1;
                else % perform Levy-based short rapid dives around the rabbit
                    X2=Rabbit_Location-Escaping_Energy*abs(Jump_strength*Rabbit_Location-mean(X))+rand(1,dim).*Levy(dim);
                    if fobj(X2)<fobj(X(i,:)) % improved move?
                        X(i,:)=X2;
                    end
                end
            end
        end
    end

    t=t+1;
    CNVG(t)=Rabbit_Energy;
    % Print the progress every 100 iterations
    % if mod(t,100)==0
    %     display(['At iteration ', num2str(t), ' the best fitness is ', num2str(Rabbit_Energy)]);
    % end
end
toc
end

% ___________________________________
function o=Levy(d)
beta=1.5;
sigma=(gamma(1+beta)*sin(pi*beta/2)/(gamma((1+beta)/2)*beta*2^((beta-1)/2)))^(1/beta);
u=randn(1,d)*sigma;
v=randn(1,d);
step=u./abs(v).^(1/beta);
o=step;
end
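Before plugging in a KELM-based fitness, the optimizer above can be smoke-tested on a simple benchmark. The snippet below assumes initialization.m (the uniform-random population generator distributed with the original HHO package) is on the MATLAB path:

% Quick smoke test of HHO on the sphere function
N = 30; T = 500; dim = 10;
lb = -100; ub = 100;
sphere = @(x) sum(x.^2);            % simple unimodal benchmark

[bestFit, bestPos, curve] = HHO(N, T, lb, ub, dim, sphere);

semilogy(curve, 'LineWidth', 1.5);  % convergence curve recorded in CNVG
xlabel('Iteration'); ylabel('Best fitness so far');
title('HHO convergence on the sphere function');
disp(bestFit); disp(bestPos);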
⛄ Results


⛄ References
[1] 吴丁杰, 温立书. 一种基于哈里斯鹰算法优化的核极限学习机[J]. 信息通信, 2021(034-011).
[2] 何敏, 刘建伟, 胡久松. 遗传优化核极限学习机的数据分类算法[J]. 传感器与微系统, 2017, 36(10): 3. DOI: 10.13873/J.1000-9787(2017)10-0141-03.
[3] 李永贞, 樊永显, 杨辉华. KELMPSP: 基于核极限学习机的假尿苷修饰位点识别[J]. 中国生物化学与分子生物学报, 2018, 34(7): 9. DOI: CNKI:SUN:SWHZ.0.2018-07-014.