I recently studied PCA (principal component analysis); the KL (Karhunen-Loève) transform is one way to derive it.
For the detailed theory, I have reposted the following article:
http://blog.youkuaiyun.com/kingskyleader/article/details/7734710
First, here is the code:
clear all;
close all;

N = 500;                                   % samples per class
% Three 6-D Gaussian classes; each column of mu/sigma holds the
% per-dimension mean and standard deviation of one class.
mu    = [-2    1   -2;
         -1   -2    2;
          2   -2   -1;
         -1    1   -1;
         -1   -1    1;
         -2   -1    1];
sigma = [0.8  0.9  0.7;
         0.9  0.7  0.8;
         0.7  0.8  0.9;
         1.5  1.2  1.3;
         1.4  1.1  1.2;
         1.1  1.0  1.3];
x1 = repmat(mu(:,1),1,N) + repmat(sigma(:,1),1,N).*randn(6,N);
x2 = repmat(mu(:,2),1,N) + repmat(sigma(:,2),1,N).*randn(6,N);
x3 = repmat(mu(:,3),1,N) + repmat(sigma(:,3),1,N).*randn(6,N);
% Stack the three classes as rows: X is 1500 x 6 (samples x features).
X = [x1'; x2'; x3'];
%% Mean of each feature, replicated to the size of X
mean_X = mean(X);
mean_XMatrix = repmat(mean_X, size(X,1), 1);
%% PCA (princomp is deprecated in newer MATLAB releases; pca is its replacement)
[COEFF,SCORE,latent,tsquare] = princomp(X);
%% Dimensionality reduction: keep the first two principal components
X_temp = X - mean_XMatrix;          % center the data
COEFF  = COEFF(:,1:2);
Y      = X_temp*COEFF;              % project onto the top-2 subspace (equals SCORE(:,1:2))
% Plot the distribution of the projected 2-D features, one marker per class
y1_new = Y(1:500,:);
y2_new = Y(501:1000,:);
y3_new = Y(1001:1500,:);
figure
plot(y1_new(:,1),y1_new(:,2),'*', y2_new(:,1),y2_new(:,2),'o', y3_new(:,1),y3_new(:,2),'x');
title('Feature selection');
%% Reconstruct the original signal from the 2-D projection
X_rec = Y*COEFF' + mean_XMatrix;
%% Run PCA on the reconstruction and compare it against the original signal
[COEFF2,SCORE2,latent2,tsquare2] = princomp(X_rec);
errfeature = latent - latent2;      % difference of the eigenvalue spectra
recon_err  = X - X_rec;             % per-sample reconstruction error
                                    % (renamed: 'error' shadows a MATLAB builtin)
The code above reduces 6-dimensional feature vectors to two dimensions and plots the result.
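For readers without MATLAB, here is a sketch of the same pipeline in Python with NumPy (my own port, not part of the original post). It computes the PCA basis directly as the KL transform, i.e. the eigendecomposition of the sample covariance, rather than calling princomp; the class means and standard deviations match the script above.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500  # samples per class

# Per-class means and standard deviations of the 6-D Gaussians,
# matching the MATLAB script (one row per class).
mu    = np.array([[-2, -1,  2, -1, -1, -2],
                  [ 1, -2, -2,  1, -1, -1],
                  [-2,  2, -1, -1,  1,  1]], dtype=float)
sigma = np.array([[0.8, 0.9, 0.7, 1.5, 1.4, 1.1],
                  [0.9, 0.7, 0.8, 1.2, 1.1, 1.0],
                  [0.7, 0.8, 0.9, 1.3, 1.2, 1.3]])

# Stack the three classes: X is (1500, 6), samples x features.
X = np.vstack([m + s * rng.standard_normal((N, 6)) for m, s in zip(mu, sigma)])

# KL transform: eigendecomposition of the sample covariance matrix.
Xc = X - X.mean(axis=0)                  # center the data
cov = Xc.T @ Xc / (X.shape[0] - 1)       # 6x6 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]        # sort descending by variance
coeff = eigvecs[:, order[:2]]            # top-2 principal directions (6x2)

Y = Xc @ coeff                           # 2-D projection of each sample
X_rec = Y @ coeff.T + X.mean(axis=0)     # reconstruction from 2 components

print(Y.shape)                           # (1500, 2)
```

The projection and reconstruction lines mirror `Y = X_temp*COEFF` and `X_rec = Y*COEFF' + mean_XMatrix` in the MATLAB code; the only real difference is that the eigenvectors come from `eigh` instead of `princomp`, so they must be sorted by descending eigenvalue by hand.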
Alright, now for the theory.