Preface
This is one of my assignments for the Statistical Learning Methods course. The instructor required that it be done in MATLAB, which I had not used much before, so some of the algorithms may take detours simply because I did not know the right function or toolbox. I could not find MATLAB syntax highlighting, so the code blocks use C highlighting instead. Some of the algorithms were adapted from Python implementations; suggestions and corrections are welcome. Thanks!
Dataset
The provided flower.txt file is the iris dataset. It contains 3 classes of flowers (samples 1-50 are one class, samples 51-100 another, and samples 101-150 the third), and each flower has 4 measured features. Since we only deal with binary classification here, only the first 100 samples are used. Within each class, the first 40 samples are used for training and the last 10 for testing, which gives 80 training samples and 20 test samples.
Main program
First, create a new script file classification.m in the MATLAB project; reference code is below:
classification.m
clear;
%% step1: split the dataset
flower = load('flower.txt');
train_X = [flower(1:40, :);flower(51:90, :)];
train_Y = [ones(40, 1); zeros(40, 1)];
test_X = [flower(41:50, :);flower(91:100, :)]; % rows 91:100 only; 91:end would also pull in the third class
test_Y = [ones(10, 1); zeros(10, 1)];
%% step2: train each classifier and predict test_Y
output_perceptron = perceptron(train_X, train_Y, test_X); % a misclassified point is chosen at random, so results can differ between runs
output_knn = knn(train_X, train_Y, test_X); % in quick tests, k = 9 worked well
output_logistic = logistic(train_X, train_Y, test_X);
output_entropy = entropy(train_X, train_Y, test_X);
output_tree = tree(train_X, train_Y, test_X);
output_nb = nb(train_X, train_Y, test_X); % the features are continuous, so a Gaussian naive Bayes classifier is needed
%% step3: analyze the results: accuracy, precision, recall, etc.
output_set = [output_perceptron;output_knn;output_logistic;output_entropy;output_tree;output_nb];
compare(test_Y,output_set);
function compare(test_Y, output)
result = zeros(size(output, 1), 3); % one row of metrics per classifier
for i = 1:size(output, 1)
    [ACC, PRE, REC] = evaluation(test_Y, output(i, :));
    result(i, :) = [ACC, PRE, REC];
end
bar(result);
grid on;
legend('ACC', 'PRE', 'REC');
set(gca, 'XTickLabel', {'perceptron', 'knn', 'logistic', 'entropy', 'tree', 'nb'});
xlabel('Classifier');
ylabel('Metric value');
end
function [ACC,PRE,REC] = evaluation(test_Y,output) % evaluate classification performance: ACC accuracy, PRE precision, REC recall
TP = 0;
FN = 0;
FP = 0;
TN = 0;
for i = 1:length(test_Y)
    if test_Y(i) == 1
        if output(i) == 1
            TP = TP + 1; % positive predicted as positive
        else
            FN = FN + 1; % positive predicted as negative
        end
    else
        if output(i) == 1
            FP = FP + 1; % negative predicted as positive
        else
            TN = TN + 1; % negative predicted as negative
        end
    end
end
ACC = (TP + TN) / (TP + FN + FP + TN);
PRE = TP / (TP + FP); % NaN if the classifier predicts no positives at all
REC = TP / (TP + FN);
end
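As a quick sanity check of evaluation, here is a toy example with a known confusion matrix; y_true and y_pred below are made up purely for illustration, and since evaluation is a local function the check has to run inside classification.m (or evaluation can be moved to its own file):
% Toy data: TP = 2, FN = 1, FP = 1, TN = 1
y_true = [1 1 1 0 0];
y_pred = [1 0 1 1 0];
[ACC, PRE, REC] = evaluation(y_true, y_pred)
% Expected: ACC = 3/5 = 0.60, PRE = 2/3, REC = 2/3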
Subroutines
Step 2: create six function files, perceptron, knn, logistic, entropy, tree, and nb, implementing the corresponding six classifiers, and copy the code into the matching files below. Each classifier takes only the training set (X and Y) and the test set X as input, and returns the predicted y for the test set (a vector).
perceptron.m
function [ test_Y ] = perceptron( X, Y, test_X )
%% initialize w, b, alpha
w = [0, 0, 0, 0];
b = 0;
alpha = 1; % learning rate
sample = X;
labels = 2 * Y - 1; % map {0,1} labels to {-1,+1} (named 'labels' so it does not shadow the built-in sign())
maxstep = 1000;
%% update w, b
for i = 1:maxstep
    % idx_misclass: indices of misclassified samples; counter: how many there are
    [idx_misclass, counter] = find_misclassified(sample, labels, w, b);
    if counter ~= 0
        R = unidrnd(counter); % pick one misclassified point uniformly at random
        w = w + alpha * sample(idx_misclass(R), :) * labels(idx_misclass(R));
        b = b + alpha * labels(idx_misclass(R));
    else
        break % no misclassified points left: the training data are separated
    end
end
%% predict on the test set
test_Y = zeros(1, size(test_X, 1));
for i = 1:size(test_X, 1)
    if w * test_X(i, :)' + b >= 0
        test_Y(i) = 1;
    else
        test_Y(i) = 0;
    end
end
end
function [idx_misclass, counter] = find_misclassified(sample, label, w, b)
counter = 0;
idx_misclass = [];
for i = 1:length(label)
    if label(i) * (w * sample(i, :)' + b) <= 0 % sample i is on the wrong side of the hyperplane
        idx_misclass = [idx_misclass i];
        counter = counter + 1;
    end
end
end
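A minimal smoke test for perceptron.m, using the same split as classification.m. The first two iris classes are linearly separable, so the algorithm should converge within maxstep updates and classify the test set perfectly, although the weight vector itself varies from run to run because the misclassified point is chosen at random:
flower = load('flower.txt');
train_X = [flower(1:40, :); flower(51:90, :)];
train_Y = [ones(40, 1); zeros(40, 1)];
test_X = [flower(41:50, :); flower(91:100, :)];
pred = perceptron(train_X, train_Y, test_X);
acc = mean(pred(:) == [ones(10, 1); zeros(10, 1)]); % expect 1.00
fprintf('perceptron test accuracy: %.2f\n', acc);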
knn.m
function [ test_Y ] = knn( X, Y, test_X )
k = 7; % number of neighbors; odd, so a binary vote cannot tie
test_Y = zeros(1, size(test_X, 1));
for i = 1:size(test_X, 1)
    dist = sqrt(sum((X - test_X(i, :)).^2, 2)); % Euclidean distance to every training sample
    [~, idx] = sort(dist); % nearest training samples first
    test_Y(i) = mode(Y(idx(1:k))); % majority vote among the k nearest labels
end
end
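The distance computation uses implicit expansion (X - test_X(i, :)), which requires MATLAB R2016b or later; on older versions it can be replaced with bsxfun(@minus, X, test_X(i, :)). mode takes the majority vote over the k nearest labels, and with binary labels and an odd k the vote can never tie.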