Perceptron
A binary classification model.
$f(x) = \mathrm{sign}(w \cdot x + b)$
Loss function (summed over the set $M$ of misclassified points): $L(w, b) = -\sum_{x_i \in M} y_i(w \cdot x_i + b)$
Algorithm
Stochastic Gradient Descent (SGD)
Randomly pick one misclassified point at a time and take a gradient step on it:
$w \leftarrow w + \eta y_i x_i$
$b \leftarrow b + \eta y_i$
When an instance is misclassified, i.e. it lies on the wrong side of the separating hyperplane, $w$ and $b$ are adjusted so that the hyperplane moves toward that misclassified point, until the point is classified correctly.
import numpy as np
# The data is linearly separable, two classes
# Here the decision boundary is a linear function of the input features
class Model:
    def __init__(self, data):
        self.data = data
        # one weight per feature (the last column of each row is the label)
        self.w = np.ones(len(data[0]) - 1, dtype=np.float32)
        self.b = 0
        self.l_rate = 0.1

    def sign(self, x, w, b):
        y = np.dot(x, w) + b
        return y

    # stochastic gradient descent
    def fit(self, X_train, y_train):
        is_wrong = False
        while not is_wrong:
            wrong_count = 0
            for d in range(len(X_train)):
                X = X_train[d]
                y = y_train[d]
                if y * self.sign(X, self.w, self.b) <= 0:
                    self.w = self.w + self.l_rate * np.dot(y, X)
                    self.b = self.b + self.l_rate * y
                    wrong_count += 1
            if wrong_count == 0:
                is_wrong = True
        return 'Perceptron Model!'

    def score(self, X_test, y_test):
        # fraction of test points lying on the correct side of the hyperplane
        right_count = 0
        for X, y in zip(X_test, y_test):
            if y * self.sign(X, self.w, self.b) > 0:
                right_count += 1
        return right_count / len(X_test)
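A minimal usage sketch of the Model class above (the toy points and labels below are made up for illustration, not part of the original):
# hypothetical toy data: two linearly separable clusters, labels in {-1, +1}
data = np.array([[3.0, 3.0, 1], [4.0, 3.0, 1], [1.0, 1.0, -1], [0.5, 2.0, -1]])
X, y = data[:, :-1], data[:, -1]
perceptron = Model(data)
perceptron.fit(X, y)              # loops until every point is classified correctly
print(perceptron.w, perceptron.b)
print(perceptron.score(X, y))     # 1.0 once training has converged
Because the data is linearly separable, the perceptron convergence theorem guarantees that fit terminates.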
The Perceptron class provided by scikit-learn:
from sklearn.linear_model import Perceptron
clf = Perceptron(fit_intercept=False, max_iter=1000, shuffle=False)  # recent scikit-learn versions use max_iter (older releases called it n_iter)
clf.fit(X, y)
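The X and y used above are not defined in this snippet; as an illustration only (this data choice is an assumption, not from the original), they could be built from two classes of the iris data:
import numpy as np
from sklearn.datasets import load_iris
# assumption for illustration: two iris classes, two features, labels in {-1, +1}
iris = load_iris()
mask = iris.target < 2
X = iris.data[mask][:, :2]
y = np.where(iris.target[mask] == 1, 1, -1)
After clf.fit(X, y), clf.coef_ holds the learned weights and clf.score(X, y) reports the training accuracy.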
Support Vector Machine (SVM)
Separating hyperplane: $w^Tx+b=0$
Distance from a point to the hyperplane: $r=\frac{|w^Tx+b|}{\|w\|_2}$
$\|w\|_2$ is the 2-norm: $\|w\|_2=\sqrt{\sum^m_{i=1}w_i^2}$
The boundary is a hyperplane, and (after rescaling) the samples satisfy:
$w^Tx+b \geq +1$ for positive samples
$w^Tx+b \leq -1$ for negative samples
Functional margin: $y_i(w^Tx+b)$, i.e. the label times the raw output
Geometric margin: $r=\frac{y_i(w^Tx+b)}{\|w\|_2}$; when a point is correctly classified, its geometric margin is exactly its distance to the hyperplane.
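A small numeric sketch of these two quantities (the hyperplane and the point below are made up for illustration):
import numpy as np
# hypothetical hyperplane w^T x + b = 0 and one labelled point
w = np.array([2.0, 1.0])
b = -3.0
x_i = np.array([3.0, 2.0])
y_i = 1
functional_margin = y_i * (np.dot(w, x_i) + b)             # y_i (w^T x_i + b) = 5.0
geometric_margin = functional_margin / np.linalg.norm(w)   # distance to the hyperplane, about 2.236
print(functional_margin, geometric_margin)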
To maximize the geometric margin, the basic SVM problem can be cast as the following optimization, where $\frac{r^*}{\|w\|}$ is the geometric margin and $r^*$ the functional margin:
$\max\ \frac{r^*}{\|w\|}$
$\text{subject to}\ y_i(w^Tx_i+b)\geq r^*,\ i=1,2,\ldots,m$
This maximizes the geometric margin while classifying every point correctly. The program is not convex as written, so we ① convert it into a convex problem and ② solve the dual problem with Lagrange multipliers and the KKT conditions.
① Converting to a convex problem:
First set $r^* = 1$ to simplify the computation (it only fixes the scale of the margin and does not affect the solution).
$\max\ \frac{1}{\|w\|}$
$s.t.\ y_i(w^Tx_i+b)\geq 1,\ i=1,2,\ldots,m$
Then convert $\max\ \frac{1}{\|w\|}$ into $\min\ \frac{1}{2}\|w\|^2$, which is a convex problem to solve; the factor $\frac{1}{2}$ simply makes the derivative cleaner.
$\min\ \frac{1}{2}\|w\|^2$
$s.t.\ y_i(w^Tx_i+b)\geq 1,\ i=1,2,\ldots,m$
② Solving for the optimum with Lagrange multipliers and the KKT conditions:
$\min\ \frac{1}{2}\|w\|^2$
$s.t.\ -y_i(w^Tx_i+b)+1\leq 0,\ i=1,2,\ldots,m$
These are combined into the Lagrangian:
$L(w, b, \alpha) = \frac{1}{2}\|w\|^2+\sum^m_{i=1}\alpha_i\left(-y_i(w^Tx_i+b)+1\right)$
Derivation:
Maximizing $L(w, b, \alpha)$ over $\alpha_i \geq 0$ recovers the original objective whenever the constraints hold (and $+\infty$ otherwise), and by weak duality:
$\min\ f(x)=\min \max\ L(w, b, \alpha)\geq \max \min\ L(w, b, \alpha)$
A detailed walkthrough: https://blog.youkuaiyun.com/b285795298/article/details/81977271 (it also covers the similarities and differences between SVM and logistic regression (LR), and is well worth reading).
From the KKT conditions:
$\frac{\partial }{\partial w}L(w, b, \alpha)=w-\sum\alpha_iy_ix_i=0,\quad w=\sum\alpha_iy_ix_i$
$\frac{\partial }{\partial b}L(w, b, \alpha)=-\sum\alpha_iy_i=0,\quad \sum\alpha_iy_i=0$
Substituting these back into $L(w, b, \alpha)$:
$\min\ L(w, b, \alpha)=\frac{1}{2}\|w\|^2+\sum^m_{i=1}\alpha_i\left(-y_i(w^Tx_i+b)+1\right)$
$\qquad=\frac{1}{2}w^Tw-\sum^m_{i=1}\alpha_iy_iw^Tx_i-b\sum^m_{i=1}\alpha_iy_i+\sum^m_{i=1}\alpha_i$
$\qquad=\frac{1}{2}w^T\sum\alpha_iy_ix_i-\sum^m_{i=1}\alpha_iy_iw^Tx_i+\sum^m_{i=1}\alpha_i$
$\qquad=\sum^m_{i=1}\alpha_i-\frac{1}{2}\sum^m_{i=1}\alpha_iy_iw^Tx_i$
$\qquad=\sum^m_{i=1}\alpha_i-\frac{1}{2}\sum^m_{i,j=1}\alpha_i\alpha_jy_iy_j(x_i\cdot x_j)$
Then turn the max problem into a min problem:
$\max\ \sum^m_{i=1}\alpha_i-\frac{1}{2}\sum^m_{i,j=1}\alpha_i\alpha_jy_iy_j(x_i\cdot x_j)=\min\ \frac{1}{2}\sum^m_{i,j=1}\alpha_i\alpha_jy_iy_j(x_i\cdot x_j)-\sum^m_{i=1}\alpha_i$
$s.t.\ \sum^m_{i=1}\alpha_iy_i=0,$
$\alpha_i \geq 0,\ i=1,2,\ldots,m$
This is the dual form of the SVM problem.
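To make the dual concrete, here is a small sketch (not from the original; the three toy points and the use of scipy's SLSQP solver are assumptions) that solves the hard-margin dual above numerically and then recovers $w=\sum\alpha_iy_ix_i$ and $b$ from a support vector:
import numpy as np
from scipy.optimize import minimize

X = np.array([[3.0, 3.0], [4.0, 3.0], [1.0, 1.0]])    # toy points (assumption)
y = np.array([1.0, 1.0, -1.0])
K = X @ X.T                                            # Gram matrix of x_i . x_j

def dual_objective(alpha):
    # 1/2 * sum_ij a_i a_j y_i y_j (x_i . x_j) - sum_i a_i
    return 0.5 * (alpha * y) @ K @ (alpha * y) - alpha.sum()

constraints = {'type': 'eq', 'fun': lambda a: a @ y}   # sum_i alpha_i y_i = 0
bounds = [(0, None)] * len(y)                          # alpha_i >= 0
res = minimize(dual_objective, np.zeros(len(y)), bounds=bounds, constraints=constraints)

alpha = res.x
w = (alpha * y) @ X                                    # w = sum_i alpha_i y_i x_i
sv = int(np.argmax(alpha))                             # index of a support vector (alpha_i > 0)
b = y[sv] - w @ X[sv]
print(w, b)                                            # roughly w = (0.5, 0.5), b = -2 for these points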
Kernel function
A kernel performs the computation in the low-dimensional input space while yielding the result of an inner product in a high-dimensional feature space; it is in that high-dimensional space that the data becomes linearly separable.
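A minimal sketch of this idea (the 2-D points below are made up): the polynomial kernel $K(x, z) = (x \cdot z)^2$ gives exactly the inner product after the explicit map $\phi(x) = (x_1^2, \sqrt{2}x_1x_2, x_2^2)$, so the 3-D feature space never has to be built:
import numpy as np

def phi(x):
    # explicit feature map for the kernel K(x, z) = (x . z)^2
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

x = np.array([1.0, 2.0])
z = np.array([3.0, 1.0])
low_dim = np.dot(x, z) ** 2          # computed entirely in the 2-D input space
high_dim = np.dot(phi(x), phi(z))    # same value, computed in the 3-D feature space
print(low_dim, high_dim)             # both print 25.0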
Slack variables
Introduce slack variables $\xi_i\geq0$, the amount by which each data point is allowed to violate the functional margin.
Objective: $\min\ \frac{1}{2}\|w\|^2+C\sum\xi_i\qquad s.t.\ y_i(w^Tx_i+b)\geq1-\xi_i$
Dual problem:
$\max\ \sum^m_{i=1}\alpha_i-\frac{1}{2}\sum^m_{i,j=1}\alpha_i\alpha_jy_iy_j(x_i\cdot x_j)=\min\ \frac{1}{2}\sum^m_{i,j=1}\alpha_i\alpha_jy_iy_j(x_i\cdot x_j)-\sum^m_{i=1}\alpha_i$
$s.t.\ 0\leq\alpha_i \leq C,\ i=1,2,\ldots,m,\quad \sum^m_{i=1}\alpha_iy_i=0$
import numpy as np
# Simplified SMO-style implementation of a soft-margin SVM (linear or polynomial kernel)
class SVM:
    def __init__(self, max_iter=100, kernel='linear'):
        self.max_iter = max_iter
        self._kernel = kernel

    def init_args(self, features, labels):
        self.m, self.n = features.shape
        self.X = features
        self.Y = labels
        self.b = 0.0
        self.alpha = np.ones(self.m)
        # keep all Ei values in a list
        self.E = [self._E(i) for i in range(self.m)]
        # penalty parameter C for the slack variables
        self.C = 1.0

    def _KKT(self, i):
        y_g = self._g(i) * self.Y[i]
        if self.alpha[i] == 0:
            return y_g >= 1
        elif 0 < self.alpha[i] < self.C:
            return y_g == 1
        else:
            return y_g <= 1

    # g(x): prediction for sample xi (X[i])
    def _g(self, i):
        r = self.b
        for j in range(self.m):
            r += self.alpha[j] * self.Y[j] * self.kernel(self.X[i], self.X[j])
        return r

    # kernel function
    def kernel(self, x1, x2):
        if self._kernel == 'linear':
            return sum([x1[k] * x2[k] for k in range(self.n)])
        elif self._kernel == 'poly':
            return (sum([x1[k] * x2[k] for k in range(self.n)]) + 1) ** 2
        return 0

    # E(x): difference between the prediction g(x) and the true label y
    def _E(self, i):
        return self._g(i) - self.Y[i]

    def _init_alpha(self):
        # the outer loop first scans all samples with 0 < alpha < C and checks the KKT conditions
        index_list = [i for i in range(self.m) if 0 < self.alpha[i] < self.C]
        # otherwise it scans the whole training set
        non_satisfy_list = [i for i in range(self.m) if i not in index_list]
        index_list.extend(non_satisfy_list)
        for i in index_list:
            if self._KKT(i):
                continue
            E1 = self.E[i]
            # if E1 is positive, pick the smallest E2; if E1 is negative, pick the largest
            if E1 >= 0:
                j = min(range(self.m), key=lambda x: self.E[x])
            else:
                j = max(range(self.m), key=lambda x: self.E[x])
            return i, j

    def _compare(self, _alpha, L, H):
        if _alpha > H:
            return H
        elif _alpha < L:
            return L
        else:
            return _alpha

    def fit(self, features, labels):
        self.init_args(features, labels)
        for t in range(self.max_iter):
            # train
            i1, i2 = self._init_alpha()
            # clipping bounds for alpha2
            if self.Y[i1] == self.Y[i2]:
                L = max(0, self.alpha[i1] + self.alpha[i2] - self.C)
                H = min(self.C, self.alpha[i1] + self.alpha[i2])
            else:
                L = max(0, self.alpha[i2] - self.alpha[i1])
                H = min(self.C, self.C + self.alpha[i2] - self.alpha[i1])
            E1 = self.E[i1]
            E2 = self.E[i2]
            # eta = K11 + K22 - 2*K12
            eta = self.kernel(self.X[i1], self.X[i1]) + self.kernel(self.X[i2], self.X[i2]) \
                - 2 * self.kernel(self.X[i1], self.X[i2])
            if eta <= 0:
                # print('eta <= 0')
                continue
            alpha2_new_unc = self.alpha[i2] + self.Y[i2] * (E2 - E1) / eta
            alpha2_new = self._compare(alpha2_new_unc, L, H)
            alpha1_new = self.alpha[i1] + self.Y[i1] * self.Y[i2] * (self.alpha[i2] - alpha2_new)
            b1_new = -E1 - self.Y[i1] * self.kernel(self.X[i1], self.X[i1]) * (alpha1_new - self.alpha[i1]) \
                - self.Y[i2] * self.kernel(self.X[i2], self.X[i1]) * (alpha2_new - self.alpha[i2]) + self.b
            b2_new = -E2 - self.Y[i1] * self.kernel(self.X[i1], self.X[i2]) * (alpha1_new - self.alpha[i1]) \
                - self.Y[i2] * self.kernel(self.X[i2], self.X[i2]) * (alpha2_new - self.alpha[i2]) + self.b
            if 0 < alpha1_new < self.C:
                b_new = b1_new
            elif 0 < alpha2_new < self.C:
                b_new = b2_new
            else:
                # take the midpoint
                b_new = (b1_new + b2_new) / 2
            # update the parameters
            self.alpha[i1] = alpha1_new
            self.alpha[i2] = alpha2_new
            self.b = b_new
            self.E[i1] = self._E(i1)
            self.E[i2] = self._E(i2)
        return 'train done!'

    def predict(self, data):
        r = self.b
        for i in range(self.m):
            r += self.alpha[i] * self.Y[i] * self.kernel(data, self.X[i])
        return 1 if r > 0 else -1

    def score(self, X_test, y_test):
        right_count = 0
        for i in range(len(X_test)):
            result = self.predict(X_test[i])
            if result == y_test[i]:
                right_count += 1
        return right_count / len(X_test)

    def _weight(self):
        # linear model
        yx = self.Y.reshape(-1, 1) * self.X
        self.w = np.dot(yx.T, self.alpha)
        return self.w
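A minimal usage sketch for the SVM class above (the toy data is made up; any linearly separable two-class set with labels in {-1, +1} will do):
# hypothetical toy data: two well-separated Gaussian clusters
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 2) + [2.0, 2.0], rng.randn(20, 2) + [-2.0, -2.0]])
y = np.array([1.0] * 20 + [-1.0] * 20)

svm = SVM(max_iter=200, kernel='linear')
svm.fit(X, y)
print(svm.score(X, y))    # training accuracy
print(svm._weight())      # w recovered from alpha (meaningful for the linear kernel)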
The SVC class provided by scikit-learn:
from sklearn.svm import SVC
clf = SVC()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
sklearn.svm.SVC(C=1.0, kernel='rbf', degree=3, gamma='auto', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape=None, random_state=None)
Parameters:
C: penalty parameter of C-SVC, default 1.0.
A larger C penalizes the slack variables more heavily, pushing them toward 0, i.e. the penalty for misclassification grows and the model tends toward classifying the whole training set correctly; training accuracy is then high, but generalization is weak. A smaller C reduces the penalty for misclassification, allowing some errors to be treated as noise, which usually generalizes better.
kernel: kernel function, default 'rbf'; one of 'linear', 'poly', 'rbf', 'sigmoid', 'precomputed'
– linear: u'v
– polynomial: (gamma*u'v + coef0)^degree
– RBF: exp(-gamma*|u-v|^2)
– sigmoid: tanh(gamma*u'v + coef0)
degree: degree of the 'poly' kernel, default 3; ignored by the other kernels.
gamma: kernel coefficient for 'rbf', 'poly' and 'sigmoid'. The default 'auto' uses 1/n_features.
coef0: constant term of the kernel; only used by 'poly' and 'sigmoid'.
probability: whether to enable probability estimates, default False.
shrinking: whether to use the shrinking heuristic, default True.
tol: tolerance for the stopping criterion, default 1e-3.
cache_size: size of the kernel cache in MB, default 200.
class_weight: per-class weights passed as a dict; class i gets the penalty parameter weight*C (the C of C-SVC).
verbose: whether to enable verbose output.
max_iter: maximum number of iterations; -1 means no limit.
decision_function_shape: 'ovo', 'ovr' or None, default None.
random_state: seed used when shuffling the data, an int.
The main parameters to tune are C, kernel, degree, gamma and coef0; a small illustration of C and gamma follows below.
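As an illustration only (the moons dataset and the specific values tried below are assumptions, not part of the original), a small sweep shows how C and gamma trade training accuracy against generalization for an RBF SVC:
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for C in (0.1, 1.0, 100.0):
    for gamma in (0.1, 1.0, 10.0):
        clf = SVC(C=C, kernel='rbf', gamma=gamma)
        clf.fit(X_train, y_train)
        print(C, gamma, clf.score(X_train, y_train), clf.score(X_test, y_test))
# larger C and gamma generally push training accuracy up but can lower test accuracy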
Reference: https://github.com/wzyonggege/statistical-learning-method