Code Implementation of Logistic Regression
Logistic regression is a classic binary classification method in machine learning.
The code needs to implement the following modules:
1. sigmoid: maps a score to a probability
$$g(z)=\frac{1}{1+e^{-z}}$$
2. model: returns the predicted values
$$h_{\theta}(x)=g(\theta^{T}x)=\frac{1}{1+e^{-\theta^{T}x}}$$
3. loss: computes the loss for the current parameters
Take the negative of the log-likelihood so that maximizing the likelihood becomes a minimization problem solvable by gradient descent:
$$l(h_{\theta}(x),y)=-y\log(h_{\theta}(x))-(1-y)\log(1-h_{\theta}(x))$$
Averaging the loss over all samples:
$$L(\theta)=\frac{1}{n}\sum_{i=1}^{n}l\left(h_{\theta}(x_{i}),y_{i}\right)$$
4. gradient: computes the gradient of the loss with respect to each parameter (see the derivation sketch after this list)
$$\frac{\partial L(\theta)}{\partial \theta_{j}}=\frac{1}{n}\sum_{i=1}^{n}\left(h_{\theta}(x_{i})-y_{i}\right)x_{ij}$$
5. descent: updates the parameters
$$\theta_{j}=\theta_{j}-\alpha\times\frac{\partial L(\theta)}{\partial \theta_{j}}$$
where $\alpha$ is the learning rate.
6. accuracy: computes the classification accuracy (not implemented in the code below; a sketch is given at the end of this post)
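The gradient in step 4 follows from differentiating the per-sample loss in step 3. A brief derivation sketch, using the standard identity $g'(z)=g(z)(1-g(z))$ for the sigmoid:

$$\frac{\partial h_{\theta}(x)}{\partial \theta_{j}}=h_{\theta}(x)\left(1-h_{\theta}(x)\right)x_{j}$$

$$\frac{\partial l}{\partial \theta_{j}}=-\frac{y}{h_{\theta}(x)}\cdot\frac{\partial h_{\theta}(x)}{\partial \theta_{j}}+\frac{1-y}{1-h_{\theta}(x)}\cdot\frac{\partial h_{\theta}(x)}{\partial \theta_{j}}=\left(h_{\theta}(x)-y\right)x_{j}$$

Averaging this over the $n$ samples gives the gradient formula in step 4.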
Python implementation:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import time

# sigmoid
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# model
def model(X, theta):
    return sigmoid(np.dot(X, theta.T))

# loss
def loss(X, y, theta):
    left = np.multiply(-y, np.log(model(X, theta)))
    right = np.multiply(1 - y, np.log(1 - model(X, theta)))
    return np.sum(left - right) / len(X)

# gradient
def gradient(X, y, theta):
    grad = np.zeros(theta.shape)
    error = (model(X, theta) - y).ravel()
    for j in range(len(theta.ravel())):
        term = np.multiply(error, X[:, j])
        grad[0, j] = np.sum(term) / len(X)
    return grad
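# Optional: an equivalent vectorized form of the gradient (a sketch; it assumes
# the shapes used above, X: (n_samples, n_features) and theta: (1, n_features),
# and produces the same (1, n_features) result as gradient()).
def gradient_vectorized(X, y, theta):
    error = model(X, theta) - y          # (n_samples, 1) residuals
    return np.dot(error.T, X) / len(X)   # (1, n_features), matches theta's shape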
# gradient descent
# stop criterion: stop after a fixed number of iterations (thresh)
def stopCriterion(value, threshold):
    return value > threshold

# shuffle the data and split it into features X and labels y
def shuffleData(data):
    np.random.shuffle(data)
    cols = data.shape[1]
    X = data[:, 0:cols - 1]
    y = data[:, cols - 1:]
    return X, y

# mini-batch gradient descent: update theta on one batch per iteration
def descent(data, theta, batchSize, thresh, alpha):
    n = len(data)
    init_time = time.time()
    i = 0  # iteration counter
    k = 0  # start index of the current batch
    X, y = shuffleData(data)
    costs = [loss(X, y, theta)]
    while True:
        grad = gradient(X[k:k + batchSize], y[k:k + batchSize], theta)
        k += batchSize
        if k >= n:
            # one pass over the data finished: reshuffle and start over
            k = 0
            X, y = shuffleData(data)
        theta = theta - alpha * grad
        costs.append(loss(X, y, theta))
        i += 1
        if stopCriterion(i, thresh):
            break
    return theta, costs, time.time() - init_time
if __name__ == '__main__':
    data_path = "data/LogiReg_data.txt"
    pdData = pd.read_csv(data_path, header=None, names=['Exam 1', 'Exam 2', 'Admitted'])
    pdData.insert(0, 'ones', 1)  # add a bias column of ones
    orig_data = pdData.values    # as_matrix() was removed in recent pandas; use .values
    theta = np.zeros([1, 3])
    batchSize = 100
    theta, costs, dur = descent(orig_data, theta, batchSize, thresh=5000, alpha=0.000001)
    fig, ax = plt.subplots(figsize=(12, 4))
    ax.plot(np.arange(len(costs)), costs, 'r')
    ax.set_xlabel('Iterations')
    ax.set_ylabel('Loss')
    ax.set_title('Error vs. Iteration')
    plt.show()
Plot of the loss decreasing over iterations:
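Step 6 (accuracy) is listed above but not implemented in the code; below is a minimal sketch, assuming the theta and orig_data produced by the script above and a 0.5 decision threshold. The helper names predict and accuracy are illustrative, not part of the original code.

# predict: threshold the predicted probability at 0.5
def predict(X, theta):
    return [1 if p >= 0.5 else 0 for p in model(X, theta).ravel()]

# accuracy: fraction of samples whose predicted label matches the true label
def accuracy(data, theta):
    X = data[:, :-1]
    y = data[:, -1]
    preds = predict(X, theta)
    correct = sum(1 for p, t in zip(preds, y) if p == int(t))
    return correct / len(y)

# example usage, after running descent above:
# print('train accuracy: {:.2%}'.format(accuracy(orig_data, theta)))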