Machine Learning 1: Logistic Regression

A code implementation of logistic regression

Logistic regression is a classic binary classification method in machine learning.

The code needs the following modules:

1. sigmoid: maps any input to a probability

g(z) = \frac{1}{1 + e^{-z}}
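
A quick check of the definition: g is centered at 1/2 and saturates toward the two class labels at the extremes:

g(0) = \frac{1}{1 + e^{0}} = \frac{1}{2}, \qquad \lim_{z\to+\infty} g(z) = 1, \qquad \lim_{z\to-\infty} g(z) = 0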

2. model: returns the predicted value

h_{\theta}(x) = g(\theta^{T}x) = \frac{1}{1 + e^{-\theta^{T}x}}
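
Written out for the two-feature dataset used below (a constant 1 is prepended to each sample so that \theta_{0} acts as the intercept):

h_{\theta}(x) = g(\theta_{0}\cdot 1 + \theta_{1}x_{1} + \theta_{2}x_{2})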

3. loss: computes the loss for the current parameters

Negate the log-likelihood so that maximizing the likelihood becomes minimizing a loss by gradient descent:
l(h_{\theta}(x), y) = -y \log(h_{\theta}(x)) - (1 - y)\log(1 - h_{\theta}(x))
Each step uses the average loss over the samples:
L(\theta) = \frac{1}{n}\sum_{i=1}^{n} l(h_{\theta}(x_{i}), y_{i})
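
This per-sample loss is exactly the negative log of the Bernoulli likelihood of the label:

P(y \mid x;\theta) = h_{\theta}(x)^{y}\,(1 - h_{\theta}(x))^{1-y}, \qquad y \in \{0, 1\}

\log P(y \mid x;\theta) = y\log(h_{\theta}(x)) + (1-y)\log(1-h_{\theta}(x)) = -\,l(h_{\theta}(x), y)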

4. gradient: computes the gradient for each parameter

\frac{\partial L(\theta)}{\partial \theta_{j}} = \frac{1}{n}\sum_{i=1}^{n}\left(h_{\theta}(x_{i}) - y_{i}\right)x_{ij}
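
This compact form follows from the chain rule and the sigmoid identity g'(z) = g(z)(1 - g(z)); for a single sample:

\frac{\partial l}{\partial \theta_{j}} = \left(-\frac{y}{h_{\theta}(x)} + \frac{1-y}{1-h_{\theta}(x)}\right)h_{\theta}(x)\left(1-h_{\theta}(x)\right)x_{j} = \left(h_{\theta}(x) - y\right)x_{j}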

5. descent: performs the parameter update

\theta_{j} = \theta_{j} - \alpha \times \frac{\partial L(\theta)}{\partial \theta_{j}}

where \alpha is the learning rate.

6. accuracy: computes the prediction accuracy (a sketch of this module appears after descent in the listing below)

Python implementation:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import time

# sigmoid: map any real value into the interval (0, 1)
def sigmoid(z):
    return 1/(1 + np.exp(-z))
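
# Note: np.exp(-z) can overflow for very negative z; scipy.special.expit
# is an equivalent, overflow-safe sigmoid when SciPy is available.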

# model: predicted probability h_theta(x) = g(theta^T x)
def model(X, theta):
    return sigmoid(np.dot(X, theta.T))

# loss: average negative log-likelihood over the dataset
def loss(X, y, theta):
    left = np.multiply(-y, np.log(model(X, theta)))
    right = np.multiply(1 - y, np.log(1 - model(X, theta)))
    return np.sum(left - right)/(len(X))

# gradient: partial derivative of the average loss w.r.t. each theta_j
def gradient(X, y, theta):
    grad = np.zeros(theta.shape)
    error = (model(X, theta) - y).ravel()
    for j in range(len(theta.ravel())):
        term = np.multiply(error, X[:, j])
        grad[0, j] = np.sum(term)/len(X)
    return grad
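
# An equivalent vectorized form (a sketch, not part of the original
# listing): X.T @ error sums (h_theta(x_i) - y_i) * x_ij over all
# samples at once, so the loop over j disappears.
def gradient_vectorized(X, y, theta):
    error = model(X, theta) - y        # shape (n, 1)
    return (X.T @ error).T / len(X)    # shape (1, number of parameters)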

# Gradient descent
# Stopping criterion: here, stop once the iteration count exceeds the threshold
def stopCriterion(value, threshold):
    return value > threshold

# Shuffle the samples, then split into features X and labels y
def shuffleData(data):
    np.random.shuffle(data)
    cols = data.shape[1]
    X = data[:, 0:cols - 1]
    y = data[:, cols - 1:]
    return X, y

def descent(data, theta, batchSize, thresh, alpha):
    # batchSize selects the flavour of gradient descent:
    # 1 -> stochastic, len(data) -> full batch, in between -> mini-batch
    n = len(data)
    init_time = time.time()
    i = 0  # iteration counter
    k = 0  # start index of the current mini-batch
    X, y = shuffleData(data)
    costs = [loss(X, y, theta)]

    while True:
        grad = gradient(X[k:k+batchSize], y[k:k+batchSize], theta)
        k += batchSize
        if k >= n:  # one pass over the data finished: reshuffle and restart
            k = 0
            X, y = shuffleData(data)
        theta = theta - alpha*grad  # parameter update
        costs.append(loss(X, y, theta))
        i += 1
        if stopCriterion(i, thresh):
            break
    return theta, costs, time.time() - init_time
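
# accuracy (module 6): a minimal sketch that thresholds the predicted
# probability at 0.5 and compares against the labels.
# Usage example: accuracy(orig_data[:, :3], orig_data[:, 3:], theta)
def accuracy(X, y, theta):
    predictions = (model(X, theta) >= 0.5).astype(int)
    return np.mean(predictions == y)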

if __name__ == '__main__':

    data_path = "data/LogiReg_data.txt"
    pdData = pd.read_csv(data_path, header=None, names=['Exam 1', 'Exam 2', 'Admitted'])

    pdData.insert(0, 'ones', 1)  # prepend a column of ones for the intercept theta_0
    orig_data = pdData.to_numpy()  # as_matrix() was removed in modern pandas
    theta = np.zeros([1, 3])

    batchSize = 100
    theta, costs, dur = descent(orig_data, theta, batchSize, thresh=5000, alpha=0.000001)
    fig, ax = plt.subplots(figsize=(12,4))
    ax.plot(np.arange(len(costs)), costs, 'r')
    ax.set_xlabel('Iterations')
    ax.set_ylabel('Loss')
    ax.set_title('Error vs. Iteration')
    plt.show()

Plot of the decreasing loss over the iterations: (figure omitted)
