Machine Learning Day 8/100: Logistic Regression Principles and Implementation

This post works through the principles and implementation of logistic regression: it derives the maximum likelihood function and its log form, shows how the parameters are fit by gradient descent, and gives a complete Python implementation from weight initialization through prediction.


Day 8: Logistic Regression Principles and Implementation

github: 100DaysOfMLCode

Maximum likelihood function

$$L = \prod_{i=1}^{n} p_i^{y_i}(1-p_i)^{1-y_i}$$

Taking the logarithm gives

$$\log L = \sum_{i=1}^{n} y_i\log(p_i) + (1-y_i)\log(1-p_i)$$

where each probability comes from the sigmoid of a linear score:

$$p(z) = \frac{1}{1+e^{-z}}, \qquad z = wx + b$$

(The exponent must be $-z$, so that large positive scores map to probabilities near 1; this matches `np.exp(-result)` in the code.) Gradient descent minimizes the negative log-likelihood, i.e. the cross-entropy cost $J = -\log L$. Applying the chain rule for a single sample, and writing $a = p(z)$ for the prediction:

$$\frac{\partial J}{\partial w} = \frac{\partial J}{\partial p}\frac{\partial p}{\partial z}\frac{\partial z}{\partial w} = \left(-\frac{y}{p} + \frac{1-y}{1-p}\right)\,p(1-p)\,x = (a-y)x$$

$$\frac{\partial J}{\partial b} = a - y$$
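As a quick sanity check on the result $\partial J/\partial w = (a-y)x$, the analytic gradient can be compared against a finite-difference approximation of the cross-entropy cost. This is a minimal sketch, not part of the original post; the toy values for `w`, `b`, `x`, and `y` are arbitrary:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(w, b, x, y):
    # cross-entropy cost for a single sample
    a = sigmoid(np.dot(w, x) + b)
    return -(y * np.log(a) + (1 - y) * np.log(1 - a))

# arbitrary toy values
w = np.array([0.5, -0.3])
x = np.array([1.2, 0.7])
b, y = 0.1, 1.0

# analytic gradient: (a - y) * x
a = sigmoid(np.dot(w, x) + b)
analytic = (a - y) * x

# central finite-difference approximation of dJ/dw_j
eps = 1e-6
numeric = np.array([
    (cost(w + eps * np.eye(2)[j], b, x, y) - cost(w - eps * np.eye(2)[j], b, x, y)) / (2 * eps)
    for j in range(2)
])

print(np.allclose(analytic, numeric, atol=1e-6))  # True
```

If the two disagree, the derivation (or its implementation) has a sign or indexing error, which makes this a useful habit before running a full training loop.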

import numpy as np

def weightInitialization(n_features):
    # initialize weights and bias to zero
    w = np.zeros((1, n_features))
    b = 0
    return w, b

def sigmoid_activation(result):
    # sigmoid: 1 / (1 + e^{-z})
    final_result = 1 / (1 + np.exp(-result))
    return final_result
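Two properties worth verifying before using the activation: the sigmoid maps 0 to exactly 0.5, and it satisfies the symmetry $\sigma(-z) = 1 - \sigma(z)$. A small illustrative snippet (the function is restated here so the snippet runs on its own):

```python
import numpy as np

def sigmoid_activation(result):
    return 1 / (1 + np.exp(-result))

z = np.array([-2.0, 0.0, 2.0])
s = sigmoid_activation(z)
print(s[1])                                        # 0.5
print(np.allclose(sigmoid_activation(-z), 1 - s))  # True
```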

def model_optimize(w, b, X, Y):
    m = X.shape[0]

    # forward pass: predicted probabilities, shape (1, m)
    final_result = sigmoid_activation(np.dot(w, X.T) + b)
    Y_T = Y.T
    # cross-entropy cost (average negative log-likelihood)
    cost = (-1/m) * np.sum(Y_T * np.log(final_result) + (1 - Y_T) * np.log(1 - final_result))

    # gradients: dw = (1/m) X^T (a - y), db = (1/m) sum(a - y)
    dw = (1/m) * np.dot(X.T, (final_result - Y_T).T)
    db = (1/m) * np.sum(final_result - Y_T)

    grads = {"dw": dw, "db": db}

    return grads, cost
def model_predict(w, b, X, Y, learning_rate, no_iterations):
    # despite its name, this function trains the model by gradient descent
    costs = []
    for i in range(no_iterations):
        grads, cost = model_optimize(w, b, X, Y)
        dw = grads["dw"]
        db = grads["db"]
        # weight update
        w = w - learning_rate * dw.T
        b = b - learning_rate * db

        if i % 100 == 0:
            costs.append(cost)
            # print("Cost after %i iteration is %f" % (i, cost))

    # final parameters
    coeff = {"w": w, "b": b}
    gradient = {"dw": dw, "db": db}

    return coeff, gradient, costs
def predict(final_pred, m):
    # threshold probabilities at 0.5 to obtain class labels
    y_pred = np.zeros((1, m))
    for i in range(final_pred.shape[1]):
        if final_pred[0][i] > 0.5:
            y_pred[0][i] = 1
    return y_pred
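Putting it together: a compact, self-contained end-to-end run that mirrors the functions above on a synthetic two-blob dataset. The dataset, seed, and hyperparameters are illustrative choices, not from the original post:

```python
import numpy as np

# condensed mirror of the article's functions, for a self-contained demo
def sigmoid_activation(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, Y, learning_rate=0.1, no_iterations=1000):
    m, n = X.shape
    w, b = np.zeros((1, n)), 0.0                    # weightInitialization
    for _ in range(no_iterations):
        a = sigmoid_activation(np.dot(w, X.T) + b)  # forward pass, shape (1, m)
        dw = (1 / m) * np.dot(X.T, (a - Y.T).T)     # gradient w.r.t. w
        db = (1 / m) * np.sum(a - Y.T)              # gradient w.r.t. b
        w -= learning_rate * dw.T
        b -= learning_rate * db
    return w, b

# two Gaussian blobs: class 0 around (-2, -2), class 1 around (2, 2)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, (50, 2)), rng.normal(2.0, 1.0, (50, 2))])
Y = np.vstack([np.zeros((50, 1)), np.ones((50, 1))])

w, b = train(X, Y)
probs = sigmoid_activation(np.dot(w, X.T) + b)
y_pred = (probs > 0.5).astype(float)                # threshold at 0.5, as in predict()
accuracy = float(np.mean(y_pred[0] == Y[:, 0]))
print(accuracy)
```

On well-separated blobs like these, the accuracy should come out close to 1.0; the decreasing costs recorded every 100 iterations in `model_predict` are the usual way to confirm the learning rate is sane.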