SoftMax regression

This post walks through implementing multi-class (softmax) classification in both Matlab and Python, covering the core steps: matrix manipulation, probability computation, the cost function, and the gradient. Comparing the two implementations side by side should help readers understand and master this approach to multi-class problems.


 

Matlab:

theta = reshape(theta, numClasses, inputSize);

numCases = size(data, 2);
% Build the one-hot ground-truth matrix; an equivalent one-liner is:
% groundTruth = full(sparse(labels, 1:numCases, 1));
labels = repmat(labels, numClasses, 1);
k = repmat((1:numClasses)', 1, numCases);       % numClasses x numCases
groundTruth = double(k == labels);              % one-hot encoding of the labels
thetagrad = zeros(numClasses, inputSize);
cost = 0;
z = theta*data;
z = z - max(max(z));                            % subtract the max to avoid overflow; p is unchanged
z = exp(z);                                     % element-wise exp, numClasses x numCases
p = z./repmat(sum(z,1), numClasses, 1);         % normalize over the classes: numClasses x numCases
cost = -mean(sum(groundTruth.*log(p), 1)) + sum(sum(theta.*theta)).*(lambda/2);

thetagrad = -(groundTruth - p)*(data')./numCases + theta.*lambda;   % numClasses x inputSize
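The max-subtraction trick above (`z = z - max(max(z))`) can be verified with a small NumPy sketch (the logit values below are made up for illustration): shifting all logits by a constant leaves the softmax probabilities unchanged but keeps `exp` from overflowing. Note the Matlab code uses a single global max; a per-column max (`z.max(axis=0)`) is more robust when columns differ wildly in scale.

```python
import numpy as np

logits = np.array([[1000.0,  999.0],
                   [1001.0,  998.0],
                   [1002.0, 1000.0]])  # numClasses x numCases; exp(1000) overflows float64

def softmax_naive(z):
    e = np.exp(z)                      # overflows to inf for large z
    return e / e.sum(axis=0)

def softmax_stable(z):
    z = z - z.max()                    # same global shift as the Matlab code
    e = np.exp(z)
    return e / e.sum(axis=0)

with np.errstate(over='ignore', invalid='ignore'):
    p_naive = softmax_naive(logits)
p_stable = softmax_stable(logits)

print(np.isfinite(p_naive).all())      # False: inf/inf gives nan
print(np.isfinite(p_stable).all())     # True
print(p_stable.sum(axis=0))            # each column sums to 1
```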

Python:

# theta.shape == (k, n+1): k classes, n features plus a bias term
# lenda is the regularization (weight decay) coefficient; alpha is the
# learning rate (unused here, since J only returns the cost and gradient)
# m, n, k are assumed to be defined in the enclosing scope
def J(X, classLabels, theta, alpha, lenda):
    bin_classLabels = label_binarize(classLabels, classes=np.unique(classLabels).tolist()).reshape((m, k))  # one-hot encode, (m, k)
    dataSet = np.concatenate((X, np.ones((m, 1))), axis=1).reshape((m, n+1)).T  # append a bias column, transpose to (n+1, m)
    theta_data = theta.dot(dataSet)                       # (k, m)
    theta_data = theta_data - np.max(theta_data)          # subtract the max to avoid overflow; probabilities unchanged
    prob_data = np.exp(theta_data) / np.sum(np.exp(theta_data), axis=0)  # softmax, (k, m)
    cost = (-1 / m) * np.sum(np.multiply(bin_classLabels, np.log(prob_data).T)) + (lenda / 2) * np.sum(np.square(theta))  # scalar
    grad = (-1 / m) * (dataSet.dot(bin_classLabels - prob_data.T)).T + lenda * theta  # (k, n+1)

    return cost, grad
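A quick way to sanity-check the cost and gradient in `J` is a numerical gradient check on tiny synthetic data. The sketch below is a self-contained variant (one-hot encoding via `np.eye` instead of sklearn's `label_binarize`; the toy shapes are arbitrary) that compares the analytic gradient against central finite differences:

```python
import numpy as np

def softmax_cost_grad(X, labels, theta, lenda):
    # X: (m, n), labels: (m,) integer classes, theta: (k, n+1); mirrors J above
    m, n = X.shape
    k = theta.shape[0]
    Y = np.eye(k)[labels]                                # one-hot, (m, k)
    D = np.concatenate((X, np.ones((m, 1))), axis=1).T   # bias column appended, (n+1, m)
    Z = theta.dot(D)
    Z = Z - Z.max()                                      # overflow guard
    P = np.exp(Z) / np.exp(Z).sum(axis=0)                # softmax, (k, m)
    cost = (-1 / m) * np.sum(Y * np.log(P).T) + (lenda / 2) * np.sum(theta ** 2)
    grad = (-1 / m) * (D.dot(Y - P.T)).T + lenda * theta
    return cost, grad

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
labels = np.array([0, 1, 2, 1, 0])
theta = rng.normal(scale=0.1, size=(3, 4))
cost, grad = softmax_cost_grad(X, labels, theta, lenda=0.01)

# central finite-difference approximation of the gradient
eps = 1e-6
num = np.zeros_like(theta)
num_flat = num.ravel()
for i in range(theta.size):
    t = theta.copy().ravel()
    t[i] += eps
    cp, _ = softmax_cost_grad(X, labels, t.reshape(theta.shape), 0.01)
    t[i] -= 2 * eps
    cm, _ = softmax_cost_grad(X, labels, t.reshape(theta.shape), 0.01)
    num_flat[i] = (cp - cm) / (2 * eps)

print(np.max(np.abs(num - grad)))  # should be tiny, near machine precision
```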

The line below implements the softmax probability

    p(y = j | x; theta) = exp(theta_j' * x) / sum_{l=1}^{k} exp(theta_l' * x)

prob_data = np.exp(theta_data) / np.sum(np.exp(theta_data), axis=0)
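The equivalence can be checked numerically for a single example: computing each class probability term by term from the formula matches the vectorized line (the weights and input below are illustrative, with the bias omitted for brevity):

```python
import numpy as np

theta = np.array([[ 0.5, -0.2],
                  [ 0.1,  0.3],
                  [-0.4,  0.7]])   # (k=3 classes, 2 features)
x = np.array([1.0, 2.0])

# vectorized, as in the line above
theta_data = theta.dot(x)          # (3,)
prob_vec = np.exp(theta_data) / np.sum(np.exp(theta_data), axis=0)

# term by term: p_j = exp(theta_j . x) / sum_l exp(theta_l . x)
denom = sum(np.exp(theta[l].dot(x)) for l in range(3))
prob_loop = np.array([np.exp(theta[j].dot(x)) / denom for j in range(3)])

print(np.allclose(prob_vec, prob_loop))  # True
```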

 

References:

https://www.cnblogs.com/Deep-Learning/p/7073744.html

https://blog.youkuaiyun.com/jiede1/article/details/76983938

 
