Deep Learning: Writing Functions with NumPy (Part 1)

This post walks through the math functions commonly used in deep learning, such as the sigmoid function and its derivative and the softmax function, and demonstrates matrix operations with NumPy, including normalizing rows and computing the L1 and L2 loss functions, building a solid mathematical foundation for understanding deep learning algorithms.


import numpy as np
# Compute the sigmoid function: sigmoid(x) = 1 / (1 + e^(-x))
def sigmoid(x):
    """
    Compute the sigmoid of x.
    Arguments:
    x -- A scalar or numpy array of any size
    Returns:
    s -- sigmoid(x)
    """
    s = 1 / (1 + np.exp(-x))
    return s
    
x=np.array([1,2,3])
sigmoid(x)
array([0.73105858, 0.88079708, 0.95257413])
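Since np.exp broadcasts, the same function also accepts a plain scalar (a small aside, not in the original post):

print(sigmoid(0))   # 0.5 -- the sigmoid curve crosses 0.5 at x = 0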
# Compute the derivative of the sigmoid function
def sigmoid_derivative(x):
    """
    Compute the gradient (also called the slope or derivative) of the sigmoid
    function with respect to its input x.
    You can store the output of the sigmoid function in a variable and then
    use it to calculate the gradient.
    Arguments:
    x -- A scalar or numpy array
    Returns:
    ds -- your computed gradient
    """
    s = sigmoid(x)
    ds = s * (1 - s)    # sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x))
    return ds
    
print("sigmoid_derivative=",str(sigmoid_derivative(x)))
sigmoid_derivative= [0.19661193 0.10499359 0.04517666]
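As a quick sanity check (an addition, not part of the original exercise), the analytic gradient can be compared against a centered finite difference:

h = 1e-5
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)   # centered difference approximation
print(np.allclose(numeric, sigmoid_derivative(x)))       # True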
# reshape is an ndarray method that reorganizes the array's data into a new shape
def image2vector(image):
    """
    Arguments:
    image -- A numpy array of shape (length, height, depth)
    Returns:
    v -- a vector of shape (length*height*depth, 1)
    """
    v = image.reshape(image.shape[0] * image.shape[1] * image.shape[2], 1)
    return v

image=np.random.randint(0,3,size=(2,3,4))
image
array([[[1, 0, 1, 2],
        [2, 0, 1, 2],
        [2, 1, 2, 1]],

       [[1, 0, 1, 0],
        [0, 1, 0, 1],
        [1, 1, 0, 2]]])
print('image2vector=',image2vector(image))
image2vector= [[1]
 [0]
 [1]
 [2]
 [2]
 [0]
 [1]
 [2]
 [2]
 [1]
 [2]
 [1]
 [1]
 [0]
 [1]
 [0]
 [0]
 [1]
 [0]
 [1]
 [1]
 [1]
 [0]
 [2]]
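The same flattening can be written more generically with -1, which lets NumPy infer the length from the other dimension (a minor variation, added here):

v = image.reshape(-1, 1)   # -1 infers length*height*depth automatically
print(v.shape)             # (24, 1) for the (2, 3, 4) image above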
# normalizeRows: scale each row of a matrix to unit length
def normalizeRows(x):
    """
    Implement a function that normalizes each row of the matrix x
    (to have unit length).
    Arguments:
    x -- a numpy matrix of shape (n, m)
    Returns:
    x -- the row-normalized numpy matrix; you are allowed to modify x
    """
    x_norm = np.linalg.norm(x, axis=1, keepdims=True)   # per-row L2 norms, shape (n, 1)
    x = x / x_norm                                      # broadcasts across columns
    return x

x=np.random.randint(0,3,size=(2,3))
x
array([[0, 0, 1],
       [0, 2, 2]])
print("normalizeRows(x)=",str(normalizeRows(x)))
normalizeRows(x)= [[0.         0.         1.        ]
 [0.         0.70710678 0.70710678]]
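One caveat worth noting (an addition, not in the original post): a row of all zeros has norm 0 and would trigger a divide-by-zero. A common safeguard is a small epsilon in the denominator:

def normalizeRows_safe(x, eps=1e-12):
    # identical to normalizeRows, but guards against all-zero rows
    x_norm = np.linalg.norm(x, axis=1, keepdims=True)
    return x / (x_norm + eps)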
# Compute the softmax function
def softmax(x):
    """
    Arguments:
    x -- a numpy matrix of shape (n, m)
    Returns:
    s -- a numpy matrix equal to the softmax of x, of shape (n, m)
    """
    x_exp = np.exp(x)
    x_sum = np.sum(x_exp, axis=1, keepdims=True)   # row sums, shape (n, 1)
    s = x_exp / x_sum                              # broadcasts across columns
    return s
softmax(x)
array([[0.21194156, 0.21194156, 0.57611688],
       [0.06337894, 0.46831053, 0.46831053]])
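For large inputs np.exp can overflow. The standard remedy (shown here as a sketch, not part of the original post) subtracts each row's maximum first, which leaves the result unchanged because softmax is shift-invariant:

def softmax_stable(x):
    # subtracting the row-wise max avoids overflow in np.exp
    shifted = x - np.max(x, axis=1, keepdims=True)
    x_exp = np.exp(shifted)
    return x_exp / np.sum(x_exp, axis=1, keepdims=True)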
# Differences between the multiplication operations
# np.dot performs matrix multiplication (inner product) on arrays
import time
x1=[1,2,3,4]
x2=[1,2,4,5]
tic=time.process_time()
dot=np.dot(x1,x2)
toc=time.process_time()
print("dot="+str(dot)+"\n---computation time="+str(1000*(toc-tic))+"ms")
dot=37
---computation time=0.0ms
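The timing above is too small to be meaningful on a 4-element list. On larger arrays, vectorized np.dot is dramatically faster than an explicit Python loop; a rough comparison (a sketch with hypothetical array names a and b; exact numbers depend on the machine):

a = np.random.rand(1000000)
b = np.random.rand(1000000)
tic = time.process_time()
loop_dot = sum(a[i] * b[i] for i in range(len(a)))   # pure-Python loop
toc = time.process_time()
print("loop: " + str(1000 * (toc - tic)) + "ms")
tic = time.process_time()
vec_dot = np.dot(a, b)                               # vectorized
toc = time.process_time()
print("np.dot: " + str(1000 * (toc - tic)) + "ms")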
# np.outer computes the outer product of two vectors
tic=time.process_time()
outer=np.outer(x1,x2)
toc=time.process_time()
print("outer="+str(outer)+"\n---computation time="+str(1000*(toc-tic))+"ms")
outer=[[ 1  2  4  5]
 [ 2  4  8 10]
 [ 3  6 12 15]
 [ 4  8 16 20]]
---computation time=0.0ms
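Concretely, entry (i, j) of the outer product is x1[i] * x2[j], so two length-4 vectors yield a 4x4 matrix (a quick check, added here):

print(outer.shape)                     # (4, 4)
print(outer[2, 3] == x1[2] * x2[3])    # True: 3 * 5 = 15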
# np.multiply multiplies arrays/matrices elementwise; the output has the same shape as the inputs
tic=time.process_time()
multiply=np.multiply(x1,x2)
toc=time.process_time()
print("multiply="+str(multiply)+"\n---computation time="+str(1000*(toc-tic))+"ms")
multiply=[ 1  4 12 20]
---computation time=0.0ms
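For NumPy arrays, np.multiply and the * operator perform the same elementwise operation (a quick equivalence check, added here; note x1 and x2 are still plain lists at this point, hence np.asarray):

print(np.array_equal(np.multiply(x1, x2), np.asarray(x1) * np.asarray(x2)))   # True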
# The * (star) operator on NumPy arrays also multiplies elementwise.
# Note: * is not defined between two plain Python lists, so build arrays first.
x1 = np.arange(2, 5.0)   # array([2., 3., 4.])
x2 = np.arange(3.0)      # array([0., 1., 2.])
tic=time.process_time()
star=x1*x2
toc=time.process_time()
print("star_multiply="+str(star)+"\n---computation time="+str(1000*(toc-tic))+"ms")
star_multiply=[0. 3. 8.]
---computation time=0.0ms
# On 2-D arrays, * is still elementwise -- it is NOT matrix multiplication
x1 = np.array([[1, 2], [3, 4]])
x2 = np.array([[5, 6], [4, 7]])
tic=time.process_time()
star=x1*x2
toc=time.process_time()
print("star_dot="+str(star)+"\n---computation time="+str(1000*(toc-tic))+"ms")
star_dot=[[ 5 12]
 [12 28]]
---computation time=0.0ms
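The * operator also broadcasts shapes that are compatible but not identical (an extra illustration, not in the original post): a (2, 1) column times a (3,) row yields a (2, 3) result.

col = np.array([[1.0], [2.0]])        # shape (2, 1)
row = np.array([10.0, 20.0, 30.0])    # shape (3,)
print(col * row)                      # shape (2, 3): each entry of col scales the row vector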
def L1(yhat, y):
    """
    Arguments:
    yhat -- vector of size m (predicted labels)
    y -- vector of size m (true labels)
    Returns:
    loss -- the value of the L1 loss function
    """
    loss = np.sum(np.abs(y - yhat))
    return loss
yhat=np.array([0,1,0,2])
y=np.array([0.0,2,1,0])
print("L1=",str(L1(yhat,y)))

L1= 4.0
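In formula form, L1(yhat, y) = sum_i |y_i - yhat_i|; an equivalent one-liner (just a restatement of the same computation):

print(np.abs(y - yhat).sum())   # 4.0, same as L1(yhat, y)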

def L2(yhat, y):
    """
    Arguments:
    yhat -- vector of size m (predicted labels)
    y -- vector of size m (true labels)
    Returns:
    loss -- the value of the L2 loss function
    """
    loss = np.dot(yhat - y, yhat - y)   # dot of a vector with itself = sum of squared differences
    return loss
print("L2=",str(L2(yhat,y)))

L2= 6.0
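Likewise L2(yhat, y) = sum_i (y_i - yhat_i)^2, and np.dot of a vector with itself is exactly that sum of squares; an equivalent form (added for comparison):

print(np.sum(np.square(yhat - y)))   # 6.0, same as L2(yhat, y)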

Reference: Andrew Ng's "Deep Learning" course on NetEase Cloud Classroom (网易云课堂)
