Andrew Ng Machine Learning Specialization, Course 2, Programming Assignment 1: Neural Networks for Binary Classification

This post walks through using the Keras Sequential model and Dense layers: handling a single example (Exercise 1), then extending to a full matrix of examples (the for-loop implementation of Exercise 2 versus the vectorization of Exercise 3). The focus is on the sigmoid activation function and optimizing the layer computation.


Exercise 1

Below, use the Keras Sequential model and Dense layers with sigmoid activations to construct the network described above.

import tensorflow as tf
from tensorflow.keras import Sequential

model = Sequential(
    [
        tf.keras.Input(shape=(400,)),    # specify input size
        ### START CODE HERE ###
        tf.keras.layers.Dense(units=25, activation='sigmoid'),
        tf.keras.layers.Dense(units=15, activation='sigmoid'),
        tf.keras.layers.Dense(units=1,  activation='sigmoid'),
        ### END CODE HERE ###
    ], name = "my_model"
)
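
As a quick sanity check (a minimal sketch, not part of the graded cell), printing the model summary confirms the layer shapes and parameter counts this architecture implies:

# Optional: inspect layer output shapes and parameter counts.
model.summary()
# Expected trainable parameters per Dense layer:
#   layer 1: 400*25 + 25 = 10025
#   layer 2:  25*15 + 15 =   390
#   layer 3:  15*1  +  1  =    16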

Exercise 2

Below, build a dense layer subroutine. The example in lecture utilized a for loop to visit each unit (j) in the layer and perform the dot product of the weights for that unit (W[:,j]) and sum the bias for the unit (b[j]) to form z. An activation function g(z) is then applied to that result. This section will not utilize some of the matrix operations described in the optional lectures. These will be explored in a later section.

import numpy as np

def my_dense(a_in, W, b, g):
    """
    Computes dense layer
    Args:
      a_in (ndarray (n, )) : Data, 1 example
      W    (ndarray (n,j)) : Weight matrix, n features per unit, j units
      b    (ndarray (j, )) : bias vector, j units
      g    activation function (e.g. sigmoid, relu..)
    Returns
      a_out (ndarray (j,))  : j units
    """
    units = W.shape[1]
    a_out = np.zeros(units)
### START CODE HERE ###
    for i in range(units):
        wi = W[:, i]  # extract column i of W: the weight vector for unit i
        a_out[i] = g(np.dot(a_in, wi) + b[i])
        # Each unit takes the dot product of its weights with the full input,
        # adds its own bias, and applies the activation g to produce its
        # output; the outputs of all units together form this layer's output.
### END CODE HERE ###
    return a_out
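
A minimal usage sketch (the sigmoid helper and the toy weights below are illustrative assumptions, not values from the assignment), showing my_dense on a single 2-feature example feeding a 3-unit layer:

import numpy as np

def sigmoid(z):
    # Logistic activation, used as g in my_dense.
    return 1 / (1 + np.exp(-z))

x = np.array([0.5, -1.0])              # one example, 2 features, shape (2,)
W_tmp = np.array([[1.0, -2.0, 0.5],
                  [0.3,  0.8, -1.0]])  # shape (2, 3): 3 units
b_tmp = np.array([0.1, 0.2, -0.3])     # shape (3,)

print(my_dense(x, W_tmp, b_tmp, sigmoid))  # three activations in (0, 1)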

Exercise 3

Below, compose a new my_dense_v subroutine that performs the layer calculations for a matrix of examples. This will utilize np.matmul().

Note: This function is not graded because it is discussed in the optional lectures on vectorization. If you didn’t go through them, feel free to click the hints below the expected code to see the code. You can also submit the notebook even with a blank answer here.

def my_dense_v(A_in, W, b, g):
    """
    Computes dense layer
    Args:
      A_in (ndarray (m,n)) : Data, m examples, n features each
      W    (ndarray (n,j)) : Weight matrix, n features per unit, j units
      b    (ndarray (1,j)) : bias vector, j units  
      g    activation function (e.g. sigmoid, relu..)
    Returns
      A_out (tf.Tensor or ndarray (m,j)) : m examples, j units
    """
### START CODE HERE ### 
    # A single matrix multiply computes z for all m examples at once;
    # broadcasting adds the (1,j) bias row to every example.
    A_out = g(np.matmul(A_in, W) + b)
### END CODE HERE ###
    return A_out
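
As a quick consistency check (illustrative values, reusing the hypothetical sigmoid, W_tmp, and b_tmp from the sketch above), the vectorized layer should agree with the Exercise 2 loop applied row by row:

A_in = np.array([[0.5, -1.0],
                 [2.0,  0.0]])     # 2 examples, 2 features each
b_row = b_tmp.reshape(1, -1)       # (1, 3) row vector, as my_dense_v expects

A_out = my_dense_v(A_in, W_tmp, b_row, sigmoid)
print(A_out.shape)                 # (2, 3): one row of activations per example

for k in range(A_in.shape[0]):
    # Each row should match the single-example implementation.
    assert np.allclose(A_out[k], my_dense(A_in[k], W_tmp, b_tmp, sigmoid))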