Udacity Machine Learning Notes: Deep Learning (2)


Perceptrons

  1. A perceptron, or neuron, is the basic unit of a neural network. It makes a decision based on its inputs; for example, given a student's grades and test score, a perceptron decides whether that student is admitted to a university. What rule does the perceptron use to compare these two values and reach a conclusion? Does it care more about the grades or about the test score? This is where the concept of weights comes in.
  2. Introduce two weights, one multiplied by the grades and one by the test score; the larger a weight, the more important the corresponding input. The weights start out random, and during training the perceptron keeps adjusting them based on the error of the previous classification, gradually learning which scores lead to admission. In mathematical notation:
    $w_{grades} \cdot x_{grades} + w_{test} \cdot x_{test}$
    With $m$ inputs, this generalizes to:
    $\sum_{i=1}^{m} w_{i} \cdot x_{i}$
  3. Finally, the weighted sum above is turned into an output by feeding it into an activation function. The simplest activation function is the Heaviside step function:
    $f(h) = \begin{cases} 0 & \text{if } h < 0, \\ 1 & \text{if } h \ge 0. \end{cases}$
    Substituting the weighted sum into this function and adding a bias term gives the perceptron formula:
    $f(x_{1}, x_{2}, \ldots, x_{m}) = \begin{cases} 0 & \text{if } b + \sum w_{i} \cdot x_{i} < 0, \\ 1 & \text{if } b + \sum w_{i} \cdot x_{i} \ge 0. \end{cases}$
  4. From the perceptron formula we can work out a set of weights and a bias suitable for an AND neuron. For example, set $w_{1}$ and $w_{2}$ to 1 and $b$ to -2; then only when both inputs are 1 do we get $b+\sum w_{i} \cdot x_{i} \ge 0$, and in that case the expression equals exactly 0. The code below uses a slightly different but equally valid set of values.
import pandas as pd


# Set weight1, weight2, and bias so that the perceptron acts as an AND gate
weight1 = 1.5
weight2 = 1.0
bias = -2.0

test_inputs = [(0,0), (0,1), (1,0), (1,1)]
correct_outputs = [False, False, False, True]
outputs = []

for test_input, correct_output in zip(test_inputs, correct_outputs):
    linear_combination = weight1*test_input[0] + weight2*test_input[1]+bias
    output = int(linear_combination >= 0)
    is_correct_string = 'Yes' if output == correct_output else 'No'
    outputs.append([test_input[0], test_input[1], linear_combination, output, is_correct_string])

num_wrong = len([output[4] for output in outputs if output[4] == 'No'])
output_frame = pd.DataFrame(outputs, columns=['Input 1', 'Input 2', 'Linear Combination', 'Activation Output', 'Is Correct'])
if not num_wrong:
    print('Nice! You got it all correct. \n')
else:
    print('You got {} wrong. Keep trying! \n'.format(num_wrong))
print(output_frame.to_string(index=False))

By changing weight1, weight2, bias, and correct_outputs accordingly, you can obtain OR and NOT neurons. XOR, however, is not linearly separable, so a single perceptron cannot represent it; it requires combining several perceptrons into a small multi-layer network. One possible set of OR values is sketched below.
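As an extra illustration beyond the original exercise, here is a minimal sketch of one possible OR perceptron. It assumes the test_inputs list from the code above is still in scope, and the particular weight and bias values are just one of many valid choices.

# Minimal OR perceptron sketch; the weights and bias are one valid choice, not the only one
or_weight1, or_weight2, or_bias = 1.0, 1.0, -0.5
or_correct_outputs = [False, True, True, True]

for (x1, x2), expected in zip(test_inputs, or_correct_outputs):
    linear_combination = or_weight1 * x1 + or_weight2 * x2 + or_bias
    output = int(linear_combination >= 0)
    print((x1, x2), output, 'Yes' if bool(output) == expected else 'No')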

Gradient Descent

  1. Gradient descent is a method for finding the weights and biases that minimize the cost function. Define the cost function as:
    $C(w,b) \equiv \frac{1}{2n} \sum_{x} \| y(x)-a \|^{2}$
    Writing $v=(w,b)$, a small change in $v$ changes the cost by approximately
    $\Delta C \approx \nabla C \cdot \Delta v$
    and choosing
    $\Delta v = - \eta \nabla C$
    where $\eta$ is called the learning rate, a small positive number,
    gives the update rule:
    $v \rightarrow v' = v - \eta \nabla C$
    Written out for each weight and bias, this is:
    $w_{k} \rightarrow w'_{k} = w_{k} - \eta \frac{\partial C}{\partial w_{k}}$
    $b_{l} \rightarrow b'_{l} = b_{l} - \eta \frac{\partial C}{\partial b_{l}}$
    Define the error:
    $\delta_{j}^{l} \equiv \frac{\partial C}{\partial z_{j}^{l}}$
    where $z_{j}^{l} = \sum_{k} w_{jk}^{l} a_{k}^{l-1} + b_{j}^{l}$ is the input to the $j$-th neuron in layer $l$, a weighted sum over the outputs $a_{k}^{l-1}$ of all neurons in layer $(l-1)$.
    From this, the following four equations (the backpropagation equations) can be derived:
    $\delta^{L} = \nabla_{a} C \odot \sigma'(z^{L})$
    $\delta^{l} = ((w^{l+1})^{T} \delta^{l+1}) \odot \sigma'(z^{l})$
    $\frac{\partial C}{\partial b_{j}^{l}} = \delta_{j}^{l}$
    $\frac{\partial C}{\partial w_{jk}^{l}} = a_{k}^{l-1} \delta_{j}^{l}$
    A short worked derivation of the output-layer error term, connecting these equations to the code below, follows this list.
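To connect these equations to the code below (this short derivation is my own addition, but it follows directly from the definitions above): for a single training example with one sigmoid output $a=\sigma(z)$ and quadratic cost $C=\frac{1}{2}(y-a)^{2}$, we have $\nabla_{a}C = a-y$, so the first equation gives $\delta^{L} = (a-y)\,\sigma'(z)$ and the weight gradient is $\frac{\partial C}{\partial w_{k}} = x_{k}\,\delta^{L}$. The descent step is therefore $\Delta w_{k} = -\eta\,x_{k}\,(a-y)\,\sigma'(z) = \eta\,(y-a)\,\sigma'(z)\,x_{k}$, which is exactly the quantity del_w = learnrate * error_term * x computed in the code, with the minus sign already absorbed into error = y - nn_output.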

A simple example, with a single output:

import numpy as np

def sigmoid(x):
    """
    Calculate sigmoid
    """
    return 1/(1+np.exp(-x))

def sigmoid_prime(x):
    """
    Derivative of the sigmoid function
    """
    return sigmoid(x) * (1 - sigmoid(x))

learnrate = 0.5
x = np.array([1, 2, 3, 4])
y = np.array(0.5)

# Initial weights
w = np.array([0.5, -0.5, 0.3, 0.1])

### Calculate one gradient descent step for each weight
# Calculate the node's linear combination of inputs and weights
h = np.dot(x, w)

# Calculate output of neural network
nn_output = sigmoid(h)

# Calculate error of neural network
error = y - nn_output

# Calculate the error term
#       Remember, this requires the output gradient, which we haven't
#       specifically added a variable for.
error_term = error * sigmoid_prime(h)

# Calculate change in weights
del_w = learnrate * error_term * x

print('Neural Network output:')
print(nn_output)
print('Amount of Error:')
print(error)
print('Change in Weights:')
print(del_w)
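The code above computes only a single weight change. Below is a minimal sketch, my own addition, of applying that step repeatedly; it reuses x, y, w, learnrate, sigmoid, and sigmoid_prime from the example above.

# Sketch: iterate the gradient descent step and watch the error shrink
for step in range(20):
    h = np.dot(x, w)
    nn_output = sigmoid(h)
    error = y - nn_output
    error_term = error * sigmoid_prime(h)
    w = w + learnrate * error_term * x  # apply one descent step
    if step % 5 == 0:
        print('step {}: output {:.4f}, error {:.4f}'.format(step, float(nn_output), float(error)))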
  1. The gradient descent algorithm:
    (1) Input $x$: set the corresponding activation $a^{1}$ for the input layer (the weights and biases are also initialized at this point);
    (2) Feedforward: for each layer $l=2,3,\ldots,L$, compute $z^{l}=w^{l}a^{l-1}+b^{l}$ and
    $a^{l}=\sigma(z^{l})$;
    (3) Compute the output error $\delta^{L}=\nabla_{a}C \odot \sigma'(z^{L})$;
    (4) Backpropagate the error: for each layer $l=L-1, L-2, \ldots, 2$, compute
    $\delta^{l} = ((w^{l+1})^{T} \delta^{l+1}) \odot \sigma'(z^{l})$;
    (5) Output the partial derivatives of the cost function with respect to the weights and biases:
    $\frac{\partial C}{\partial w_{jk}^{l}} = a_{k}^{l-1} \delta_{j}^{l}$
    $\frac{\partial C}{\partial b_{j}^{l}} = \delta_{j}^{l}$
    The example below predicts whether a student is admitted to graduate school, using the student's GRE and GPA scores and the rank of their undergraduate school as inputs. The training data can be downloaded from https://stats.idre.ucla.edu/stat/data/binary.csv.
# data_prep.py
import numpy as np
import pandas as pd

admissions = pd.read_csv('binary.csv')

# Make dummy variables for rank
data = pd.concat([admissions, pd.get_dummies(admissions['rank'], prefix='rank')], axis=1)
data = data.drop('rank', axis=1)

# Standardize features
for field in ['gre', 'gpa']:
    mean, std = data[field].mean(), data[field].std()
    data.loc[:,field] = (data[field]-mean)/std
    
# Split off random 10% of the data for testing
np.random.seed(21)
sample = np.random.choice(data.index, size=int(len(data)*0.9), replace=False)
data, test_data = data.loc[sample], data.drop(sample)

# Split into features and targets
features, targets = data.drop('admit', axis=1), data['admit']
features_test, targets_test = test_data.drop('admit', axis=1), test_data['admit']
# solution.py
import numpy as np
from data_prep import features, targets, features_test, targets_test

np.random.seed(21)

def sigmoid(x):
    """
    Calculate sigmoid
    """
    return 1 / (1 + np.exp(-x))


# Hyperparameters
n_hidden = 2  # number of hidden units
epochs = 900
learnrate = 0.005

n_records, n_features = features.shape
last_loss = None
# Initialize weights
weights_input_hidden = np.random.normal(scale=1 / n_features ** .5,
                                        size=(n_features, n_hidden))
weights_hidden_output = np.random.normal(scale=1 / n_features ** .5,
                                         size=n_hidden)

for e in range(epochs):
    del_w_input_hidden = np.zeros(weights_input_hidden.shape)
    del_w_hidden_output = np.zeros(weights_hidden_output.shape)
    for x, y in zip(features.values, targets):
        ## Forward pass ##
        # Calculate the output
        hidden_input = np.dot(x, weights_input_hidden)
        hidden_output = sigmoid(hidden_input)

        output = sigmoid(np.dot(hidden_output,
                                weights_hidden_output))

        ## Backward pass ##
        # Calculate the network's prediction error
        error = y - output

        # Calculate error term for the output unit
        output_error_term = error * output * (1 - output)

        ## propagate errors to hidden layer

        # Calculate the hidden layer's contribution to the error
        hidden_error = np.dot(output_error_term, weights_hidden_output)

        # Calculate the error term for the hidden layer
        hidden_error_term = hidden_error * hidden_output * (1 - hidden_output)

        # Update the change in weights
        del_w_hidden_output += output_error_term * hidden_output
        del_w_input_hidden += hidden_error_term * x[:, None]

    # Update weights
    weights_input_hidden += learnrate * del_w_input_hidden / n_records
    weights_hidden_output += learnrate * del_w_hidden_output / n_records

    # Printing out the mean square error on the training set
    if e % (epochs / 10) == 0:
        # Compute the loss over the whole training set, not just the last sample
        hidden_output = sigmoid(np.dot(features, weights_input_hidden))
        out = sigmoid(np.dot(hidden_output,
                             weights_hidden_output))
        loss = np.mean((out - targets) ** 2)

        if last_loss and last_loss < loss:
            print("Train loss: ", loss, "  WARNING - Loss Increasing")
        else:
            print("Train loss: ", loss)
        last_loss = loss

# Calculate accuracy on test data
hidden = sigmoid(np.dot(features_test, weights_input_hidden))
out = sigmoid(np.dot(hidden, weights_hidden_output))
predictions = out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))
