Udacity Machine Learning Notes: Deep Learning (2)
Perceptrons
- A perceptron, or neuron, is the basic unit of a neural network. It makes a decision about its inputs: for example, given a student's grades and test score, a perceptron decides whether that student is admitted to a university. What rule does the perceptron use to compare these two values and reach a conclusion? Does it care more about the grades or about the test score? This is where the concept of weights comes in.
- Introduce two weights, one multiplied with the grades and one with the test score; the larger a weight, the more important the corresponding score. The weights start out random, and during training the perceptron learns by repeatedly adjusting them based on the error of the previous classification, until it discovers which scores lead to admission. In mathematical notation:
$$w_{grades} \cdot x_{grades} + w_{test} \cdot x_{test}$$
If there are $m$ inputs, this generalizes to:
$$\sum_{i=1}^{m} w_{i} \cdot x_{i}$$
- Finally, this weighted sum is turned into an output by passing it through an activation function. One of the simplest activation functions is the Heaviside step function:
$$f(h) = \begin{cases} 0 & \text{if } h < 0, \\ 1 & \text{if } h \ge 0. \end{cases}$$
Substituting the weighted sum into this function and adding a bias term gives the perceptron formula:
$$f(x_{1}, x_{2}, \ldots, x_{m}) = \begin{cases} 0 & \text{if } b + \sum w_{i} \cdot x_{i} < 0, \\ 1 & \text{if } b + \sum w_{i} \cdot x_{i} \ge 0. \end{cases}$$
- From the perceptron formula we can work out a set of weights and a bias suitable for an AND neuron. For example, set $w_{1}$ and $w_{2}$ to 1 and $b$ to -2; then $b + \sum w_{i} \cdot x_{i} \ge 0$ holds only when both inputs are 1, in which case the expression equals exactly 0.
```python
import pandas as pd

# Weights and bias for an AND perceptron
weight1 = 1.5
weight2 = 1.0
bias = -2.0

# Inputs and the expected AND outputs
test_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
correct_outputs = [False, False, False, True]
outputs = []

for test_input, correct_output in zip(test_inputs, correct_outputs):
    linear_combination = weight1 * test_input[0] + weight2 * test_input[1] + bias
    output = int(linear_combination >= 0)
    is_correct_string = 'Yes' if output == correct_output else 'No'
    outputs.append([test_input[0], test_input[1], linear_combination, output, is_correct_string])

num_wrong = len([output[4] for output in outputs if output[4] == 'No'])
output_frame = pd.DataFrame(outputs, columns=['Input 1', 'Input 2', 'Linear Combination', 'Activation Output', 'Is Correct'])
if not num_wrong:
    print('Nice! You got it all correct.\n')
else:
    print('You got {} wrong. Keep trying!\n'.format(num_wrong))
print(output_frame.to_string(index=False))
```
By modifying weight1, weight2, and bias, together with correct_outputs, you can obtain OR and NOT neurons in the same way, as sketched below. XOR, however, is not linearly separable, so no single perceptron can compute it; it takes a small multi-layer network built from AND, OR, and NOT neurons.
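For concreteness, here is a minimal sketch of OR and NOT neurons. The specific weight and bias values are one possible choice of mine, not the only ones that work:

```python
# One possible OR neuron: any single active input clears the bias
or_w1, or_w2, or_b = 1.0, 1.0, -0.5
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((x1, x2), '->', int(or_w1 * x1 + or_w2 * x2 + or_b >= 0))

# One possible NOT neuron on the second input: a negative weight
# flips the decision, and the first input is ignored entirely
not_w1, not_w2, not_b = 0.0, -1.0, 0.5
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((x1, x2), '->', int(not_w1 * x1 + not_w2 * x2 + not_b >= 0))
```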
Gradient Descent
- Gradient descent is a method for finding the weights and biases that minimize the cost function. Define the cost function as:
$$C(w,b) \equiv \frac{1}{2n} \sum_{x} \| y(x) - a \|^{2}$$
Let $v = (w, b)$. Then:
$$\Delta C \approx \nabla C \cdot \Delta v$$
$$\Delta v = -\eta \nabla C$$
Here $\eta$ is called the learning rate, a small positive number. With this choice of $\Delta v$, we get $\Delta C \approx -\eta \|\nabla C\|^{2} \le 0$, so the cost keeps decreasing.
This yields the update rule:
$$v \rightarrow v' = v - \eta \nabla C$$
Expanded componentwise:
$$w_{k} \rightarrow w'_{k} = w_{k} - \eta \frac{\partial C}{\partial w_{k}}$$
$$b_{l} \rightarrow b'_{l} = b_{l} - \eta \frac{\partial C}{\partial b_{l}}$$
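To make the update rule concrete, here is a minimal sketch (my own illustrative example, not from the course) applying $v \rightarrow v - \eta \nabla C$ to the one-dimensional cost $C(w) = w^{2}$, whose gradient is $2w$:

```python
# Gradient descent on C(w) = w**2, where dC/dw = 2*w.
# The learning rate and starting point are arbitrary illustrative choices.
eta = 0.1
w = 5.0
for step in range(50):
    grad = 2 * w           # gradient of C at the current w
    w = w - eta * grad     # the update rule v -> v' = v - eta * grad C
print(w)  # approaches 0, the minimizer of C
```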
Define the error:
$$\delta_{j}^{l} \equiv \frac{\partial C}{\partial z_{j}^{l}}$$
where $z_{j}^{l} = \sum_{k} w_{jk}^{l} a_{k}^{l-1} + b_{j}^{l}$ is the weighted input to the $j$-th neuron in layer $l$ of the network, obtained by summing over the outputs $a_{k}^{l-1}$ of all neurons in layer $(l-1)$.
From this, the following four equations can be derived:
$$\delta^{L} = \nabla_{a} C \odot \sigma'(z^{L})$$
$$\delta^{l} = ((w^{l+1})^{T} \delta^{l+1}) \odot \sigma'(z^{l})$$
$$\frac{\partial C}{\partial b_{j}^{l}} = \delta_{j}^{l}$$
$$\frac{\partial C}{\partial w_{jk}^{l}} = a_{k}^{l-1} \delta_{j}^{l}$$
A simple example, with a single output:
```python
import numpy as np

def sigmoid(x):
    """
    Calculate sigmoid
    """
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    """
    Derivative of the sigmoid function
    """
    return sigmoid(x) * (1 - sigmoid(x))

learnrate = 0.5
x = np.array([1, 2, 3, 4])
y = np.array(0.5)

# Initial weights
w = np.array([0.5, -0.5, 0.3, 0.1])

### Calculate one gradient descent step for each weight
# Calculate the node's linear combination of inputs and weights
h = np.dot(x, w)
# Calculate output of neural network
nn_output = sigmoid(h)
# Calculate error of neural network
error = y - nn_output
# Calculate the error term
# Remember, this requires the output gradient, which we haven't
# specifically added a variable for.
error_term = error * sigmoid_prime(h)
# Calculate change in weights
del_w = learnrate * error_term * x

print('Neural Network output:')
print(nn_output)
print('Amount of Error:')
print(error)
print('Change in Weights:')
print(del_w)
```
- The gradient descent algorithm:
(1) Input $x$: set the corresponding activation $a^{1}$ for the input layer (the weights and biases are assumed to have been initialized beforehand, e.g. randomly);
(2) Feedforward: for each layer $l = 2, 3, \ldots, L$, compute $z^{l} = w^{l} a^{l-1} + b^{l}$ and $a^{l} = \sigma(z^{l})$;
(3) Output error: compute $\delta^{L} = \nabla_{a} C \odot \sigma'(z^{L})$;
(4) Backpropagate the error: for each layer $l = L-1, L-2, \ldots, 2$, compute
$$\delta^{l} = ((w^{l+1})^{T} \delta^{l+1}) \odot \sigma'(z^{l})$$
(5) Output the partial derivatives of the cost function with respect to the weights and biases:
$$\frac{\partial C}{\partial w_{jk}^{l}} = a_{k}^{l-1} \delta_{j}^{l}$$
$$\frac{\partial C}{\partial b_{j}^{l}} = \delta_{j}^{l}$$
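Before the full admissions example below, here is a minimal numpy sketch of steps (2) through (5) for a tiny 2-3-1 network with sigmoid activations and a squared-error cost. All shapes and values here are illustrative choices of mine, not from the course material:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1 - sigmoid(z))

# Tiny 2-3-1 network with fixed illustrative values
x = np.array([0.5, -0.2])            # input activation a^1
y = np.array([1.0])                  # target
W2 = np.array([[0.1, 0.4],
               [-0.3, 0.2],
               [0.25, -0.1]])        # weights into layer 2 (3x2)
b2 = np.zeros(3)
W3 = np.array([[0.3, -0.2, 0.5]])    # weights into layer 3 (1x3)
b3 = np.zeros(1)

# Step (2): feedforward
z2 = W2 @ x + b2
a2 = sigmoid(z2)
z3 = W3 @ a2 + b3
a3 = sigmoid(z3)

# Step (3): output error, delta^L = grad_a C (.) sigma'(z^L);
# for C = 0.5 * ||y - a||^2, grad_a C = (a - y)
delta3 = (a3 - y) * sigmoid_prime(z3)

# Step (4): backpropagate, delta^l = (W^{l+1}.T delta^{l+1}) (.) sigma'(z^l)
delta2 = (W3.T @ delta3) * sigmoid_prime(z2)

# Step (5): gradients of the cost
dC_db3, dC_db2 = delta3, delta2      # dC/db^l_j = delta^l_j
dC_dW3 = np.outer(delta3, a2)        # dC/dw^l_{jk} = a^{l-1}_k * delta^l_j
dC_dW2 = np.outer(delta2, x)

print(dC_dW2, dC_db2, dC_dW3, dC_db3, sep='\n')
```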
The following example uses a student's GRE and GPA scores, together with the rank of their undergraduate school, as inputs to predict whether the student is admitted to graduate school. The training data can be downloaded from https://stats.idre.ucla.edu/stat/data/binary.csv.
```python
# data_prep.py
import numpy as np
import pandas as pd

admissions = pd.read_csv('binary.csv')

# Make dummy variables for rank
data = pd.concat([admissions, pd.get_dummies(admissions['rank'], prefix='rank')], axis=1)
data = data.drop('rank', axis=1)

# Standardize features
for field in ['gre', 'gpa']:
    mean, std = data[field].mean(), data[field].std()
    data.loc[:, field] = (data[field] - mean) / std

# Split off random 10% of the data for testing
np.random.seed(21)
sample = np.random.choice(data.index, size=int(len(data) * 0.9), replace=False)
data, test_data = data.loc[sample], data.drop(sample)

# Split into features and targets
features, targets = data.drop('admit', axis=1), data['admit']
features_test, targets_test = test_data.drop('admit', axis=1), test_data['admit']
```
```python
# solution.py
import numpy as np
from data_prep import features, targets, features_test, targets_test

np.random.seed(21)

def sigmoid(x):
    """
    Calculate sigmoid
    """
    return 1 / (1 + np.exp(-x))

# Hyperparameters
n_hidden = 2  # number of hidden units
epochs = 900
learnrate = 0.005

n_records, n_features = features.shape
last_loss = None

# Initialize weights
weights_input_hidden = np.random.normal(scale=1 / n_features ** .5,
                                        size=(n_features, n_hidden))
weights_hidden_output = np.random.normal(scale=1 / n_features ** .5,
                                         size=n_hidden)

for e in range(epochs):
    del_w_input_hidden = np.zeros(weights_input_hidden.shape)
    del_w_hidden_output = np.zeros(weights_hidden_output.shape)
    for x, y in zip(features.values, targets):
        ## Forward pass ##
        # Calculate the output
        hidden_input = np.dot(x, weights_input_hidden)
        hidden_output = sigmoid(hidden_input)
        output = sigmoid(np.dot(hidden_output,
                                weights_hidden_output))

        ## Backward pass ##
        # Calculate the network's prediction error
        error = y - output

        # Calculate error term for the output unit
        output_error_term = error * output * (1 - output)

        ## propagate errors to hidden layer
        # Calculate the hidden layer's contribution to the error
        hidden_error = np.dot(output_error_term, weights_hidden_output)

        # Calculate the error term for the hidden layer
        hidden_error_term = hidden_error * hidden_output * (1 - hidden_output)

        # Accumulate the change in weights
        del_w_hidden_output += output_error_term * hidden_output
        del_w_input_hidden += hidden_error_term * x[:, None]

    # Update weights once per epoch
    weights_input_hidden += learnrate * del_w_input_hidden / n_records
    weights_hidden_output += learnrate * del_w_hidden_output / n_records

    # Printing out the mean square error on the training set
    if e % (epochs / 10) == 0:
        # Use the full training set here, not the last sample x
        hidden_output = sigmoid(np.dot(features, weights_input_hidden))
        out = sigmoid(np.dot(hidden_output,
                             weights_hidden_output))
        loss = np.mean((out - targets) ** 2)

        if last_loss and last_loss < loss:
            print("Train loss: ", loss, "  WARNING - Loss Increasing")
        else:
            print("Train loss: ", loss)
        last_loss = loss

# Calculate accuracy on test data
hidden = sigmoid(np.dot(features_test, weights_input_hidden))
out = sigmoid(np.dot(hidden, weights_hidden_output))
predictions = out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))
```