Deep Learning Notes: Linear Regression

Linear Regression

Linear regression is a statistical analysis method that uses regression analysis from mathematical statistics to determine the quantitative dependency between two or more variables; its general form is y = w'x + e. Here I briefly record my notes from learning PaddlePaddle. The example used in this article comes from Baidu AI Studio.
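Before turning to PaddlePaddle, the relation y = w'x + e can be illustrated with a minimal NumPy sketch (my own one-dimensional example, not part of the original article): fitting w and the intercept b by ordinary least squares.

```python
import numpy as np

# Hypothetical 1-D example: fit y = w*x + b by ordinary least squares.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.0, 5.0, 7.0, 9.0, 11.0])

# Stack a column of ones so the intercept b is learned alongside w.
A = np.stack([x, np.ones_like(x)], axis=1)
w, b = np.linalg.lstsq(A, y, rcond=None)[0]
print(w, b)  # this data follows y = 2x + 1 exactly, so w ≈ 2, b ≈ 1
```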

Example

Given 5 samples, each with 12 attributes, use linear regression to predict the output value for a specified input.
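As a quick sanity check on the data used below (by inspection, only the first of the 12 attributes varies, and the labels follow y = 2x + 1 — a pattern I inferred from the listing, not stated in the original), the arrays can be built programmatically:

```python
import numpy as np

# The 5 training samples vary only in their first attribute; the other 11 are zero.
x_first = np.arange(1.0, 6.0)                      # 1.0 .. 5.0
x_training = np.zeros((5, 12), dtype='float32')
x_training[:, 0] = x_first

# The labels follow y = 2x + 1, so a linear model can fit them exactly.
y_training = (2.0 * x_first + 1.0).reshape(-1, 1).astype('float32')
print(y_training.ravel())
```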

"""
This is a demo using PaddlePaddle to implement linear regression, so that it can
predict the output value for a given input
"""
import paddle.fluid as fluid
import numpy as np


def main():
    """
    :return:
    """

    """
    Generates the training data and test data, respectively.
    """
    x_training = np.array([[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                           [2.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                           [3.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                           [4.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                           [5.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]) \
        .astype('float32')
    y_training = np.array([[3.0], [5.0], [7.0], [9.0], [11.0]]).astype('float32')
    x_testing = np.array([[6.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]) \
        .astype('float32')

    """
    Generates the input layer, the neural network with only one hidden layer and the output layer,
    respectively. The shape of input layer equals the number of properties of each sample.
    """
    x = fluid.layers.data(name='x', shape=[12], dtype='float32')
    hidden = fluid.layers.fc(input=x, size=100, act='relu')
    network = fluid.layers.fc(input=hidden, size=1, act=None)
    y = fluid.layers.data(name='y', shape=[1], dtype='float32')

    """
    Defines the cost function and average value of cost function
    """
    cost = fluid.layers.square_error_cost(input=network, label=y)
    avg_cost = fluid.layers.mean(cost)

    """
    Clones the main program so that it can be used for prediction after training finishes.
    """
    test_program = fluid.default_main_program().clone(for_test=True)

    """
    Definition of optimization method.
    """
    optimizer = fluid.optimizer.SGDOptimizer(learning_rate=0.01)
    optimizer.minimize(avg_cost)

    """
    Creates the CPU executor for training and initializes the parameters of the neural network.
    """
    place = fluid.CPUPlace()
    exe = fluid.Executor(place)
    exe.run(program=fluid.default_startup_program())

    """
    Trains the neural network for 1000 iterations and prints the value of the cost
    function every 50 iterations.
    """
    for index in range(1000):
        cost_train = exe.run(program=fluid.default_main_program(),
                             feed={'x': x_training, 'y': y_training},
                             fetch_list=[avg_cost])
        if index % 50 == 0:
            print('No: %d, Cost: %0.8f' % (index, cost_train[0][0]))

    """
    Predicts the value with the trained neural network. Although the label is not needed
    for prediction, feeding a placeholder keeps the feed dict consistent.
    """
    res = exe.run(program=test_program,
                  feed={'x': x_testing, 'y': np.array([[0.0]]).astype('float32')},
                  fetch_list=[network])
    print('x = %.2f, y = %.8f' % (x_testing[0][0], res[0][0][0]))


if __name__ == '__main__':
    main()

Output:

No: 0, Cost: 62.55745697
No: 50, Cost: 0.00815332
No: 100, Cost: 0.00285165
No: 150, Cost: 0.00100128
No: 200, Cost: 0.00035226
No: 250, Cost: 0.00012405
No: 300, Cost: 0.00004372
No: 350, Cost: 0.00001541
No: 400, Cost: 0.00000544
No: 450, Cost: 0.00000192
No: 500, Cost: 0.00000068
No: 550, Cost: 0.00000024
No: 600, Cost: 0.00000009
No: 650, Cost: 0.00000003
No: 700, Cost: 0.00000002
No: 750, Cost: 0.00000001
No: 800, Cost: 0.00000001
No: 850, Cost: 0.00000001
No: 900, Cost: 0.00000001
No: 950, Cost: 0.00000000
x = 6.00, y = 12.99991131
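The prediction is very close to 13, matching the underlying relation y = 2x + 1. To make the mechanism transparent, here is a plain-NumPy gradient-descent sketch of the same fit (my own illustration of the idea, not the PaddlePaddle program above):

```python
import numpy as np

# One feature is enough here, since only the first attribute carries information.
x = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([[3.0], [5.0], [7.0], [9.0], [11.0]])

# Minimize mean squared error by gradient descent on w and b.
w, b = 0.0, 0.0
lr = 0.01
for _ in range(5000):
    pred = w * x + b
    grad_w = (2.0 * (pred - y) * x).mean()
    grad_b = (2.0 * (pred - y)).mean()
    w -= lr * grad_w
    b -= lr * grad_b

print(w * 6.0 + b)  # approaches 13.0, matching the prediction above
```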