Deep Learning|The Basics of Python and Linear Regression
Hello everyone! This is my first time writing an article on 优快云. Today I will show you something about Python and linear regression.
1.The yield keyword
yield is used inside a loop to build a generator. When the program reaches yield, it pauses the function and hands a value back to the caller; the next time the generator is resumed, execution continues right after the yield. Let's learn it through a simple example.
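Before the bigger example, here is a minimal generator (my own illustration, not from the article) that shows the pause-and-resume behavior:

```python
def count_up(limit):
    n = 0
    while n < limit:
        yield n  # pause here; resume from this point on the next request
        n += 1

print(list(count_up(3)))  # → [0, 1, 2]
```

Each call to next() on the generator runs the body until the next yield, so the values come out one at a time instead of being built all at once.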
Example: this program generates Pascal's triangle
def triangles():  # define a generator that yields every line of Pascal's triangle
    p = [1]
    while True:
        # a generator function differs from an ordinary one: execution pauses
        # at yield and resumes from there on the next iteration
        yield p
        p = [1] + [p[i] + p[i+1] for i in range(len(p)-1)] + [1]
This defines a generator that yields every line of Pascal's triangle. Its principle is as follows:
the first and last numbers of each line are always 1, and every other number is the sum of the two numbers "on its shoulders" in the line above.
Get the first 10 lines of Pascal's triangle:
n = 0
results = []
for t in triangles():
    results.append(t)
    n = n + 1
    if n == 10:  # stop after collecting ten lines
        break
for t in results:
    print(t)
Outcome:
[1]
[1, 1]
[1, 2, 1]
...
2.Linear Regression
Linear regression is one of the basics of machine learning:
it makes predictions from the distribution of the data by fitting a linear function whose weights are adjusted during training.
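To see the idea in isolation, here is a small sketch of my own (not from the article, and using NumPy rather than MXNet) that fits y = w*x + b to noisy points with the closed-form least-squares solution, as a contrast to the gradient-descent approach below:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 * x + 4.2 + rng.normal(scale=0.01, size=100)  # noisy line

# Stack a column of ones so b is learned alongside w: y ≈ X @ [w, b]
X = np.column_stack([x, np.ones_like(x)])
w, b = np.linalg.lstsq(X, y, rcond=None)[0]
print(w, b)  # close to 2.0 and 4.2
```

Gradient descent, used in the rest of this section, reaches the same answer iteratively, which is what makes it applicable to models without a closed-form solution.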
Import some libraries:
from IPython import display
from matplotlib import pyplot as plt
from mxnet import autograd, nd
import random
Build a dataset from the model y = Xw + b, adding a little Gaussian noise to the labels:
num_inputs = 2
num_examples = 1000
true_w = [2, -3.4]
true_b = 4.2
features = nd.random.normal(scale=1, shape=(num_examples, num_inputs))
labels = true_w[0] * features[:, 0] + true_w[1] * features[:, 1] + true_b
labels += nd.random.normal(scale=0.01, shape=labels.shape)
print(features[0], labels[0])
Read the samples in mini-batches:
def data_iter(batch_size, features, labels):
    num_examples = len(features)
    indices = list(range(num_examples))
    random.shuffle(indices)  # read the samples in random order
    for i in range(0, num_examples, batch_size):
        j = nd.array(indices[i: min(i + batch_size, num_examples)])
        yield features.take(j), labels.take(j)  # take returns the elements at the given indices
batch_size = 10
for X, y in data_iter(batch_size, features, labels):
    print(X, y)
    break
This prints one mini-batch of 10 randomly drawn samples.
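The batching idea does not depend on MXNet; the same pattern in plain Python (my own sketch, using lists instead of NDArrays) looks like this:

```python
import random

def data_iter_plain(batch_size, features, labels):
    indices = list(range(len(features)))
    random.shuffle(indices)  # read the samples in random order
    for i in range(0, len(indices), batch_size):
        j = indices[i:i + batch_size]  # the last batch may be smaller
        yield [features[k] for k in j], [labels[k] for k in j]

feats = list(range(25))
labs = [2 * f for f in feats]
batches = list(data_iter_plain(10, feats, labs))
print([len(X) for X, y in batches])  # → [10, 10, 5]
```

Shuffling the indices rather than the data itself keeps features and labels aligned within every batch.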
Train the model to learn the fitted weights:
w = nd.random.normal(scale=0.01, shape=(num_inputs, 1))
b = nd.zeros(shape=(1,))
w.attach_grad()  # allocate gradient buffers so autograd can record
b.attach_grad()

def linreg(X, w, b):
    return nd.dot(X, w) + b

def squared_loss(y_hat, y):
    return (y_hat - y.reshape(y_hat.shape)) ** 2 / 2

def sgd(params, lr, batch_size):
    for param in params:
        param[:] = param - lr * param.grad / batch_size
linreg: the linear model (the function expression)
squared_loss: the squared loss (half the squared error)
sgd: the mini-batch stochastic gradient descent update
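To make the update rule concrete, here is one SGD step worked by hand in plain Python (my own illustration, with a single sample and made-up numbers):

```python
# One sample with x = 1.0, true y = 3.0; start from w = 0, b = 0, lr = 0.1
w, b, lr = 0.0, 0.0, 0.1
x, y = 1.0, 3.0

y_hat = w * x + b             # prediction: 0.0
loss = (y_hat - y) ** 2 / 2   # squared loss: 4.5
grad_w = (y_hat - y) * x      # d(loss)/dw = -3.0
grad_b = (y_hat - y)          # d(loss)/db = -3.0
w -= lr * grad_w              # w moves to 0.3
b -= lr * grad_b              # b moves to 0.3
print(w, b, loss)
```

In the MXNet version these gradients are not written out by hand: autograd computes them in l.backward(), and sgd applies exactly this update averaged over the mini-batch.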
lr = 0.03
num_epochs = 3
net = linreg
loss = squared_loss
for epoch in range(num_epochs):  # training takes num_epochs passes over the data
    # In each epoch, every sample in the training set is used once
    # (assuming the number of samples is divisible by the batch size).
    # X and y are the features and labels of one mini-batch.
    for X, y in data_iter(batch_size, features, labels):
        with autograd.record():
            l = loss(net(X, w, b), y)  # l is the loss on the mini-batch X and y
        l.backward()  # gradient of the mini-batch loss w.r.t. the parameters
        sgd([w, b], lr, batch_size)  # update the parameters with mini-batch SGD
    train_l = loss(net(features, w, b), labels)
    print('epoch %d, loss %f' % (epoch + 1, train_l.mean().asnumpy()))
Comparing the weights learned from training with the ones we defined, we get:
weights: 1.999999 vs. 2 and -3.39999 vs. -3.4
bias: 4.1996655 vs. 4.2
So this method works very well.
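For readers without MXNet installed, the same training loop can be re-implemented with NumPy (my own sketch, mirroring the hyperparameters above, with the gradients written out by hand instead of using autograd):

```python
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -3.4])
true_b = 4.2
features = rng.normal(size=(1000, 2))
labels = features @ true_w + true_b + rng.normal(scale=0.01, size=1000)

w = rng.normal(scale=0.01, size=2)
b = 0.0
lr, num_epochs, batch_size = 0.03, 3, 10

for epoch in range(num_epochs):
    indices = rng.permutation(1000)  # read the samples in random order
    for i in range(0, 1000, batch_size):
        j = indices[i:i + batch_size]
        X, y = features[j], labels[j]
        err = X @ w + b - y               # mini-batch residuals
        w -= lr * X.T @ err / batch_size  # gradient of the averaged squared loss
        b -= lr * err.sum() / batch_size
    train_loss = ((features @ w + b - labels) ** 2 / 2).mean()
    print('epoch %d, loss %f' % (epoch + 1, train_loss))

print(w, b)  # close to [2, -3.4] and 4.2
```

The recovered parameters land very close to the true ones here as well, which confirms that the result does not depend on the particular framework.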
It's a pleasure to share some of my learning experiences with you here. As a beginner I still have many shortcomings, and there are some problems with this article's wording; I will improve as much as possible and share more articles on 优快云.
Written by Neio, a student of SCUT