Building a Neural Network with PyTorch

This post walks through building a neural network in PyTorch. It first covers the data preprocessing steps, including one-hot encoding of categorical variables and standardization of numeric features. It then shows how to write a simple neural network by hand, and finally how to build and serialize the same model conveniently with PyTorch's built-in modules.

Data preprocessing

Before data goes into a neural network, it needs a series of preprocessing steps: categorical variables can be one-hot encoded (see the sketch after the standardization code below), while numeric features can be standardized so that their values fluctuate around 0.

import torch
import numpy as np
import pandas as pd

# input features; the last column is the class label
feature = [[200,6975,1],
           [800,56797,0],
           [400,45875,1],
           [200,59245,0],
           [300,469372,1],
           [500,32467,1],
           [700,183481,1]
          ]
# cast to float so the standardized values can be written back in place
feature = pd.DataFrame(feature, dtype=float)

# standardize each numeric column to zero mean and unit variance
for each in feature.columns[:-1]:
    mean, std = feature[each].mean(), feature[each].std()
    feature.loc[:, each] = (feature[each] - mean)/std
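
The loop above covers only standardization. For a genuinely categorical column, a minimal one-hot sketch with pandas could look like the following; the column name color and its values are hypothetical and not part of the dataset above:

# hypothetical categorical column, for illustration only
df = pd.DataFrame({'color': ['red', 'green', 'red', 'blue']})

# get_dummies expands the column into one 0/1 indicator column per category
one_hot = pd.get_dummies(df['color'], prefix='color')
df = pd.concat([df.drop('color', axis=1), one_hot], axis=1)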

Writing a neural network by hand

# hand-written implementation
input_size = 2
hidden_size = 28
output_size = 1
batch_size = 4

# weights and bias as leaf tensors that track gradients
w1 = torch.randn(input_size, hidden_size, requires_grad=True)
b1 = torch.randn(hidden_size, requires_grad=True)
w2 = torch.randn(hidden_size, output_size, requires_grad=True)

def neu(x):
    # linear layer + sigmoid; b1 is broadcast across the batch dimension
    hidden = x.mm(w1) + b1.expand(x.shape[0], hidden_size)
    hidden = torch.sigmoid(hidden)
    output = hidden.mm(w2)
    return output

def cost(x, y):
    # mean squared error; x and y must have the same shape
    error = torch.mean((x - y) ** 2)
    return error

def zero_grad():
    # clear accumulated gradients before the next backward pass
    for param in (w1, b1, w2):
        if param.grad is not None:
            param.grad.zero_()

def optimizer(lr):
    # plain SGD update, performed outside the autograd graph
    with torch.no_grad():
        w1 -= lr * w1.grad
        w2 -= lr * w2.grad
        b1 -= lr * b1.grad

# a single pass over the data in mini-batches; in practice this loop would
# be wrapped in an epoch loop, as in the Sequential version below
batch_loss = []
for start in range(0, len(feature.index), batch_size):
    end = min(start + batch_size, len(feature.index))
    x = torch.FloatTensor(feature.iloc[start:end, :-1].values)
    # reshape y to (batch, 1) so it matches the network output shape;
    # otherwise (batch, 1) - (batch,) broadcasts to (batch, batch)
    y = torch.FloatTensor(feature.iloc[start:end, -1].values).view(-1, 1)
    pre = neu(x)
    loss = cost(pre, y)
    zero_grad()
    loss.backward()
    optimizer(0.0001)
    batch_loss.append(loss.item())
    print(batch_loss)
    
new_x = torch.FloatTensor([[0.324,-0.34431]])
new_y = neu(new_x)
print(new_y)
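
The printed value is a raw, unbounded score, since the output layer has no activation. One plausible way to turn it into a 0/1 class prediction, assuming a simple 0.5 cutoff that is not part of the original code, is:

# hypothetical decision rule: scores above 0.5 count as class 1
pred_label = (new_y > 0.5).int()
print(pred_label)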

Building a Sequential neural network with PyTorch

input_size = 2
hidden_size = 28
output_size = 1
batch_size = 4
# same architecture as the hand-written version, declared with torch.nn
neu = torch.nn.Sequential(
    torch.nn.Linear(input_size, hidden_size),
    torch.nn.Sigmoid(),
    torch.nn.Linear(hidden_size, output_size),
)
cost = torch.nn.MSELoss()                               # mean squared error loss
optimizer = torch.optim.SGD(neu.parameters(), lr=0.01)  # built-in SGD optimizer

losses = []
for i in range(1000):
    batch_loss = []
    for start in range(0, len(feature.index), batch_size):
        end = min(start + batch_size, len(feature.index))
        x = torch.FloatTensor(feature.iloc[start:end, :-1].values)
        # reshape y to (batch, 1) to match the model output and avoid broadcasting
        y = torch.FloatTensor(feature.iloc[start:end, -1].values).view(-1, 1)
        pre = neu(x)
        loss = cost(pre, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        batch_loss.append(loss.item())

    # print the average loss every 100 epochs
    if i % 100 == 0:
        losses.append(np.mean(batch_loss))
        print(i, np.mean(batch_loss))
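
Once training finishes, the learned parameters can be persisted with torch.save and restored with load_state_dict. A minimal sketch, assuming an arbitrary file name model.pt:

# save only the learned parameters (the recommended PyTorch pattern)
torch.save(neu.state_dict(), 'model.pt')

# to restore, rebuild the same architecture and load the weights back
restored = torch.nn.Sequential(
    torch.nn.Linear(input_size, hidden_size),
    torch.nn.Sigmoid(),
    torch.nn.Linear(hidden_size, output_size),
)
restored.load_state_dict(torch.load('model.pt'))
restored.eval()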