Paddle basic function: batch_norm

This post shows how to use PaddlePaddle's Batch Normalization layer in a neural network. The example code walks through creating the data input, a fully connected layer, and a batch normalization layer, and shows how to run the program and fetch the outputs.
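
Before the Paddle example, a minimal NumPy sketch of what the layer computes in training mode may help: for a 2-D input of shape (N, C), every feature column is normalized with its per-batch mean and variance, then scaled by a learnable gamma and shifted by a learnable beta (initialized to 1.0 and 0.0 in the example below). The helper name batch_norm_2d and the epsilon value here are illustrative assumptions, not Paddle APIs.

import numpy as np

def batch_norm_2d(x, gamma, beta, eps=1e-5):
    # hypothetical helper, not a Paddle API: normalize each feature column over the batch
    mean = x.mean(axis=0)                    # per-feature mean, shape (C,)
    var = x.var(axis=0)                      # per-feature (biased) variance, shape (C,)
    x_hat = (x - mean) / np.sqrt(var + eps)  # zero mean, (near) unit variance per column
    return gamma * x_hat + beta              # learnable scale and shift

x = np.random.random(size=(3, 13)).astype('float32')
y = batch_norm_2d(x, gamma=np.ones(13, dtype='float32'), beta=np.zeros(13, dtype='float32'))
print(y.mean(axis=0))   # ~0 in every column
print(y.var(axis=0))    # ~1 in every column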

Official documentation: https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/layers_cn/batch_norm_cn.html


Example:

import paddle.fluid as fluid
import numpy as np

# fluid.layers.data is deprecated (see the note below); fluid.data is used instead
# x = fluid.layers.data(name='x', shape=[3, 7, 3, 7], dtype='float32', append_batch_size=False)
x = fluid.data(name='x', shape=[3, 7, 3, 7], dtype='float32')

# fully connected layer: flattens all dimensions after the batch dimension, output shape (3, 13)
hidden1 = fluid.layers.fc(input=x, size=13)
print(hidden1.shape)

# batch normalization: scale (gamma) initialized to 1.0, shift (beta) initialized to 0.0
param_attr = fluid.ParamAttr(name='batch_norm_w', initializer=fluid.initializer.Constant(value=1.0))
bias_attr = fluid.ParamAttr(name='batch_norm_b', initializer=fluid.initializer.Constant(value=0.0))
hidden2 = fluid.layers.batch_norm(input=hidden1, param_attr=param_attr, bias_attr=bias_attr)
print(hidden2.shape)

# run the startup program to initialize the parameters, then execute the network on CPU
place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())

# feed a random batch and fetch both the fc output and the batch_norm output
np_x = np.random.random(size=(3, 7, 3, 7)).astype('float32')
output = exe.run(feed={"x": np_x}, fetch_list=[hidden1, hidden2])
print()
print("hidden1: \n", output[0])
print()
print("hidden2: \n", output[1])

The official documentation (https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/layers_cn/data_cn.html#data) advises against using paddle.fluid.layers.data, because it will be removed in a later release. Use paddle.fluid.data instead.
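
In Paddle 2.x the fluid namespace itself is being retired in favor of the paddle.static API. As a rough, hedged migration sketch (assuming paddle.static.data, paddle.static.nn.fc and paddle.static.nn.batch_norm accept the arguments shown; check the current docs before relying on it), the graph definition above would look roughly like this:

import paddle

paddle.enable_static()  # the example builds a static-graph program

x = paddle.static.data(name='x', shape=[3, 7, 3, 7], dtype='float32')
hidden1 = paddle.static.nn.fc(x=x, size=13)
hidden2 = paddle.static.nn.batch_norm(
    input=hidden1,
    param_attr=paddle.ParamAttr(name='batch_norm_w',
                                initializer=paddle.nn.initializer.Constant(value=1.0)),
    bias_attr=paddle.ParamAttr(name='batch_norm_b',
                               initializer=paddle.nn.initializer.Constant(value=0.0)))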


Output:

(3, 13)
(3, 13)

hidden1: 
 [[-0.8041223  -1.2716777   1.1149782   1.0641187  -2.2813084   0.47619045
  -0.08511591 -1.7405936   1.6110164  -0.23074637 -0.8924869  -1.9198824
  -0.07357906]
 [-0.6901268  -1.1901046   0.45025584  0.84378    -2.0559185   1.2483323
   0.056899   -1.8554276   0.46756285 -0.46362847 -0.4724779  -2.5575163
  -0.31322247]
 [-0.19852373 -0.6608075   1.0717577   1.2933227  -0.87994194  1.0412531
  -0.08091754 -0.39610723  0.1924642  -1.4235746  -0.98897517 -1.8628683
   0.17619817]]

hidden2: 
 [[-0.912776   -0.85229826  0.7770314  -0.01609802 -0.88253903 -1.3658669
  -0.7378371  -0.6186631   1.3903538   0.92054915 -0.480901    0.6146097
  -0.01690283]
 [-0.47897983 -0.55108404 -1.411748   -1.2164345  -0.515707    1.0002103
   1.4121153  -0.7919893  -0.47124022  0.46944898  1.392081   -1.4102736
  -1.2160527 ]
 [ 1.3917558   1.4033837   0.63471603  1.232533    1.3982459   0.3656566
  -0.67427826  1.4106526  -0.9191133  -1.389998   -0.9111798   0.7956648
   1.2329556 ]]
