[torch] nn internal functions

This post details how a neural network module is implemented in Lua for Torch, covering the core steps of forward and backward propagation. Concrete examples show the key operations: updating the output, computing gradients, and updating parameters.


functions

https://bigaidream.gitbooks.io/subsets_ml_cookbook/content/dl/lua/lua_module.html

[output] forward(input)

Takes an input object, and computes the corresponding output of the module.

After a forward(), the output state variable should have been updated to the new state.

We do NOT override this function. Instead, we implement the updateOutput(input) function. The forward function in the abstract parent class nn.Module will call updateOutput(input).
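For illustration, here is a minimal sketch of a module with no trainable parameters that only overrides updateOutput (a hypothetical nn.Scale that multiplies its input by a fixed constant; the name and code are illustrative, not from the linked post):

require 'nn'

-- Hypothetical module: multiply the input by a fixed constant.
local Scale, parent = torch.class('nn.Scale', 'nn.Module')

function Scale:__init(factor)
  parent.__init(self)  -- initializes self.output and self.gradInput
  self.factor = factor
end

function Scale:updateOutput(input)
  -- update the output state variable, then return it
  self.output:resizeAs(input):copy(input):mul(self.factor)
  return self.output
end

m = nn.Scale(2)
print(m:forward(torch.Tensor{1, 2, 3}))  -- 2 4 6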

[gradInput] backward(input, gradOutput)
Performs a backpropagation step through the module, w.r.t. the given input.

A backpropagation step consists of computing two kinds of gradients at input, given gradOutput (the gradients w.r.t. the output of the module). This function simply performs this task using two function calls:

a function call to updateGradInput(input, gradOutput)
a function call to accGradParameters(input, gradOutput)

We do NOT override this function either. Instead, we override the updateGradInput(input, gradOutput) and accGradParameters(input, gradOutput) functions.
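For reference, the parent class implements backward roughly as follows (paraphrased from the torch/nn source; scale is an optional multiplier applied to the accumulated parameter gradients):

-- paraphrased from nn.Module in torch/nn
function Module:backward(input, gradOutput, scale)
  scale = scale or 1
  self:updateGradInput(input, gradOutput)
  self:accGradParameters(input, gradOutput, scale)
  return self.gradInput
end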

[output] updateOutput(input)
When defining a new module, this method should be overloaded.

Computes the output using the current parameter set of the class and input. This function returns the result which is stored in the output field.

[gradInput] updateGradInput(input, gradOutput)
When defining a new module, this method should be overloaded.

Computes the gradient of the module w.r.t. its own input. This is returned in gradInput. Also, the gradInput state variable is updated accordingly.
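Continuing the hypothetical nn.Scale sketch from above: output = factor * input, so the chain rule gives gradInput = factor * gradOutput.

function Scale:updateGradInput(input, gradOutput)
  -- gradInput = d(output)/d(input) * gradOutput = factor * gradOutput
  self.gradInput:resizeAs(gradOutput):copy(gradOutput):mul(self.factor)
  return self.gradInput
end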

accGradParameters(input, gradOutput, scale)
When defining a new module, this method should be overloaded, if the module has trainable parameters.

Computes the gradient of the module w.r.t. its own parameters. Many modules do NOT perform this step as they do NOT have any trainable parameters. The module is expected to accumulate the gradients w.r.t. the trainable parameters in some variables.

Zeroing this accumulation is achieved with zeroGradParameters(), and updating the trainable parameters according to this accumulation is done with updateParameters().
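To make the accumulation protocol concrete, here is a minimal sketch of a module with one trainable scalar, followed by a typical SGD step (the nn.Gain name, the learning rate, and the data are all illustrative assumptions, not from the linked post):

require 'nn'

-- Hypothetical module with one trainable scalar: output = weight * input.
local Gain, parent = torch.class('nn.Gain', 'nn.Module')

function Gain:__init()
  parent.__init(self)
  self.weight = torch.Tensor(1):fill(1)     -- trainable parameter
  self.gradWeight = torch.Tensor(1):zero()  -- its gradient accumulator
end

function Gain:updateOutput(input)
  self.output:resizeAs(input):copy(input):mul(self.weight[1])
  return self.output
end

function Gain:updateGradInput(input, gradOutput)
  self.gradInput:resizeAs(gradOutput):copy(gradOutput):mul(self.weight[1])
  return self.gradInput
end

function Gain:accGradParameters(input, gradOutput, scale)
  scale = scale or 1
  -- dL/dweight = sum(input .* gradOutput); note the addition:
  -- gradients ACCUMULATE until zeroGradParameters() is called
  self.gradWeight[1] = self.gradWeight[1]
      + scale * torch.cmul(input, gradOutput):sum()
end

-- Typical SGD step built on the accumulation protocol:
model, criterion = nn.Gain(), nn.MSECriterion()
x, y = torch.Tensor{1, 2}, torch.Tensor{2, 4}
model:zeroGradParameters()                     -- clear gradWeight
out = model:forward(x)
loss = criterion:forward(out, y)
model:backward(x, criterion:backward(out, y))
model:updateParameters(0.1)                    -- weight = weight - 0.1 * gradWeight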

summary

--[[
output = model:forward(input)
gradInput = model:backward(input, gradOutput)
--]]
out1 = model1:forward(input)
out2 = model2:forward(out1)
loss = criterion:forward(out2, label)
grad_out2 = criterion:backward(out2, label)
grad_out1 = model2:backward(out1, grad_out2)
grad_input = model1:backward(input, grad_out1)
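Note the symmetry: the backward calls run through the modules in the reverse order of the forward calls, and each backward(input, gradOutput) must receive the same input that was given to the corresponding forward(input), since a module may rely on state cached during its last forward.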

practice

https://github.com/apsvvfb/VQA_jan

--train.lua
word_feat, img_feat, w_ques, w_img, mask = unpack(protos.word:forward({data.questions, new_data_images}))

dummy = protos.word:backward({data.questions,data.images}, {d_conv_feat, d_w_ques, d_w_img, d_conv_img, d_ques_img})
--misc/word_level.lua
function layer:updateOutput(input)
  local seq = input[1]
  local img = input[2]
  ...
  return {self.embed_output, self.img_feat, w_embed_ques, w_embed_img, self.mask}
end

function layer:updateGradInput(input, gradOutput)
  local seq = input[1]
  local img = input[2]
  ...
  return self.gradInput
end
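One detail worth noting in this example: when a module takes a table of tensors as input, as layer:updateOutput does here, updateGradInput should return self.gradInput as a table matching the structure of input, with one gradient tensor per input element.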