What is back propagation in an artificial neural network?
- Back propagation in an artificial neural network (ANN) is a method of training a network with hidden neurons (i.e. a network with one or more hidden layers). Using training data for which both the inputs and the desired outputs are known, the difference (error) between the desired and actual output is computed and propagated back through the hidden layers of the network, and the connection weights are adjusted so as to reduce that difference. [1]
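The idea can be made concrete with a small worked example. Below is a minimal NumPy sketch of such a training loop for a network with one hidden layer; the layer sizes, learning rate and XOR-style toy data are illustrative assumptions, not anything taken from [1].

```python
# Minimal sketch of error back propagation in a one-hidden-layer network.
# Layer sizes, learning rate and the XOR toy data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: inputs and desired outputs (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer with 4 units, sigmoid activations, plus bias terms.
W1 = rng.normal(scale=1.0, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=1.0, size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: compute the actual output.
    h = sigmoid(X @ W1 + b1)          # hidden activations
    y = sigmoid(h @ W2 + b2)          # network output

    # Error between desired and actual output.
    err = Y - y

    # Backward pass: propagate the error into the hidden layer
    # (gradient of the squared error through the sigmoids).
    delta_out = err * y * (1 - y)                 # output-layer delta
    delta_hid = (delta_out @ W2.T) * h * (1 - h)  # hidden-layer delta

    # Adjust weights to reduce the difference between desired and actual output.
    W2 += lr * h.T @ delta_out;  b2 += lr * delta_out.sum(axis=0)
    W1 += lr * X.T @ delta_hid;  b1 += lr * delta_hid.sum(axis=0)

# Outputs after training should approach the desired [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```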
What is the role of signal back propagation observed within some types of biological neurons?
- Learning in biological neurons essentially boils down to strengthening or weakening the connections between neurons, and in some instances this involves a form of signal back propagation. Unlike in an ANN, where the error is propagated back across all layers of the network, this back propagating signal only influences the connection strength between adjacent neurons. When a biological neuron A is near enough to excite neuron B, the firing of A triggers a back propagating signal within the excited neuron B that serves to reinforce (or weaken) the connection strength between A and B. [2], [11]
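To illustrate how local such a rule is, here is a schematic Python sketch of a pair-wise, timing-based weight update: only the single connection between A and B is touched, in contrast to ANN back propagation, which adjusts weights in every layer. The timing window and learning rate are illustrative assumptions (loosely spike-timing-like), not parameters taken from [2] or [11].

```python
# Schematic local plasticity rule: the A->B weight changes only when A's firing
# coincides (within a window) with a back-propagating signal in B.
# The window and learning rate are made-up illustrative values.

def local_weight_update(w_ab, t_pre_spike, t_post_bap, lr=0.01, window=20.0):
    """Adjust the single synapse A->B; no other weights in the circuit are touched."""
    dt = t_post_bap - t_pre_spike      # back-propagating signal time minus presynaptic spike time (ms)
    if 0 < dt <= window:
        return w_ab + lr               # pre before post within the window: strengthen
    elif -window <= dt < 0:
        return w_ab - lr               # post before pre within the window: weaken
    return w_ab                        # no coincidence: connection unchanged

w = 0.5
w = local_weight_update(w, t_pre_spike=10.0, t_post_bap=15.0)  # strengthens A->B
print(w)  # 0.51
```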
Is there a biological equivalent to ANN back propagation?
- Many papers have proposed biologically plausible mechanisms for ANN back propagation, while others argue that it is biologically implausible. [3], [4], [5], [9], [10]
- One line of argument proposes that ANN back propagation and a form of learning presumed to occur in parts of the brain, Contrastive Hebbian Learning, may be equivalent in principle. However, this form of learning does not require back propagation of errors: the output is clamped at the desired levels, and the effect of this clamping is allowed to spread through feedback connections across the entire network. [6], [7], [8]
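As a rough illustration of the clamped/free procedure, here is a minimal Python sketch of a Contrastive Hebbian style update on toy data. The network size, settling scheme, learning rate and XOR data are illustrative assumptions; this is only a sketch of the general idea, not the exact algorithms of [6], [7], [8].

```python
# Sketch of Contrastive Hebbian Learning: the output is clamped at the desired
# values, the clamping spreads back through symmetric feedback connections, and
# each weight update uses only the activities of the two units it connects.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 2, 4, 1

W_ih = rng.normal(scale=0.5, size=(n_in, n_hid))   # input <-> hidden weights (used symmetrically)
W_ho = rng.normal(scale=0.5, size=(n_hid, n_out))  # hidden <-> output weights (used symmetrically)
b_h, b_o = np.zeros(n_hid), np.zeros(n_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def settle(x, y_clamp=None, n_iter=30):
    """Iterate until activities settle; feedback flows from output back to hidden."""
    y = np.zeros(n_out) if y_clamp is None else y_clamp
    h = np.zeros(n_hid)
    for _ in range(n_iter):
        h = sigmoid(x @ W_ih + y @ W_ho.T + b_h)   # hidden sees input plus output feedback
        if y_clamp is None:                        # free ("minus") phase only
            y = sigmoid(h @ W_ho + b_o)
    return h, y

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

lr = 0.2
for epoch in range(3000):
    for x, t in zip(X, T):
        h_m, y_m = settle(x)              # minus phase: network runs freely
        h_p, y_p = settle(x, y_clamp=t)   # plus phase: output clamped to the target
        # Local, Hebbian-style updates: difference of co-activations between phases.
        W_ih += lr * (np.outer(x, h_p) - np.outer(x, h_m))
        W_ho += lr * (np.outer(h_p, y_p) - np.outer(h_m, y_m))
        b_h  += lr * (h_p - h_m)
        b_o  += lr * (y_p - y_m)

# Free-phase outputs after training; the updates push them toward the clamped targets,
# though convergence is not guaranteed for this toy setup.
print(np.round(np.array([settle(x)[1] for x in X]).ravel(), 2))
```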
References
- [1] R. Rojas, Neural Networks, Chapter 7, 1996 [Open Access]. A detailed introduction to back propagation.
- [2] A synaptically controlled, associative signal for Hebbian plasticity in hippocampal neurons, Science, 1997.
- [3] Frequency-based error back-propagation in a cortical network [Open Access].
- [4] A more biologically plausible learning rule than back propagation applied to a network model of cortical area 7A, PNAS, 1991 [Open Access].
- [5] Biologically Plausible Error-driven Learning using Local Activation Differences: The Generalized Recirculation Algorithm, CMU, 1996 [Open Access].
- [6] Equivalence of Backpropagation and Contrastive Hebbian Learning in a Layered Network, Neural Computation, 2003 [Open Access].
- [7] Contrastive Hebbian learning [Open Access].
- [8] Conditional Routing of Information to the Cortex: A Model of the Basal Ganglia's Role in Cognitive Coordination [Open Access].
- [9] Backpropagating action potentials in neurones: measurement, mechanisms and potential function, Progress in Biophysics and Molecular Biology, 2005 [Open Access].
- [10] Non-Hebbian spike-timing-dependent plasticity in cerebellar circuits, Neural Circuits, 2012 [Open Access].
- [11] Regulation of Synaptic Efficacy by Coincidence of Postsynaptic APs and EPSPs, Science, 1997 [Open Access].