This block of code is the macro-level view of the whole training process.

#method to train the neural network, given the training data (inputS), i.e. the input time
#sequence of spikes for the input neurons, and the expected spiking times (outputS)
@classmethod
def train(cls, network, inputS, outputS, learningR, epochs):
    global learningRate  #learning rate
    learningRate = learningR
    lenTimeSeq, inNeurons = inputS.shape  #length of the time sequence, number of input neurons
    #this runs for a number of epochs; after each training example the
    #resetSpikeTimeNet method is called to reset the spike times
    print('%%%%%%%%%%%%%%%%%%%%Start of simulation%%%%%%%%%%%%%%%%%%%%%%%%')
    for e in range(epochs):
        error = 0
        #shuffle inputS and outputS with the same random permutation
        inputS, outputS = DataProc.shuffleInUnison(inputS, outputS)
        # network.displaySNN()
        #inIndex is the index of the spikes in the training data time sequence
        for inIndex in range(lenTimeSeq):
            inLayer = inputS[inIndex, :]
            print('The input layer is ', inLayer)
            expSpikes = outputS[inIndex, :]
            print('The expected spikes are ', expSpikes)
            print('The forward propagation phase started *********')
            predSpikes = cls.forwardProp(network, inLayer, expSpikes)  #forward pass
            print('The predicted spike times are ++++++++ ', predSpikes)
            scale = 10 if e < 200 else 1  #early-epoch scale factor (not used below)
            network = cls.backProp(network, expSpikes, inLayer)  #backward pass
            network.resetSpikeTimeNet()  #reset spike times
            #forward pass again after the weight update, to measure the new error
            predSpikes = cls.forwardProp(network, inLayer, expSpikes)
            print('The predicted spike times are ++++++++ ', predSpikes)
            # network.displaySNN()
            # network.resetSpikeTimeLayer(network.layers[-1])
            error += cls.errorFMSE(expSpikes, predSpikes)  #accumulate the error
            #the spikes should be reset after each example
            network.resetSpikeTimeNet()
        print('The error is ', error)
        if error == 0:
            break
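The loop above leans on two helpers that are not shown: `DataProc.shuffleInUnison`, which must permute the inputs and targets with the same random order, and `errorFMSE`, the mean-squared error over firing times. Below is a minimal sketch of what such helpers could look like; the names, signatures, and the 0.5 factor in the MSE are assumptions, not the article's actual implementation.

```python
import numpy as np

def shuffle_in_unison(a, b, seed=None):
    """Hypothetical sketch: shuffle two arrays with the same row permutation."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(a))
    return a[perm], b[perm]

def error_fmse(expected, predicted):
    """Hypothetical sketch: mean squared error between firing-time vectors."""
    expected = np.asarray(expected, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 0.5 * np.mean((expected - predicted) ** 2)
```

The key property to preserve is the pairing: after shuffling, row i of the inputs must still correspond to row i of the expected spike times.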
Next we unpack the statements inside. The `forwardProp` call performs the layer-wise forward pass: it first updates one layer, then, based on that updated layer, updates every subsequent layer in turn.
#function to simulate a forward pass through the network
@classmethod
def forwardProp(cls, network, inLayer, expSpikes):
    #forward pass over the whole network, driven layer by layer
    global codingInterval  #coding (encoding) interval
    global timeStep  #simulation time step
    noLayers = len(network.layers)  #number of layers
    time = 0  #simulation time
    while time <= codingInterval:
        #compute the update for the first layer, using the input spikes;
        #forwardPropL returns an updated layer object (asynchronous neuron type)
        updatedLayer = cls.forwardPropL(inLayer, network.layers[0], 0, time)
        network.layers[0] = updatedLayer  #the updated layer becomes the current layer
        for layer in range(1, noLayers):  #propagate up to the last layer
            #check for updates for the layers of the network
            updatedLayer = cls.forwardPropL(network.layers[layer - 1], network.layers[layer], layer, time)
            network.layers[layer] = updatedLayer
        time += timeStep
    predSpikes = SNNetwork.getFireTimesLayer(network.layers[-1])
    return predSpikes  #train() consumes this, so the value must be returned
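The structure of `forwardProp` is a time-stepped simulation: the outer `while` advances simulated time from 0 to `codingInterval` in `timeStep` increments, and at every tick each layer is re-evaluated. A minimal self-contained sketch of the same idea, using a single integrate-and-fire output layer and time-to-first-spike readout (the thresholding rule here is an illustrative assumption, not the article's `forwardPropL`):

```python
import numpy as np

def forward_pass(weights, input_spike_times, coding_interval=10.0,
                 time_step=0.1, threshold=1.0):
    """Return each output neuron's first firing time (inf if it never fires).

    weights: (n_in, n_out) array; input_spike_times: one spike time per input.
    """
    n_out = weights.shape[1]
    fire_time = np.full(n_out, np.inf)
    t = 0.0
    while t <= coding_interval:
        # an input neuron contributes once its spike time has passed
        active = (np.asarray(input_spike_times) <= t).astype(float)
        potential = active @ weights
        # record the first time each neuron crosses threshold
        newly = (potential >= threshold) & np.isinf(fire_time)
        fire_time[newly] = t
        t += time_step
    return fire_time
```

As in the article's version, the readout happens only after the full coding interval has been simulated, because a neuron deeper in the network may fire at any tick.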

This article describes in detail the training process of a spiking neural network (SNN), including concrete implementations of the forward and backward passes. To handle the SNN-specific spike emission mechanism, it discusses how a linearity assumption works around the fact that the derivative of the firing time cannot be computed directly during backpropagation.
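The linearity assumption mentioned above treats the membrane potential u(t) as locally linear around the firing moment: a small perturbation du then shifts the firing time by dt ≈ -du / u'(t_fire). A one-line sketch of that surrogate derivative (the function name and signature are illustrative, not from the article's code):

```python
def firing_time_derivative(du_dw, du_dt):
    """Surrogate d(t_fire)/dw under the linear threshold-crossing assumption.

    du_dw: sensitivity of the membrane potential to a weight;
    du_dt: slope of the membrane potential at the firing time (must be > 0).
    """
    return -du_dw / du_dt
```

The negative sign captures the intuition: raising the potential (du > 0) makes the neuron cross threshold earlier, so the firing time decreases.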