Backpropagation for Spiking Neural Networks

This article walks through the training process of a spiking neural network (SNN), including concrete implementations of the forward pass and the backpropagation algorithm. Because of the SNN's spike-firing mechanism, the derivative needed by backpropagation cannot be computed directly; the article discusses how a linearity assumption is used to work around this.
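The linearity assumption can be illustrated in a few lines: around the firing time the membrane potential is treated as locally linear, so the derivative of the spike time with respect to the potential is approximated by the negative reciprocal of the potential's slope at the threshold crossing. The helper below is a hypothetical sketch for illustration, not part of the original code:

```python
# Sketch of the SpikeProp-style linearity assumption (hypothetical helper,
# not from the code below): around the firing time t_f the membrane
# potential u(t) is treated as linear, so d(t_f)/du ≈ -1 / u'(t_f).

def spike_time_derivative(du_dt_at_fire, eps=1e-6):
    """Approximate d(t_fire)/d(u) as -1 / (du/dt at the firing time)."""
    # Guard against a near-flat potential, which would blow up the slope.
    slope = max(du_dt_at_fire, eps)
    return -1.0 / slope

# Example: if the potential rises at 2.0 units per time step when the
# neuron crosses threshold, a small increase in potential moves the
# spike earlier by about 0.5 time steps.
print(spike_time_derivative(2.0))  # -0.5
```

The sign is negative because raising the potential makes the neuron reach threshold sooner, i.e. the spike time decreases.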

This block of code is the macro-level view of the whole process.

#method to train the neural network, given the training data (inputS), i.e. the input
#time sequence of spikes for the input neurons, and the expected spike times (outputS)
@classmethod
def train(self, network, inputS, outputS, learningR, epochs):
   global learningRate    # learning rate
   learningRate = learningR
   lenTimeSeq, inNeurons = inputS.shape      # sequence length, number of input neurons


   #this is done for a number of epochs; at the end of each epoch the resetSpikeTimes
   #method should be called (spike times are reset after every training example)


   print('%%%%%%%%%%%%%%%%%%%%Start of simulation%%%%%%%%%%%%%%%%%%%%%%%%')


   for e in range(epochs):
      error = 0
      # Shuffle inputS and outputS together so their indices stay aligned.
      inputS, outputS = DataProc.shuffleInUnison(inputS, outputS)
      # inIndex is the index of the current example in the training time sequence.
      for inIndex in range(lenTimeSeq):
         inLayer = inputS[inIndex, :]
         print('The input layer is ', inLayer)
         expSpikes = outputS[inIndex, :]
         print('The expected spikes are ', expSpikes)
         predSpikes = np.zeros((lenTimeSeq))


         print('The forward propagation phase started *********')
         predSpikes = self.forwardProp(network, inLayer, expSpikes)  # forward pass
         print('The predicted spike times are ++++++++ ', predSpikes)


         scale = 10 if e < 200 else 1   # scale factor for early epochs (not used in this excerpt)


         network = self.backProp(network, expSpikes, inLayer)   # backward pass


         network.resetSpikeTimeNet()    # reset the recorded spike times


         # Forward pass again with the updated weights, to measure the error.
         predSpikes = self.forwardProp(network, inLayer, expSpikes)
         print('The predicted spike times are ++++++++ ', predSpikes)
         error += self.errorFMSE(expSpikes, predSpikes)    # accumulate the error


         # The spike times must be reset after each example.
         network.resetSpikeTimeNet()


      print('The error is ', error)
      if error == 0:
         break
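The body of errorFMSE is not shown in this excerpt. A minimal sketch, assuming it computes the usual SpikeProp-style error, i.e. half the summed squared difference between expected and predicted firing times (the standalone function name and signature below are assumptions):

```python
import numpy as np

# Hypothetical sketch of errorFMSE: the excerpt does not show its body,
# but a common choice is half the summed squared difference between
# the expected and the predicted firing times of the output layer.
def errorFMSE(expSpikes, predSpikes):
    expSpikes = np.asarray(expSpikes, dtype=float)
    predSpikes = np.asarray(predSpikes, dtype=float)
    return 0.5 * np.sum((predSpikes - expSpikes) ** 2)

# One output neuron off by one time step, one exactly right:
print(errorFMSE([10.0, 12.0], [11.0, 12.0]))  # 0.5
```

With this definition the accumulated `error` only reaches 0 when every output neuron fires at exactly the expected time, which is why the epoch loop can break early on `error == 0`.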

Now let's look at the statements inside. forwardProp performs the layer-level forward pass: it first updates one layer, then, based on that updated layer, updates every subsequent layer in turn.

#function to simulate a forward pass through the network
@classmethod
def forwardProp(self, network, inLayer, expSpikes):    # forward pass over the whole network, driven layer by layer
   global codingInterval  # length of the encoding interval
   global timeStep        # simulation time step
   noLayers = len(network.layers)    # number of layers


   time = 0      # simulation time


   while time <= codingInterval:
      #compute the update for the first layer, using the input spikes
      updatedLayer = self.forwardPropL(inLayer, network.layers[0], 0, time)  # returns an updated layer object of asynchronous neurons
      network.layers[0] = updatedLayer   # the updated layer becomes the current layer


      for layer in range(1, noLayers):       # propagate through to the last layer
         #check for updates in the remaining layers of the network
         updatedLayer = self.forwardPropL(network.layers[layer-1], network.layers[layer], layer, time)
         network.layers[layer] = updatedLayer
      time += timeStep


   predSpikes = SNNetwork.getFireTimesLayer(network.layers[-1])
   return predSpikes
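The structure of this loop can be reproduced in a self-contained sketch. The version below replaces the network's neuron class with simple threshold units; the function name, the weight layout, and the constants are all assumptions for illustration, not part of the original code:

```python
import numpy as np

# Minimal standalone sketch of the same time-stepped forward pass, using
# simple threshold units instead of the network's neuron class. The name
# forward_prop_sketch, the (fan_in, fan_out) weight layout, and the
# constants below are assumptions, not taken from the original code.
codingInterval, timeStep, threshold = 25.0, 1.0, 1.0

def forward_prop_sketch(weights, in_spike_times):
    """weights: list of (fan_in, fan_out) arrays; returns output firing times."""
    fire_times = [np.full(w.shape[1], np.inf) for w in weights]
    t = 0.0
    while t <= codingInterval:
        prev_times = np.asarray(in_spike_times, dtype=float)
        for l, w in enumerate(weights):
            # A presynaptic neuron contributes once it has fired (time <= t).
            active = (prev_times <= t).astype(float)
            u = active @ w                      # membrane potential at time t
            # Record the first threshold crossing as the firing time.
            newly = (u >= threshold) & np.isinf(fire_times[l])
            fire_times[l][newly] = t
            prev_times = fire_times[l]
        t += timeStep
    return fire_times[-1]

w = [np.array([[0.6], [0.6]])]              # two inputs -> one output neuron
print(forward_prop_sketch(w, [2.0, 5.0]))   # the output fires at t = 5.0
```

As in the original loop, each time step first updates the first layer from the input spikes, then sweeps the remaining layers in order, so a spike can only influence the next layer from the step at which it was recorded.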