MATLAB Neural Networks: The Perceptron (4)

This post walks through training a perceptron with the train function, demonstrating the training process and results with several examples. It focuses on what the adapt and train parameters do, and on verifying the training result through simulation. The experiments implement an AND gate and an OR gate, showing the perceptron applied to different logic operations.

To train a perceptron, use train: it repeatedly applies a set of input vectors to the network, updating the network after each presentation, until a stopping criterion is reached.

>> P

P =

     0     1     0     1     1
     1     1     1     0     0

>> T

T =

     0     1     0     0     0

 

Notes on newp

 

Define a sequence of targets T1 (together, P1 and T1 define the operation of an AND gate), and then let the network adapt for 10 passes through the sequence. Then simulate the updated network.
T1 = {0 0 0 1};
net.adaptParam.passes = 10;
net = adapt(net,P1,T1);
Y = sim(net,P1)
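
For reference, here is a self-contained version of this adapt example. The excerpt does not show P1, so it is assumed here to be the four two-bit AND-gate inputs, given as a cell-array sequence so that updates occur after each time step:

net = newp([0 1; 0 1],1);        % assumed: perceptron with two binary inputs
P1 = {[0;0] [0;1] [1;0] [1;1]};  % assumed input sequence (not in the excerpt)
T1 = {0 0 0 1};                  % AND-gate targets
net.adaptParam.passes = 10;
net = adapt(net,P1,T1);
Y = sim(net,P1)                  % should match T1 after enough passes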

 

Now define a new problem, an OR gate, with batch inputs P2 and targets T2.
P2 = [0 0 1 1; 0 1 0 1];
T2 = [0 1 1 1];

 

Here you initialize the perceptron (resulting in new random weight and bias values), simulate its output, train for a maximum of 20 epochs, and then simulate it again.
net = init(net);
Y = sim(net,P2)
net.trainParam.epochs = 20;
net = train(net,P2,T2);
Y = sim(net,P2)
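
Put together as a minimal self-contained sketch (the newp call is an assumption here; the excerpt reuses a network created earlier):

net = newp([0 1; 0 1],1);    % assumed: fresh two-input perceptron
P2 = [0 0 1 1; 0 1 0 1];
T2 = [0 1 1 1];
net = init(net);             % reset the weight and bias values
Y = sim(net,P2)              % output before training
net.trainParam.epochs = 20;
net = train(net,P2,T2);
Y = sim(net,P2)              % should now equal T2 = [0 1 1 1]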


Notes
 

Perceptrons can classify linearly separable classes in a finite amount of time. If input vectors have large variances in their lengths, learnpn can be faster than learnp.
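
As a sketch of that learnpn suggestion, newp accepts the learning function as its fourth argument (the input ranges below are illustrative, not from this post):

net = newp([-10 10; -10 10],1,'hardlim','learnpn');  % normalized perceptron rule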

 

>> net = newp(P,T)

net =

    Neural Network object:

    architecture:

         numInputs: 1
         numLayers: 1
       biasConnect: [1]
      inputConnect: [1]
      layerConnect: [0]
     outputConnect: [1]

        numOutputs: 1  (read-only)
    numInputDelays: 0  (read-only)
    numLayerDelays: 0  (read-only)

    subobject structures:

            inputs: {1x1 cell} of inputs
            layers: {1x1 cell} of layers
           outputs: {1x1 cell} containing 1 output
            biases: {1x1 cell} containing 1 bias
      inputWeights: {1x1 cell} containing 1 input weight
      layerWeights: {1x1 cell} containing no layer weights

    functions:

          adaptFcn: 'trains'
         divideFcn: (none)
       gradientFcn: 'calcgrad'
           initFcn: 'initlay'
        performFcn: 'mae'
          plotFcns: {'plotperform','plottrainstate'}
          trainFcn: 'trainc'

    parameters:

        adaptParam: .passes
       divideParam: (none)
     gradientParam: (none)
         initParam: (none)
      performParam: (none)
        trainParam: .show, .showWindow, .showCommandLine, .epochs,
                    .goal, .time

    weight and bias values:

                IW: {1x1 cell} containing 1 input weight matrix
                LW: {1x1 cell} containing no layer weight matrices
                 b: {1x1 cell} containing 1 bias vector

    other:

              name: ''
          userdata: (user information)

>> y=sim(net,P)

y =

     1     1     1     1     1

>>

The simulation result is far from ideal: the weight and bias values default to 0, so the net input is 0 for every sample and hardlim(0) = 1, which makes every output 1.
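
You can confirm this by inspecting the weight and bias values directly (outputs shown for the two-input perceptron created above):

>> net.IW{1,1}   % input weight matrix, defaults to zeros

ans =

     0     0

>> net.b{1}      % bias, defaults to zero

ans =

     0

With that in mind, let's look at the relevant parameters of the adapt and train functions.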

>> help(net.adaptFcn)
 TRAINS Sequential order incremental training w/learning functions.
 
   Syntax
 
     [net,TR,Ac,El] = trains(net,Pd,Tl,Ai,Q,TS,VV,TV)
     info = trains(code)
 
   Description
 
     TRAINS is not called directly.  Instead it is called by TRAIN for
     networks whose NET.trainFcn property is set to 'trains'.
 
     TRAINS trains a network with weight and bias learning rules with
     sequential updates. The sequence of inputs is presented to the network
     with updates occurring after each time step.
 
     This incremental training algorithm is commonly used for adaptive
     applications.
 
     TRAINS takes these inputs:
       NET - Neural network.
       Pd  - Delayed inputs.
       Tl  - Layer targets.
       Ai  - Initial input conditions.
       Q   - Batch size.
       TS  - Time steps.
       VV  - Ignored.
       TV  - Ignored.
     and after training the network with its weight and bias
     learning functions returns:
       NET - Updated network.
       TR  - Training record.
             TR.timesteps - Number of time steps.
             TR.perf - performance for each time step.
       Ac  - Collective layer outputs.
       El  - Layer errors.
 
     Training occurs according to the TRAINS' training parameter
     shown here with its default value:
       net.trainParam.passes    1  Number of times to present sequence
 
     Dimensions for these variables are:
       Pd - NoxNixTS cell array, each element P{i,j,ts} is a ZijxQ matrix.
       Tl - NlxTS cell array, each element Tl{i,ts} is a VixQ matrix or [].
       Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
       Ac - Nlx(LD+TS) cell array, each element Ac{i,k} is an SixQ matrix.
       El - NlxTS cell array, each element El{i,k} is an SixQ matrix or [].
     Where
       Ni = net.numInputs
       Nl = net.numLayers
       LD = net.numLayerDelays
       Ri = net.inputs{i}.size
       Si = net.layers{i}.size
       Vi = net.targets{i}.size
       Zij = Ri * length(net.inputWeights{i,j}.delays)
 
     TRAINS(CODE) returns useful information for each CODE string:
       'pnames'    - Names of training parameters.
       'pdefaults' - Default training parameters.
 
   Network Use
 
     You can create a standard network that uses TRAINS for adapting
     by calling NEWP or NEWLIN.
 
     To prepare a custom network to adapt with TRAINS:
     1) Set NET.adaptFcn to 'trains'.
        (This will set NET.adaptParam to TRAINS' default parameters.)
     2) Set each NET.inputWeights{i,j}.learnFcn to a learning function.
        Set each NET.layerWeights{i,j}.learnFcn to a learning function.
        Set each NET.biases{i}.learnFcn to a learning function.
        (Weight and bias learning parameters will automatically be
        set to default values for the given learning function.)
 
     To allow the network to adapt:
     1) Set weight and bias learning parameters to desired values.
     2) Call ADAPT.
 
     See NEWP and NEWLIN for adaption examples.
 
   Algorithm
 
     Each weight and bias is updated according to its learning function
     after each time step in the input sequence.

 

net.adaptParam.passes is the number of times the existing sequence is presented to the network during adapt, i.e., the number of adaptive passes.
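
As a sketch, to adapt the network on the P/T data from this post with truly sequential updates, the concurrent matrices can first be converted to sequences with con2seq:

Pseq = con2seq(P);           % convert concurrent columns to a time sequence
Tseq = con2seq(T);
net.adaptParam.passes = 10;  % present the whole sequence 10 times
net = adapt(net,Pseq,Tseq);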

 

 

net.trainParam.epochs is the maximum number of training epochs.
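
Besides .epochs, the network dump above lists a few other train parameters that can be set the same way (the values here are illustrative):

net.trainParam.goal = 0;   % stop early once the mae performance reaches the goal
net.trainParam.show = 1;   % number of epochs between progress displays
net.trainParam.time = 60;  % maximum training time in seconds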

>> net.trainParam.epochs = 20

>> net=train(net,P,T)

net =

    Neural Network object:

    architecture:

         numInputs: 1
         numLayers: 1
       biasConnect: [1]
      inputConnect: [1]
      layerConnect: [0]
     outputConnect: [1]

        numOutputs: 1  (read-only)
    numInputDelays: 0  (read-only)
    numLayerDelays: 0  (read-only)

    subobject structures:

            inputs: {1x1 cell} of inputs
            layers: {1x1 cell} of layers
           outputs: {1x1 cell} containing 1 output
            biases: {1x1 cell} containing 1 bias
      inputWeights: {1x1 cell} containing 1 input weight
      layerWeights: {1x1 cell} containing no layer weights

    functions:

          adaptFcn: 'trains'
         divideFcn: (none)
       gradientFcn: 'calcgrad'
           initFcn: 'initlay'
        performFcn: 'mae'
          plotFcns: {'plotperform','plottrainstate'}
          trainFcn: 'trainc'

    parameters:

        adaptParam: .passes
       divideParam: (none)
     gradientParam: (none)
         initParam: (none)
      performParam: (none)
        trainParam: .show, .showWindow, .showCommandLine, .epochs,
                    .goal, .time, .passes

    weight and bias values:

                IW: {1x1 cell} containing 1 input weight matrix
                LW: {1x1 cell} containing no layer weight matrices
                 b: {1x1 cell} containing 1 bias vector

    other:

              name: ''
          userdata: (user information)

>>

Training is complete; let's simulate the network to check the result.


>> y=sim(net,P)

y =

     0     1     0     0     0

>> T

T =

     0     1     0     0     0

>>

The result is excellent: the output matches T exactly, with no error.


>> y=sim(net,[1;1])

y =

     1

>> y=sim(net,[1;0])

y =

     0

>> plotpv(P,T)
>>
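
plotpv plots the input vectors in P, marking each point according to its target in T. To also see the decision boundary the trained perceptron has learned, plotpc can be overlaid (a quick sketch using the trained weights and bias):

plotpv(P,T)                    % input vectors, marked by class
plotpc(net.IW{1,1},net.b{1})   % overlay the learned decision line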

 
