MATLAB Neural Networks: Perceptron (6)

This article shows how to train a perceptron network to learn the AND operation. By iteratively adjusting the weights and bias, the network learns to classify the input vectors, stopping once the mean absolute error falls below a small threshold.

% Manually control the perceptron's learning loop to learn the AND operation
P = [0 1 0 1 1; 1 1 1 0 0];   % input vectors, one column per sample
T = [0 1 0 0 0];              % targets: AND of each input column
net = newp([0 1; 0 1], 1);    % perceptron: two inputs in [0,1], one neuron
net = init(net);
w = net.iw{1,1};              % read out initial weights before the loop
b = net.b{1};                 % read out initial bias before the loop

y = sim(net, P);
e = T - y;
while (mae(e) > 0.0015)
   dw = learnp(w, P, [],[],[],[], e, [],[],[],[],[]);
   db = learnp(b, ones(1,5), [],[],[],[], e, [],[],[],[],[]);
   % learnp returns the weight and bias adjustments needed for this pass
   w = w + dw;
   b = b + db;
   net.iw{1,1} = w;
   net.b{1} = b;
   y = sim(net, P);
   e = T - y;
end
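For readers without the Neural Network Toolbox, the same manual training loop can be sketched in plain NumPy. This is a translation of the MATLAB code above, with the hardlim transfer function and the learnp update dW = e*p' reimplemented by hand (starting from zero weights rather than toolbox initialization):

```python
import numpy as np

P = np.array([[0, 1, 0, 1, 1],
              [1, 1, 1, 0, 0]], dtype=float)  # inputs, one column per sample
T = np.array([[0, 1, 0, 0, 0]], dtype=float)  # targets: AND of each column

w = np.zeros((1, 2))   # 1 neuron, 2 inputs (stands in for net.iw{1,1})
b = np.zeros((1, 1))   # bias (stands in for net.b{1})

def hardlim(n):
    # perceptron transfer function: 1 if n >= 0, else 0
    return (n >= 0).astype(float)

y = hardlim(w @ P + b)
e = T - y
while np.mean(np.abs(e)) > 0.0015:
    w = w + e @ P.T               # learnp rule: dW = e * p'
    b = b + e @ np.ones((5, 1))   # bias update: db = sum of errors
    y = hardlim(w @ P + b)
    e = T - y

print(hardlim(w @ P + b))  # -> [[0. 1. 0. 0. 0.]]
```

Since AND is linearly separable, the perceptron convergence theorem guarantees this loop terminates; the exit condition mae(e) <= 0.0015 here is equivalent to e being exactly zero, because with 5 samples the mean absolute error can only take multiples of 0.2.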

learnp is the learning function for perceptron weights and biases. Its rule adjusts the network's weights and bias so as to minimize the mean absolute error, thereby classifying the input vectors correctly.

help learnp
 LEARNP Perceptron weight/bias learning function.
 
   Syntax
  
     [dW,LS] = learnp(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
     [db,LS] = learnp(b,ones(1,Q),Z,N,A,T,E,gW,gA,D,LP,LS)
     info = learnp(code)
 
   Description
 
     LEARNP is the perceptron weight/bias learning function.
 
     LEARNP(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
       W  - SxR weight matrix (or b, an Sx1 bias vector).
       P  - RxQ input vectors (or ones(1,Q)).
       Z  - SxQ weighted input vectors.
       N  - SxQ net input vectors.
       A  - SxQ output vectors.
       T  - SxQ layer target vectors.
       E  - SxQ layer error vectors.
       gW - SxR gradient with respect to performance.
       gA - SxQ output gradient with respect to performance.
       D  - SxS neuron distances.
       LP - Learning parameters, none, LP = [].
       LS - Learning state, initially should be = [].
     and returns,
       dW - SxR weight (or bias) change matrix.
       LS - New learning state.
 
     LEARNP(CODE) returns useful information for each CODE string:
       'pnames'    - Returns names of learning parameters.
       'pdefaults' - Returns default learning parameters.
       'needg'     - Returns 1 if this function uses gW or gA.
 
   Examples
 
     Here we define a random input P and error E to a layer
     with a 2-element input and 3 neurons.
 
       p = rand(2,1);
       e = rand(3,1);
 
     Since LEARNP only needs these values to calculate a weight
     change (see Algorithm below), we will use them to do so.
 
       dW = learnp([],p,[],[],[],[],e,[],[],[],[],[])
 
   Network Use
 
     You can create a standard network that uses LEARNP with NEWP.
 
     To prepare the weights and the bias of layer i of a custom network
     to learn with LEARNP:
     1) Set NET.trainFcn to 'trainb'.
        (NET.trainParam will automatically become TRAINB's default parameters.)
     2) Set NET.adaptFcn to 'trains'.
        (NET.adaptParam will automatically become TRAINS's default parameters.)
     3) Set each NET.inputWeights{i,j}.learnFcn to 'learnp'.
        Set each NET.layerWeights{i,j}.learnFcn to 'learnp'.
        Set NET.biases{i}.learnFcn to 'learnp'.
        (Each weight and bias learning parameter property will automatically
        become the empty matrix since LEARNP has no learning parameters.)
 
     To train the network (or enable it to adapt):
     1) Set NET.trainParam (NET.adaptParam) properties to desired values.
     2) Call TRAIN (ADAPT).
 
     See NEWP for adaption and training examples.
 
   Algorithm
 
     LEARNP calculates the weight change dW for a given neuron from the
     neuron's input P and error E according to the perceptron learning rule:
 
       dw =  0,  if e =  0
          =  p', if e =  1
          = -p', if e = -1
 
     This can be summarized as:
 
       dw = e*p'
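The rule from the Algorithm section can be checked numerically. Below is a small NumPy sketch; p and e are arbitrary illustrative values (not toolbox output), with shapes matching the help text's R=2 inputs and S=3 neurons:

```python
import numpy as np

p = np.array([[0.5], [0.25]])            # RxQ input, R=2, Q=1
e = np.array([[1.0], [-1.0], [0.0]])     # SxQ error, S=3 neurons

# dW = e * p'  (SxR): row is +p' where e=+1, -p' where e=-1, zeros where e=0
dW = e @ p.T
print(dW)  # rows: [0.5, 0.25], [-0.5, -0.25], [0., 0.]
```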

After training, plot the input vectors with their targets and overlay the learned decision boundary:

>> plotpv(P,T)
>> plotpc(net.iw{1,1},net.b{1})
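Outside MATLAB, what plotpc draws can be reproduced by computing the line w1*x1 + w2*x2 + b = 0. The weights below are an assumption: w = [2 2], b = -4 is one solution the training loop above can reach, used here only for illustration:

```python
import numpy as np

w = np.array([2.0, 2.0])   # assumed trained weights
b = -4.0                   # assumed trained bias

# two points on the decision boundary w[0]*x1 + w[1]*x2 + b = 0
x1 = np.array([0.0, 1.5])
x2 = -(w[0] * x1 + b) / w[1]
print(np.column_stack([x1, x2]))  # line to draw through the input plane

# classify the four distinct AND inputs with this boundary
P = np.array([[0, 1, 0, 1], [0, 1, 1, 0]], dtype=float)
print((w @ P + b >= 0).astype(int))  # -> [0 1 0 0]
```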



 

 
