Neural Network Week 1 & Week 2


1. Different types of neurons

  • Linear neurons
  • Binary threshold neurons
  • Rectified linear neurons
  • Sigmoid neurons
  • Stochastic binary neurons
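
A minimal sketch of how each neuron type maps its total input z = b + Σᵢ xᵢwᵢ to an output (my own Python/NumPy illustration; the course presents these as formulas, not code):

```python
import numpy as np

def linear(z):
    # Output equals the total input: y = z
    return z

def binary_threshold(z):
    # Output a 1 if the total input is non-negative, otherwise a 0
    return np.where(z >= 0, 1.0, 0.0)

def rectified_linear(z):
    # Linear above zero, zero below: y = max(z, 0)
    return np.maximum(z, 0.0)

def sigmoid(z):
    # Smooth, bounded output in (0, 1): y = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

def stochastic_binary(z, rng=np.random.default_rng()):
    # Treat the sigmoid output as the probability of emitting a 1
    return np.where(rng.random(np.shape(z)) < sigmoid(z), 1.0, 0.0)
```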

 

2. Reinforcement learning

Learn to select an action to maximize payoff.

– The goal in selecting each action is to maximize the expected sum
of the future rewards.
– We usually use a discount factor for delayed rewards so that we
don’t have to look too far into the future.

Reinforcement learning is difficult:

– The rewards are typically delayed, so it's hard to know where we
went wrong (or right).
– A scalar reward does not supply much information.
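
To make the discounted sum concrete, a minimal sketch (my own illustration; the reward sequence and γ = 0.9 are made up):

```python
def discounted_return(rewards, gamma=0.9):
    # Expected sum of future rewards, each discounted by gamma per
    # time step: G = sum over t of gamma**t * r_t
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# A reward two steps in the future contributes only gamma**2 of its value,
# so with gamma < 1 we don't have to look too far into the future.
print(discounted_return([0.0, 0.0, 1.0]))  # 0.81 with gamma = 0.9
```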

 

3. Main types of neural network architectures

  • Feed-forward  
    • The first layer is the input and the last layer is the output
    • They compute a series of transformations that change the similarities between cases
    • The activities of the neurons in each layer are a non-linear function of the activities in the layer below
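
A minimal sketch of such a layer-by-layer computation (the layer sizes and the sigmoid non-linearity here are my assumptions, not specified in the notes):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feed_forward(x, weights, biases):
    # The activities in each layer are a non-linear function of the
    # activities in the layer below.
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a

rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]  # 3 -> 4 -> 2 units
biases = [np.zeros(4), np.zeros(2)]
print(feed_forward(np.array([1.0, 0.5, -0.5]), weights, biases))
```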

  

  • Recurrent
    • They have directed cycles in their connection graph
    • They have complicated dynamics
    • It is a very natural way to model sequential data
      • They are equivalent to very deep nets with one hidden layer per time slice
      • They use the same weights at every time slice and they get input at every time slice.
    • They have the ability to remember information in their hidden state
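
A minimal sketch of those last two points (the tanh non-linearity and the sizes are my assumptions):

```python
import numpy as np

def rnn_forward(inputs, W_xh, W_hh, h0):
    # The same two weight matrices are reused at every time slice; the
    # hidden state h is what lets the network remember information.
    h = h0
    for x in inputs:                      # one input per time slice
        h = np.tanh(W_xh @ x + W_hh @ h)
    return h

rng = np.random.default_rng(1)
W_xh = rng.normal(size=(5, 3))            # input -> hidden
W_hh = rng.normal(size=(5, 5))            # hidden -> hidden (the directed cycle)
inputs = [rng.normal(size=3) for _ in range(4)]
print(rnn_forward(inputs, W_xh, W_hh, np.zeros(5)))
```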

      

     

  • Symmetrically connected networks
    • They are like recurrent networks, but the connections between units are symmetrical (same weights in both directions)
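
The constraint amounts to one line of NumPy (my own illustration; zeroing the self-connections is an extra assumption beyond the notes):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
W = (A + A.T) / 2.0          # symmetrize: same weight in both directions
np.fill_diagonal(W, 0.0)     # no self-connections (extra assumption)
assert np.allclose(W, W.T)   # w_ij == w_ji for every pair of units
```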

 

4. Perceptrons

  • Add an extra component with value 1 to each input vector. The “bias” weight on this component is minus the threshold. Now we can forget the threshold.
  • Pick training cases using any policy that ensures that every training case will keep getting picked.
    • If the output unit is correct, leave its weights alone
    • If the output unit incorrectly outputs a 1, subtract the input vector from the weight vector
    • If the output unit incorrectly outputs a zero, add the input vector to the weight vector.
  • This is guaranteed to find a set of weights that gets the right answer for all the training cases, if any such set exists (see the sketch below).
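
The procedure above maps almost line for line onto code. A minimal sketch (the AND toy data is hypothetical, chosen because a separating weight vector exists):

```python
import numpy as np

def train_perceptron(X, y, epochs=100):
    # Add the extra component with value 1; its weight acts as minus
    # the threshold, so the threshold can be forgotten.
    X = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, y):       # every case keeps getting picked
            out = 1 if w @ x >= 0 else 0
            if out == target:
                continue                  # correct: leave the weights alone
            elif out == 1:
                w -= x                    # incorrectly output a 1: subtract input
            else:
                w += x                    # incorrectly output a 0: add input
    return w

# Linearly separable toy data: the AND function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
print(train_perceptron(X, y))
```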

 

5. The limitations of Perceptrons

  • Once the hand-coded features have been determined, there are very strong limitations on what a perceptron can learn
  • The part of a perceptron that learns cannot learn to recognize patterns under transformations (such as translation) if those transformations form a group
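
A standard concrete case (not from these notes): XOR is not linearly separable, so the learning rule from section 4 can never get all four cases right, no matter how long it runs:

```python
import numpy as np
# Reuses train_perceptron from the sketch in section 4.

X_xor = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_xor = np.array([0, 1, 1, 0])
w = train_perceptron(X_xor, y_xor)

Xb = np.hstack([X_xor, np.ones((4, 1))])
preds = (Xb @ w >= 0).astype(int)
print(preds, "vs targets", y_xor)   # at least one prediction is always wrong
```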

 

Reposted from: https://www.cnblogs.com/climberclimb/p/7096778.html
