Neural Network Week 1 & Week 2


1. Different types of neurons

  • Linear neurons
  • Binary threshold neurons
  • Rectified linear neurons
  • Sigmoid neurons
  • Stochastic binary neurons
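
A quick sketch of what each neuron type computes, given total input z = b + Σᵢ xᵢwᵢ. The formulas follow the lecture; the NumPy code itself is my illustration:

```python
import numpy as np

def linear(z):
    # y = z : the output equals the total input
    return z

def binary_threshold(z):
    # Output 1 if the total input is at or above the threshold (0 after
    # folding the threshold into the bias), else 0
    return (z >= 0).astype(float)

def rectified_linear(z):
    # Linear above zero, zero otherwise
    return np.maximum(0.0, z)

def sigmoid(z):
    # Smooth, bounded output in (0, 1): 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

def stochastic_binary(z, rng=np.random.default_rng(0)):
    # Treat the sigmoid output as the probability of emitting a 1
    return (rng.random(z.shape) < sigmoid(z)).astype(float)
```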

 

2. Reinforcement learning

Learn to select an action to maximize payoff.

– The goal in selecting each action is to maximize the expected sum of the future rewards.
– We usually use a discount factor for delayed rewards so that we don’t have to look too far into the future.
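
As a worked illustration (the numbers and code are mine, not from the lecture): with discount factor γ the return is R = r₀ + γr₁ + γ²r₂ + …, so a reward 10 steps away at γ = 0.9 is worth only about a third of an immediate one.

```python
def discounted_return(rewards, gamma=0.9):
    # R = r_0 + gamma * r_1 + gamma^2 * r_2 + ...
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# A reward of 1 arriving 10 steps from now counts for 0.9**10 today
print(discounted_return([0] * 10 + [1]))  # ≈ 0.3487
```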

Reinforcement learning is difficult:

– The rewards are typically delayed, so it’s hard to know where we went wrong (or right).
– A scalar reward does not supply much information.

 

3. Main types of neural network architecture

  • Feed-forward  
    • The first layer is the input and the last layer is the output
    • They compute a series of transformations that change the similarities between cases
    • The activities of the neurons in each layer are a non-linear function of the activities in the layer below
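
Concretely, each layer applies y = f(Wx + b) to the activities of the layer below. A minimal sketch; the sigmoid non-linearity and the layer sizes here are my own choices:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feed_forward(x, layers):
    # layers is a list of (W, b) pairs; each layer's activities are a
    # non-linear function of the activities in the layer below
    for W, b in layers:
        x = sigmoid(W @ x + b)
    return x

rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 3)), np.zeros(4)),   # input (3) -> hidden (4)
          (rng.standard_normal((2, 4)), np.zeros(2))]   # hidden (4) -> output (2)
print(feed_forward(np.array([0.5, -1.0, 2.0]), layers))
```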

  

  • Recurrent
    • They have directed cycles in their connection graph
    • They have complicated dynamics
    • They are a very natural way to model sequential data
      • They are equivalent to very deep nets with one hidden layer per time slice
      • They use the same weights at every time slice and they get input at every time slice (see the sketch after this list)
    • They have the ability to remember information in their hidden state
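
A minimal sketch of that unrolling (the tanh non-linearity and the dimensions are my assumptions): the same weight matrices are reused at every time slice, and the hidden state carries remembered information forward.

```python
import numpy as np

def rnn_unroll(inputs, W_in, W_rec, b, h0):
    # W_in and W_rec are the SAME at every time slice, so the unrolled
    # net is a very deep net with shared weights; the hidden state h
    # carries information from one time slice to the next.
    h = h0
    states = []
    for x in inputs:                      # one input per time slice
        h = np.tanh(W_in @ x + W_rec @ h + b)
        states.append(h)
    return states

rng = np.random.default_rng(0)
W_in, W_rec, b = rng.standard_normal((3, 2)), rng.standard_normal((3, 3)), np.zeros(3)
states = rnn_unroll([rng.standard_normal(2) for _ in range(4)], W_in, W_rec, b, np.zeros(3))
print(len(states), states[-1].shape)      # 4 time slices, hidden size 3
```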

  • Symmetrically connected networks
    • They are like recurrent networks, but the connections between units are symmetrical (same weights in both directions)

 

4. Perceptrons

  • Add an extra component with value 1 to each input vector. The “bias” weight on this component is minus the threshold. Now we can forget the threshold.
  • Pick training cases using any policy that ensures that every training case will keep getting picked (the resulting procedure is sketched in code after this list).
    • If the output unit is correct, leave its weights alone
    • If the output unit incorrectly outputs a 1, subtract the input vector from the weight vector
    • If the output unit incorrectly outputs a zero, add the input vector to the weight vector.
  • This is guaranteed to find a set of weights that gets the right answer for all the training cases, if any such set exists.
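
A minimal sketch of the convergence procedure above; the data layout, the epoch cap, and the AND example are my additions:

```python
import numpy as np

def train_perceptron(X, t, epochs=100):
    # Add an extra component with value 1 to each input vector; its
    # weight acts as minus the threshold (the bias trick).
    X = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, t):       # every case keeps getting picked
            y = 1.0 if w @ x >= 0 else 0.0
            if y == target:
                continue                  # correct: leave the weights alone
            elif y == 1.0:
                w -= x                    # wrongly output 1: subtract the input
            else:
                w += x                    # wrongly output 0: add the input
    return w

# AND is linearly separable, so the procedure converges to correct weights
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)
w = train_perceptron(X, t)
print([1.0 if w @ np.append(x, 1.0) >= 0 else 0.0 for x in X])  # [0.0, 0.0, 0.0, 1.0]
```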

 

5. The limitations of Perceptrons

  • Once the hand-coded features have been determined, there are very strong limitations on what a perceptron can learn.
  • The part of a perceptron that learns (the weights) cannot learn to discriminate patterns under transformations, such as translation with wrap-around, if those transformations form a group (Minsky and Papert’s group invariance theorem).
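
To see the limitation concretely (XOR is my example, not from the notes): the XOR targets are not linearly separable, so no suitable weight vector exists, the convergence guarantee does not apply, and errors remain after any number of epochs.

```python
import numpy as np

# XOR is not linearly separable, so the learning procedure cycles forever.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)  # bias input appended
t = np.array([0.0, 1.0, 1.0, 0.0])
w = np.zeros(3)
for _ in range(1000):                     # cap the epochs; it never converges
    for x, target in zip(X, t):
        y = 1.0 if w @ x >= 0 else 0.0
        w += (target - y) * x             # same rule: +x on a false 0, -x on a false 1
errors = sum((1.0 if w @ x >= 0 else 0.0) != target for x, target in zip(X, t))
print(errors)                             # at least one case is always misclassified
```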

 

Reposted from: https://www.cnblogs.com/climberclimb/p/7096778.html
