Neural Networks: Brief Presentation and Notes on the Perceptron

Neural networks are the perfect candidate for simulating how biological organisms, e.g. humans, learn. Language comprehension, object and image detection, and problem-solving are only a small fraction of the enormous range of skills which humans can perform.


Neural networks try to simulate artificially all the tasks (and not only those) which I have mentioned previously. In this way, computers, and other devices in general, are able to perform incredible tasks.


For a better understanding of the lecture, let’s first compare biological neural networks and artificial neural networks. In particular, for a biological organism, we know that it is made up of:


  • Neurons, which are cells of the nervous system;
  • Axons and dendrites, which connect neurons to each other;
  • Synapses, which connect regions between axons and dendrites [Fig.1].

On the other hand, artificial neural networks are described by the following schema [Fig.2].


Fig.2 — Basic schema of an Artificial Neural Network.

Let’s examine it:


  • Neurons are, in general, the computation units of NNs;
  • Also, neurons are linked to each other by ‘weights’, whose biological equivalent is the synapse.

In an artificial NN, we have to distinguish between two classes of neurons; in fact, as you can see from the above image, there are:


  • As you can guess, input neurons (on the left) only act as injection points into the neural network, and therefore no computation is performed there;
  • Computation neurons, which perform the real calculations. In the above image, the computation node is the one at the end of the neural network.

When starting out in the field of Deep Learning and approaching NNs for the very first time, one architecture you are bound to encounter is the ‘Perceptron’.


Historically, the perceptron was first conceived by Warren McCulloch (neurophysiologist) and Walter Pitts (logician), and later Frank Rosenblatt resumed their studies and implemented the first perceptron. Initially, these implementations were achieved using hardware circuits rather than actual algorithms. The very first implementation was performed on the Mark I Perceptron, a huge computing machine [Fig.3].


Fig.3 — Mark I computing machine (1958). Hardware circuits were required to implement the perceptron algorithm. The machine had an array of 400 photocells, connected to certain neurons, and weights were encoded in potentiometers.

Apart from the historical background, the perceptron was good at solving tasks such as simple binary classification where the patterns were linearly separable. In more complex situations the machine, and therefore the algorithm, was not able to learn the many classes of patterns that involve nonlinearity.


For a better understanding of the discussion, let’s examine how the perceptron works internally. The perceptron receives a set of inputs, and each input is multiplied by a weight. A weighted sum is then performed, and the result passes through an activation function, which, in the case of the perceptron, is the step function. If the result of the weighted sum surpasses a certain threshold, the perceptron lets the signal pass; otherwise, it does not. Mathematically, we have the following:


z = w₁x₁ + w₂x₂ + … + wₙxₙ = Σᵢ wᵢxᵢ

Remember that the above process happens before the step function. Now, the linear combination is passed to the step function, which determines, as just said, whether the perceptron will let the signal pass. The step function can be represented by the following equation:


φ(z) = 1 if z ≥ θ, otherwise φ(z) = 0

Fig.4 — Architecture of the perceptron. The Greek letter ‘φ’ (phi) represents the step activation function.
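The forward computation just described — a weighted sum of the inputs followed by the step activation — can be sketched in a few lines of plain Python. The weights and the threshold θ below are assumed example values, chosen only for illustration:

```python
# Perceptron forward pass: weighted sum of inputs, then step activation.

def weighted_sum(inputs, weights):
    # z = w1*x1 + w2*x2 + ... + wn*xn
    return sum(w * x for w, x in zip(weights, inputs))

def step(z, threshold=0.0):
    # Step activation: fire (1) only if z reaches the threshold theta.
    return 1 if z >= threshold else 0

def perceptron(inputs, weights, threshold=0.0):
    return step(weighted_sum(inputs, weights), threshold)

# Example with two inputs and hand-picked weights:
# z = 0.4*1.0 + 0.6*0.5 = 0.7, which reaches the threshold 0.5, so the output is 1.
print(perceptron([1.0, 0.5], [0.4, 0.6], threshold=0.5))
```

Note that the threshold can equivalently be folded into the weighted sum as a bias term, which is how most implementations write it.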

This is the end, for the moment. Next, I would like to present how to implement a simple Perceptron architecture with Python.

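As a small preview, a minimal sketch of such an implementation might look like the following. It trains a perceptron with the classic perceptron learning rule on the AND function — a linearly separable problem of exactly the kind the perceptron can solve; the learning rate and number of epochs are assumed example values:

```python
# Minimal perceptron trained with the classic learning rule:
#   w <- w + lr * (target - prediction) * x

def predict(x, weights, bias):
    # Weighted sum plus bias, followed by the step activation.
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if z >= 0 else 0

def train(samples, n_inputs, lr=0.1, epochs=20):
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(x, weights, bias)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# AND is linearly separable, so the perceptron converges on it.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(data, n_inputs=2)
for x, target in data:
    print(x, predict(x, weights, bias))  # each prediction matches its target
```

By contrast, running the same sketch on XOR (which is not linearly separable) would never converge — the limitation discussed above.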

Thank you for being here

I would really appreciate your feedback, and also any suggestions. For more topics or more detailed content, don’t hesitate to leave a comment or email me.


Translated from: https://medium.com/@riccio.christian.21/neural-networks-brief-presentation-and-notes-on-the-perceptron-744de8d70c3c
