http://computing.dcu.ie/~humphrys/Notes/Neural/sigmoid.html
“
The sigmoid function is a good threshold function:
continuous and smooth,
strictly monotonic,
and symmetric about the point (0, 0.5).
It is a good approximation to the threshold function

f(x) = 1,  x > δ
f(x) = 0,  x < -δ

Its derivative f'(x) = f(x)·[1 - f(x)], which saves computation time.
f(x) = 1/[1 + e^(-x)]. Its graph is shown above.
If we write x = a·r, where a is a steepness coefficient, then for sufficiently large a the graph approximates this threshold function arbitrarily closely.
For the most detailed explanation, see:
http://computing.dcu.ie/~humphrys/Notes/Neural/sigmoid.html
” (quoted from http://blog.youkuaiyun.com/pennyliang/article/details/1482654)
Continuous Output - The sigmoid function
Given Summed Input:
x = Σi wi xi   (the weighted sum of the inputs to the node)
Instead of a threshold, and fire/not fire,
we could have continuous output y according to the sigmoid function:
y = ς(x) = 1 / (1 + e^(-x))
Note e and its properties.
As x goes to minus infinity, y goes to 0 (tends not to fire).
As x goes to infinity, y goes to 1 (tends to fire):
At x=0, y=1/2
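These limits are easy to check numerically. A minimal Python sketch (the helper name sigmoid is mine, not from the notes):

    import math

    def sigmoid(x):
        # y = 1 / (1 + e^(-x))
        return 1.0 / (1.0 + math.exp(-x))

    print(sigmoid(-10.0))  # ~0.000045: large negative input, tends not to fire
    print(sigmoid(0.0))    # 0.5: the midpoint
    print(sigmoid(10.0))   # ~0.999955: large positive input, tends to fire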

More threshold-like
We can make this more and more threshold-like, or step-like, by increasing the weights on the links, and so increasing the summed input:
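Numerically (a small sketch of my own; the weights 1, 5, 20 are arbitrary examples), sampling ς(wx) around zero shows the transition sharpening as w grows:

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    for w in (1, 5, 20):
        # Sample ς(w*x) on either side of zero.
        ys = [round(sigmoid(w * x), 3) for x in (-1.0, -0.2, 0.0, 0.2, 1.0)]
        print(f"w={w:2d}: {ys}")
    # w=20 is already close to a step: ~0 below zero, ~1 above, 0.5 at zero.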

More linear
Q. How do we make it less step-like (more linear)?
For any non-zero w, no matter how close to 0, ς(wx) will eventually be asymptotic to the lines y=0 and y=1.

Is this linear? Let's change the scale:

This is exactly the same function.
So it is not actually linear, but note that within the range -6 to 6 it closely approximates a linear function with non-zero slope.
If x will always be within that range, then for all practical purposes we have linear output with slope.
Or try this:

Is this linear? Let's change the scale:

This is exactly the same function.
Approximation of a linear function with slope
In practice, x will always be within some range.
So we can always get, within that range, an approximation of many different linear functions with slope.
e.g. Given x will be from -30 to 30:
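For example, the tangent line at zero is y = 1/2 + (a/4)x, since by the derivative property quoted at the top, dy/dx = y(1-y)·a = (1/2)(1/2)·a at x = 0. A sketch of my own (the a values are arbitrary) measuring how far ς(ax) strays from that line over -30 to 30:

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    for a in (0.01, 0.05, 0.1):
        xs = [x / 10.0 for x in range(-300, 301)]  # grid over [-30, 30]
        err = max(abs(sigmoid(a * x) - (0.5 + (a / 4.0) * x)) for x in xs)
        print(f"a={a}: max gap from the line 1/2 + (a/4)x is {err:.5f}")
    # The smaller a is, the closer ς(ax) stays to the line across the whole range.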





Approximation of any linear function so long as y stays in [0,1]
And centred on zero. To centre it somewhere other than zero, see below.
Linear y=1/2
The only way we can make ς(wx) exactly linear is to set w=0, then y = constant 1/2 for all x.
Change sign
We can also, by changing the sign of the weights, make large positive actual input lead to large negative summed input and hence no fire, and large negative actual input lead to fire.

Not centred on zero
This is of course a threshold-like function still centred on zero. To centre it on any threshold we use:
y = ς(x-t)
where t is the threshold for this node. This threshold value is something that is learnt, along with the weights.

The "threshold" is now the centre point of the curve, rather than an all-or-nothing value.
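A quick numeric sketch of centring (t = 2 is an arbitrary example threshold):

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    t = 2.0  # learnt threshold for this node
    for x in (0.0, 1.0, 2.0, 3.0, 4.0):
        print(f"x={x}: sigmoid(x - t) = {sigmoid(x - t):.3f}")
    # The curve now crosses y = 0.5 at x = t = 2 instead of at x = 0.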
ς(ax+b)
General case: use ς(ax+b)
Can we have linear output?
Can y be linear? Not if it has a slope: y must stay between 0 and 1.
It can be the linear constant y=c, for c between 0 and 1. We already saw y=1/2. Can we have other y=c?
By setting a=0, y=ς(b) constant for all x
By varying b, we can have constant output y=c for any c between 0 and 1.
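In fact we can solve for the b that gives any particular c (a standard inversion, not spelled out in the notes): ς(b) = c means 1/(1 + e^(-b)) = c, so e^(-b) = (1-c)/c, giving

b = ln( c / (1-c) )

e.g. for c = 0.9, b = ln 9 ≈ 2.197, and indeed ς(2.197) ≈ 0.9.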
Reminder - differentiation rules
d/dx (fg) = f (dg/dx) + g (df/dx)
d/dx (f/g) = ( g (df/dx) - f (dg/dx) ) / g^2
Properties of the sigmoid function
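For reference, here is a worked derivation of the slope formula, using the quotient rule above:

y = 1 / (1 + e^(-x))
dy/dx = e^(-x) / (1 + e^(-x))^2          [quotient rule, with f = 1, g = 1 + e^(-x)]
      = [1/(1+e^(-x))] * [e^(-x)/(1+e^(-x))]
      = y (1 - y)                        since 1 - y = e^(-x)/(1+e^(-x))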

Max/min value of slope
Slope = y (1-y)
The slope is greatest where? And least where?
To prove this, differentiate the slope with respect to y and look for where it equals 0:
d/dy ( y (1-y) )
= y (-1) + (1-y) 1
= -y + 1 -y
= 1 - 2y
= 0 for y = 1/2
This is a maximum. There is no minimum: y(1-y) only approaches 0 as y approaches 0 or 1, i.e. as x goes to plus or minus infinity.
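A finite-difference check (my sketch; the step h = 1e-5 is arbitrary) confirms both formulas: the slope matches y(1-y) everywhere, peaks at 0.25 where y = 1/2 (x = 0), and only approaches 0 in the tails:

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    h = 1e-5
    for x in (-4.0, -2.0, 0.0, 2.0, 4.0):
        slope = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)  # numerical dy/dx
        y = sigmoid(x)
        print(f"x={x:+.1f}: slope={slope:.4f}  y(1-y)={y * (1 - y):.4f}")
    # Maximum 0.2500 at x=0; the slope shrinks toward 0 but never reaches it.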

Slope of ς(ax+b)
For the general case:
y = ς(ax+b)
a positive or negative, fraction or multiple
b positive or negative
y = ς(z) where z = ax+b
dy/dx = dy/dz dz/dx
= y(1-y) a
if a positive, all slopes are positive, steepest slope (highest positive slope) is at y = 1/2
if a negative, all slopes are negative, steepest slope (lowest negative slope) is at y = 1/2
i.e. the slope takes a different value, but is still steepest at y = 1/2.
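The same check works for the general case (a = -3, b = 1 are arbitrary example values; note y = 1/2 at x = -b/a):

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    a, b, h = -3.0, 1.0, 1e-5
    for x in (-1.0, 0.0, 1.0 / 3.0, 1.0):  # y = 1/2 at x = -b/a = 1/3
        y = sigmoid(a * x + b)
        slope = (sigmoid(a * (x + h) + b) - sigmoid(a * (x - h) + b)) / (2 * h)
        print(f"x={x:+.3f}: slope={slope:.4f}  a*y*(1-y)={a * y * (1 - y):.4f}")
    # All slopes are negative; the steepest, a/4 = -0.75, occurs where y = 1/2.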
This article explains the properties of the sigmoid function in detail: its role as a good threshold function, its continuity and smoothness, and how adjusting its parameters makes it more threshold-like or keeps it approximately linear. It covers how the sigmoid gives a neuron continuous output, how changing the weights yields near-linear output or smoother transitions, and how shifting the centre point creates sigmoid functions with different thresholds.