Implementing a Neural Network (MLP) from Scratch

This post contains my notes from Andrej Karpathy's Neural Networks: Zero to Hero series. I wrote it to organize my own thinking, and I hope it can help others as well.

Contents

How to Understand Value (Appendix)

Building the Value Class

Forward Pass

Visualizing the Computation Graph

Backward Pass

Completing the Value Class

Complete Code for the Value Class

Building the Neuron Class

Building the Layer Class

Building the MLP Class

Complete Code for the Neuron, Layer, and MLP Classes

Training the MLP

One Training Step

Complete Training Code


 

Source video: The spelled-out intro to neural networks and backpropagation: building micrograd - YouTube https://www.youtube.com/watch?v=VMj-3S1tku0

GitHub source code: karpathy/micrograd: A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API https://github.com/karpathy/micrograd

Before anything else, we need a methodology: know exactly what we are trying to do. Fix the goal --> break it down level by level --> find the smallest indivisible element --> implement that element --> use it as a building block and work back up until the sub-goal is done --> fix the next goal, and so on.

Now we can start building the MLP following this methodology.

Clearly, we want to build an MLP (multilayer perceptron). As shown in the figure, the network has two layers (4-->5, 5-->1). (Newcomers may find this confusing: we count the connections between two columns of circles as one layer, which makes the structure easier to understand and to build later.)

To build the MLP in the figure, we first need to build the two layers (4-->5, 5-->1); to build a layer, we need to build the circles (neurons). But for the actual forward and backward passes we also need operators for the most basic computations (the w1*x1 + w2*x2 + ... + b inside each neuron). In Andrej Karpathy's video this is done by building a Value class, the counterpart of a tensor in torch.

Good, the goals are now fixed: MLP --> Layer --> Neuron --> Value.

How to Understand Value (Appendix)

In everyday life we take basic arithmetic for granted and forget that digits and operators are the locally smallest elements we constructed for computation. For matrix multiplication we construct vectors and matrices; for integrals we construct limits; to describe a mapping between two variables we construct functions. So the answer is obvious: to describe or build a higher-level operation or rule, we usually first construct its smallest unit. For the sums and products of the forward pass and the gradients (derivatives) of the backward pass, the Value class is that smallest unit, the core of automatic differentiation and of training the network: every parameter and intermediate result of the MLP uses it to track and manage gradients. Without it, the MLP could not differentiate or train automatically.

Building the Value Class

Create a Value class as the basic building block for computation (it will implement both the forward and the backward pass):

Forward Pass

Before writing the Value class for the forward pass, think of each Value as a node in a linked structure (this makes chained operations possible, and chained operations are essential in an MLP). The initializer holds data (the stored value), grad (the gradient, used later to update the data via gradient descent), and label (a tag for the node):

class Value:
    def __init__(self, data, label = ""):
        self.data = data
        self.grad = 0
        self.label = label

This only gives us isolated nodes and does not connect them. To connect nodes, and because chained computation is a hard requirement, we add two arithmetic methods to the Value class. We use magic methods (special methods) so they trigger automatically (computing a + b automatically calls __add__). The first line of each method (other = other if isinstance(other, Value) else Value(other)) makes sure the other operand is also a Value.

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other), "+")

        return out
    
    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other), "*")
 
        return out

We also extend Value to record each result's predecessors via _children, and add _op to record the operator. Note that _prev is initialized with set() every time, so each node gets its own set; otherwise all nodes without predecessors would end up sharing the same empty set.

class Value:
    def __init__(self, data, _children = (), _op = "", label = ""):
        self.data = data
        self.grad = 0
        self._prev = set(_children)  # each new Value gets its own _prev set at its own memory address
        self._op = _op
        self.label = label

Now we can do simple additions and multiplications:

w1 = Value(2.0, label='w1')    
w2 = Value(3.0, label='w2')
x1 = Value(5.0, label='x1')    
x2 = Value(6.0, label='x2')
b = Value(10.0, label='b')

m = w1*x1; m.label = 'm'
n = w2*x2; n.label = 'n'
e = m + n; e.label = 'e' 
print(e)
draw_dot(e)

# Value(data=28.0, grad=0)

Visualizing the Computation Graph

Following the original video, the code below visualizes the computation graph (the draw_dot call used above). Before adding it, download the official Windows installer (https://graphviz.org/download/), e.g. graphviz-12.0.0 (64-bit) EXE installer, and tick "Add to system PATH" during installation (the wizard offers this option). After installing, restart VS Code and the terminal, and test the setup with !dot -V or dot -V. Then install the Python package from cmd:

# in a terminal: pip install graphviz
import os
os.environ["PATH"] += os.pathsep + r"replace this with the absolute path to your Graphviz\bin"
from graphviz import Digraph

def trace(root):
    """
    构建图中所有节点和边的集合。
    """
    nodes, edges = set(), set()
    def build(v):
        if v not in nodes:
            nodes.add(v)
            for child in v._prev:
                edges.add((child, v))
                build(child)
    build(root)
    return nodes, edges

def draw_dot(root):
    """
    使用 graphviz 绘制计算图。
    """
    from graphviz import Digraph
    dot = Digraph(format='svg', graph_attr={'rankdir': 'LR'})  # LR = 从左到右

    nodes, edges = trace(root)
    for n in nodes:
        uid = str(id(n))
        # create a record-shaped node showing label, data and grad
        dot.node(name=uid, label="{ %s | data %.4f | grad %.4f }" % (getattr(n, 'label', ''), n.data, n.grad), shape='record')
        if n._op:
            # if this value is the result of an operation, create an op node
            dot.node(name=uid + n._op, label=n._op)
            # and connect the op node to it
            dot.edge(uid + n._op, uid)

    for n1, n2 in edges:
        # connect n1 to the op node of n2
        dot.edge(str(id(n1)), str(id(n2)) + n2._op)

    return dot

Backward Pass

So far the Value class covers the forward pass; now we implement the backward pass (computing gradients). Intuitively, the gradient is the derivative: how much the output grows when an input grows, i.e. how much each value contributes to the final result. For example, with y = 4a + b, every time a increases by 1, y grows by 4, so dy/da = 4; every time b increases by 1, y grows by 1, so dy/db = 1.
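
A quick numeric check of this intuition (a small sketch with made-up numbers, independent of the Value class): nudge one input by a tiny h and see how much y moves.

def f(a, b):
    return 4*a + b

a, b, h = 3.0, 2.0, 0.0001
print((f(a + h, b) - f(a, b)) / h)   # ≈ 4.0, dy/da
print((f(a, b + h) - f(a, b)) / h)   # ≈ 1.0, dy/db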

Backpropagating through a node means computing the gradients of its predecessors, and each operation has its own derivative rule. So in __init__ we set a default backward function with lambda: None, and each arithmetic method overrides it before returning:

self._backward = lambda: None

By the differentiation rules, for addition each predecessor's gradient is 1:

def _backward():
    self.grad = 1
    other.grad = 1

For multiplication, each predecessor's gradient is the other operand's value (just swap them):

def _backward():
    self.grad = other.data
    other.grad = self.data

Notes: 1. Chain rule: every gradient must be multiplied by the gradient of its successor. 2. Use += so that a value reused several times accumulates its gradient. 3. After writing each backward (derivative) function, reassign _backward so it overrides the default.

The full code so far:

class Value:
    def __init__(self, data, _children = (), _op = "", label = ""):
        self.data = data
        self.grad = 0
        self._prev = set(_children)  # each new Value gets its own _prev set at its own memory address
        self._op = _op
        self._backward = lambda: None
        self.label = label
    
    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other), "+")

        def _backward():
            self.grad += out.grad   # += handles a value being reused several times
            other.grad += out.grad  # 1 * out.grad shortened to just out.grad
        out._backward = _backward   # override _backward, i.e. how this node's gradients are computed
        return out
    
    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other), "*")

        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward    
        return out

And that is backpropagation! When running it, remember to initialize the root node's gradient to 1:

w1 = Value(2.0, label='w1')    
w2 = Value(3.0, label='w2')
x1 = Value(5.0, label='x1')    
x2 = Value(6.0, label='x2')
b = Value(10.0, label='b')

m = w1*x1; m.label = 'm'
n = w2*x2; n.label = 'n'
e = m + n; e.label = 'e' 
print(e)
e.grad = 1
e._backward()
n._backward()
m._backward()

draw_dot(e)

#Value(data=28.0, grad=0)
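
The += from the note above matters as soon as a node is reused. A minimal sketch (with made-up values) where the same Value feeds both sides of an addition:

a = Value(3.0, label='a')
c = a + a; c.label = 'c'   # a is used twice
c.grad = 1
c._backward()
print(a.grad)   # 2: both contributions accumulate; with plain = the second write would overwrite the first and leave 1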

This works, but we had to call _backward node by node. We want a list ordered from the output back to the inputs, so we can simply iterate over it and call backward on each node. A topological sort of the graph gives us exactly that:

topo = []
visited = set()
def build_topo(r):
    if r not in visited:
        visited.add(r)
        for i in r._prev:
            build_topo(i)
        topo.append(r)
build_topo(e)
print(topo)

# [Value(data=2.0, grad=5.0), Value(data=5.0, grad=2.0), Value(data=10.0, grad=1), Value(data=6.0, grad=3.0), Value(data=3.0, grad=6.0), Value(data=18.0, grad=1), Value(data=28.0, grad=1)]

This gives us the list; afterwards we just iterate over it (remember to reverse it first):

e.grad = 1
for node in reversed(topo):
    node._backward()

Now wrap this up as a method inside the Value class:

    def backward(self):
        topo = []
        visited = set()
        def build_topo(r):
            if r not in visited:
                visited.add(r)
                for i in r._prev:
                    build_topo(i)
                topo.append(r)
        build_topo(self)
        self.grad = 1
        for node in reversed(topo):
            node._backward()
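
With this method in place, the manual _backward calls above collapse into a single call on the root. A small sketch, assuming the Value objects are re-created with the updated class:

w1 = Value(2.0, label='w1'); x1 = Value(5.0, label='x1')
w2 = Value(3.0, label='w2'); x2 = Value(6.0, label='x2')
e = w1*x1 + w2*x2; e.label = 'e'
e.backward()               # sets e.grad = 1 and calls every _backward in reverse topological order
print(w1.grad, x1.grad)    # 5.0 2.0, same as the manual run above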

Completing the Value Class

Next we gradually complete the Value class.

First, add two activation functions following the same pattern:

    def tanh(self):
        t = self.data
        out = Value((math.exp(2*t) - 1) / (math.exp(2*t) + 1), (self,), 'tanh')

        def _backward():
            p = 1 - out.data**2
            self.grad += p * out.grad
        out._backward = _backward
        return out

    def relu(self):
        out = Value(0 if self.data < 0 else self.data, (self,) , 'ReLU')
        
        def _backward():
            self.grad += (out.data > 0) * out.grad  # True counts as 1, False as 0
        out._backward = _backward
        return out
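
A quick way to sanity-check such hand-written derivatives is a finite-difference comparison. This is a small sketch I am adding here (not from the original video), assuming the class with tanh is already defined:

import math

x = Value(0.7)
y = x.tanh()
y.grad = 1
y._backward()              # analytic gradient from our backward function

h = 1e-6
f = lambda v: (math.exp(2*v) - 1) / (math.exp(2*v) + 1)
print(x.grad, (f(0.7 + h) - f(0.7)) / h)   # both ≈ 0.6347, i.e. 1 - tanh(0.7)**2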

Next, fix a limitation: value + other works (e.g. Value(3) + 4 gives 7), but other + value (e.g. 4 + Value(3)) raises an error. The following methods solve this:

    def __radd__(self, other):
        return self + other
    
    def __rmul__(self, other):
        return self * other

__radd__ is the "reflected addition" method. When you write 2 + Value(3), Python first tries int's __add__, which does not know how to add a Value, so it automatically falls back to Value's __radd__. __rmul__ works the same way.
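
A quick check that both orders now work:

print((Value(3.0) + 4).data)   # 7.0
print((4 + Value(3.0)).data)   # 7.0, handled by __radd__
print((2 * Value(3.0)).data)   # 6.0, handled by __rmul__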

Complete Code for the Value Class

Adding __pow__, __truediv__ and the rest in the same way, here is the complete Value class:



import math

class Value:
    def __init__(self, data, _children = (), _op = "", label = ""):
        self.data = data
        self.grad = 0
        self._prev = set(_children)  # each new Value gets its own _prev set at its own memory address
        self._op = _op
        self._backward = lambda: None
        self.label = label
    
    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other), "+")

        def _backward():
            self.grad += out.grad   # += handles a value being reused several times
            other.grad += out.grad
        out._backward = _backward   # override _backward, i.e. how this node's gradients are computed
        return out
    
    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other), "*")

        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward    
        return out
    
    def __pow__(self, other):
        assert isinstance(other, (int, float)), 'only int or float powers are supported'
        out = Value(self.data ** other, (self,), f"**{other}")

        def _backward():
            self.grad += (other * self.data ** (other - 1)) * out.grad
        out._backward = _backward
        return out

    def tanh(self):
        t = self.data
        out = Value((math.exp(2*t) - 1) / (math.exp(2*t) + 1), (self,), 'tanh')

        def _backward():
            p = 1 - out.data**2
            self.grad += p * out.grad
        out._backward = _backward
        return out
    
    def relu(self):
        out = Value(0 if self.data < 0 else self.data, (self,) , 'ReLU')
        
        def _backward():
            self.grad += (out.data > 0) * out.grad  # True counts as 1, False as 0
        out._backward = _backward
        return out

    def exp(self):
        out = Value(math.exp(self.data), (self,), 'exp')

        def _backward():
            self.grad += out.data * out.grad 
        out._backward = _backward
        return out

    def backward(self):  # build the topological order, then backprop through it
        topo = []
        visited = set()
        def build_topo(r):
            if r not in visited:
                visited.add(r)
                for i in r._prev:
                    build_topo(i)
                topo.append(r)
        build_topo(self)
        self.grad = 1
        for node in reversed(topo):
            node._backward()
    
    def __radd__(self, other):
        return self + other
    
    def __rmul__(self, other):
        return self * other
    
    def __sub__(self, other):
        return self + (-other)
    
    def __rsub__(self, other):
        return other + (-self)
    
    def __neg__(self):
        return self * -1
    
    def __truediv__(self, other): # true division
        return self * other ** -1
    
    def __rtruediv__(self, other):
        return other * self ** -1
    
    def __repr__(self):
        return f"Value(data={self.data}, grad={self.grad})"

Building the Neuron Class

Now the real MLP journey begins. A neuron takes a vector x and returns a single value (a Value) computed as w1*x1 + w2*x2 + ... + b. So __init__ needs nin to record the dimension of x, a list w for the weights, and b for the bias:

class Neuron:
    def __init__(self, nin):
        self.nin = nin
        self.w = [Value(random.uniform(-1,1)) for _ in range(nin)]
        self.b = Value(random.uniform(-1,1))

Next, implement the computation with __call__ (it takes an x and returns a Value):

    def __call__(self, x):
        out = sum([wi*xi for wi, xi in zip(self.w, x)], self.b)
        return out.tanh()
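
A quick call with made-up inputs (random is needed for the weight initialization above):

import random

n = Neuron(3)
print(n([1.0, -2.0, 0.5]))   # a single Value whose data lies in (-1, 1) because of tanh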

Building the Layer Class

The Layer class follows the same idea, except it also needs nout to record the number of outputs (one output per neuron) and a neurons list that actually stores the Neuron objects:

class Layer:
    def __init__(self, nin , nout):
        self.nin = nin
        self.nout = nout
        self.neurons = [Neuron(nin) for _ in range(nout)]

Again, __call__ implements the computation (it takes an x and returns a list of Values, except when there is only one output, in which case it returns a single Value):

    def __call__(self, x):
        out = [n(x) for n in self.neurons]
        return out[0] if len(out)==1 else out
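
Two small calls show both return shapes:

print(Layer(3, 4)([1.0, -2.0, 0.5]))   # a list of 4 Values, one per neuron
print(Layer(3, 1)([1.0, -2.0, 0.5]))   # a single Value, not a one-element list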

Building the MLP Class

For the MLP class, on top of what the Neuron class needs, we add a list s that stores the structure of the MLP (e.g. [5, 1] means a two-layer MLP with 5 neurons in the first layer and 1 in the second):

class MLP:
    def __init__(self, nin , s):
        assert isinstance(s, list), "MLP第二个参数为list"
        self.nin = nin
        self.s = s
        stru = [nin] + s
        self.layers = [Layer(stru[i], stru[i+1]) for i in range(len(s))]
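
To make the pairing concrete, for the 4-->5-->1 network from the figure:

mlp = MLP(4, [5, 1])                            # stru = [4, 5, 1]
print(len(mlp.layers))                          # 2
print(mlp.layers[0].nin, mlp.layers[0].nout)    # 4 5
print(mlp.layers[1].nin, mlp.layers[1].nout)    # 5 1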

Use __call__ for the computation: take an x, pass it through each layer, reassign the result to x, and repeat:

    def __call__(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

You can add a __repr__ to each class to define its "official" string representation. Now let's test that each class is defined correctly:

Neuron(3)

# Neuron(nin=3,w=[Value(data=0.4124726384279416, grad=0), Value(data=-0.18835672029620554, grad=0), Value(data=-0.9790898870463209, grad=0)],b=Value(data=-0.057245583840508596, grad=0))

Layer(5,1)

# Layer(nin=5,nout=1,parameters=[Value(data=0.7841258321033981, grad=0), Value(data=0.518257165154357, grad=0), Value(data=-0.5988302036206061, grad=0), Value(data=0.027552880162802662, grad=0), Value(data=-0.6547561577496053, grad=0), Value(data=0.816683838551697, grad=0)])

MLP(4,[5,1])

# MLP(nin=4,structure=[5, 1],parameters=[Value(data=0.797556558513415, grad=0), Value(data=-0.6776131979981947, grad=0), Value(data=-0.19622573580288338, grad=0), Value(data=-0.6880446978687424, grad=0), Value(data=-0.40161913438110797, grad=0), Value(data=-0.24002310126593063, grad=0), Value(data=-0.2160221996442353, grad=0), Value(data=0.9573888465887856, grad=0), Value(data=0.2606741538617081, grad=0), Value(data=0.8045135030583179, grad=0), Value(data=-0.19909187808655804, grad=0), Value(data=-0.0692682131000335, grad=0), Value(data=-0.821807851650239, grad=0), Value(data=0.1735643565956777, grad=0), Value(data=0.20739630634517203, grad=0), Value(data=0.678546136749979, grad=0), Value(data=-0.6983513841517341, grad=0), Value(data=-0.6689455856865705, grad=0), Value(data=0.1729002990535169, grad=0), Value(data=-0.2339439022787302, grad=0), Value(data=-0.9837710640885986, grad=0), Value(data=0.14524646950113662, grad=0), Value(data=0.37830018975142354, grad=0), Value(data=0.35220073461450174, grad=0), Value(data=0.14104469516968798, grad=0), Value(data=0.11283932820513454, grad=0), Value(data=-0.7299008270043461, grad=0), Value(data=-0.7968116699269623, grad=0), Value(data=-0.3416750614234425, grad=0), Value(data=-0.4994492809173834, grad=0), Value(data=0.7154200572840859, grad=0)])

Complete Code for the Neuron, Layer, and MLP Classes

Each class also defines a parm method that returns all of its parameters.

import random
class Neuron:
    def __init__(self, nin):
        self.nin = nin
        self.w = [Value(random.uniform(-1,1)) for _ in range(nin)]
        self.b = Value(random.uniform(-1,1))

    def __call__(self, x):
        out = sum([wi*xi for wi, xi in zip(self.w, x)], self.b)
        return out.tanh()

    def parm(self):
        return self.w  + [self.b]
    
    def __repr__(self):
        return f"Neuron(nin={self.nin},w={self.w},b={self.b})"

class Layer:
    def __init__(self, nin , nout):
        self.nin = nin
        self.nout = nout
        self.neurons = [Neuron(nin) for _ in range(nout)]
    
    def __call__(self, x):
        out = [n(x) for n in self.neurons]
        return out[0] if len(out)==1 else out
    
    def parm(self):
        return [i for n in self.neurons for i in n.parm()]
    
    def __repr__(self):
        return f"Layer(nin={self.nin},nout={self.nout},parameters={self.parm()})"

class MLP:
    def __init__(self, nin , s):
        assert isinstance(s, list), "the second argument of MLP must be a list"
        self.nin = nin
        self.s = s
        stru = [nin] + s
        self.layers = [Layer(stru[i], stru[i+1]) for i in range(len(s))]
    
    def __call__(self, x):
        for layer in self.layers:
            x = layer(x)
        return x
    
    def parm(self):
        return [i for l in self.layers for i in l.parm()]
    
    def __repr__(self):
        return f"MLP(nin={self.nin},structure={self.s},parameters={self.parm()})"

Training the MLP

One Training Step

Build and instantiate an MLP:

mlp = MLP(4, [5,1])

Create a simple training set:

# training samples
train_data = [
    ([0.5, -1.2, 3.3, 1.2], 1),
    ([2.1, 0.0, -0.7, 3.1], -1),
    ([-1.5, 2.2, 0.8, 0.3], 1),
    ([0.0, -0.3, -2.2, -2.1], -1),
    ([1.7, 1.1, -0.9, 1.1], 1),
]

Run the full forward pass with mlp(x) to get the prediction y_p, and compute the loss with the mean squared error (MSE):

count = 0
losses = 0
for x, y in train_data:
    y_p = mlp(x)
    los = (y - y_p)**2
    losses += los
    count += 1
    print(f"Sample {count}:\ny:{y}\ny_p:{y_p.data}\nloss:{los.data}\n")
batch = 5
loss = losses / batch ; loss.label = "loss"
print(loss.data)

'''
Sample 1:
y:1
y_p:-0.8165779846227355
loss:3.2999555742159994

Sample 2:
y:-1
y_p:0.7382711293259453
loss:3.021586519048097

Sample 3:
y:1
y_p:-0.9637091809925296
loss:3.8561537475143512

Sample 4:
y:-1
y_p:-0.9302142252196743
loss:0.004870054361690337

Sample 5:
y:1
y_p:-0.65456196905066
loss:2.737575309428797

2.584028240913787
'''

The loss is large at this point, about 2.58. Backpropagate from the loss to compute the gradient of every parameter:

loss.backward()
print(mlp.parm())

'''
[Value(data=-0.3894505635865113, grad=-0.025643884940987027), Value(data=-0.6650010692141846, grad=-0.0375753581351412), Value(data=0.07484135592200758, grad=-0.2062084661424626), Value(data=0.9891703026974086, grad=0.030573172965406497), Value(data=-0.8901781083639004, grad=-0.09987439081641204), Value(data=0.2948109679135411, grad=-0.00070040043181512), Value(data=-0.679868944372743, grad=-0.009283393798267349), Value(data=0.7190229151719101, grad=-0.004663719892993183), Value(data=-0.544241038127312, grad=0.0053820916844830305), Value(data=-0.31435362070440886, grad=-0.004463964237787983), Value(data=-0.34772819422761003, grad=0.0030064256597981676), Value(data=-0.16881915927297508, grad=0.1356162529487852), Value(data=0.6184557965566015, grad=-0.0027907546854506957), Value(data=-0.14190474515881402, grad=-0.087501071780155), Value(data=0.41752399333886725, grad=0.05016646880040755), Value(data=-0.7261270757523548, grad=0.001773479645561243), Value(data=-0.5490890658227248, grad=-0.04124956111307208), Value(data=-0.6384105677180021, grad=-0.02125040731464126), Value(data=-0.14897869043673406, grad=0.022349571301584964), Value(data=0.6534070130050007, grad=-0.024747238906898037), Value(data=-0.31634193435450886, grad=-0.22709322478299027), Value(data=0.062349630571732595, grad=-0.14936886798724194), Value(data=-0.4728656462670071, grad=-0.16498819997719993), Value(data=0.8125671052329586, grad=-0.24087370544943867), Value(data=-0.0941618765578025, grad=-0.255498822192085), Value(data=0.8644090640717774, grad=0.443535688541981), Value(data=0.20543890685864352, grad=-0.13025245428635288), Value(data=-0.6913565776395298, grad=-0.24593786755656277), Value(data=0.14054950575932157, grad=0.2368868116795502), Value(data=0.6955353211824655, grad=0.2172043859617691), Value(data=-0.7515144635575346, grad=-0.35626137151113546)]
'''

Then update each parameter with its gradient (using a learning rate of 0.05).

Note: after updating the parameters, zero out (reset) the gradients.

for i in mlp.parm():
    i.data += -0.05 * i.grad
    i.grad = 0.0

That completes one training step; the loss drops from 2.58 to 2.49:

'''
Sample 1:
y:1
y_p:-0.7668767719938279
loss:3.121853527411329

Sample 2:
y:-1
y_p:0.7193135291001762
loss:2.9560390113469026

Sample 3:
y:1
y_p:-0.9541861497628663
loss:3.8188435079250156

Sample 4:
y:-1
y_p:-0.9255815494659797
loss:0.005538105779884426

Sample 5:
y:1
y_p:-0.6059758453707824
loss:2.5791584159143994

2.496286513675506
'''

Complete Training Code

Now run about a hundred training iterations with a for loop:

for j in range(100):
    count = 0
    losses = 0
    for x, y in train_data:
        y_p = mlp(x)
        los = (y - y_p)**2
        losses += los
        count += 1
        # print(f"Sample {count}:\n y:{y}, y_p:{y_p.data}, loss:{los.data}")
    batch = 5
    loss = losses / batch ; loss.label = "loss"
    print(loss.data)
    loss.backward()
    for i in mlp.parm():
        i.data += -0.01 * i.grad
        i.grad = 0.0
   
'''
0.08339710595347599
0.08285985650720021
0.08232838007261344
0.08180259670264946
0.08128242762639816
0.08076779523745936
0.08025862308196441
0.07975483584630084
0.07925635934457331
0.07876312050583255
0.07827504736110136
0.07779206903022615
0.07731411570857949
0.07684111865363855
0.07637301017146181
0.07590972360308607
0.07545119331086315
0.07499735466475604
0.0745481440286102
0.07410349874641824
0.07366335712859172
0.07322765843825439
0.07279634287757052
0.07236935157411954
0.07194662656732889
0.07152811079497433
0.07111374807975922
0.07070348311597861
0.07029726145627879
0.06989502949851836
0.06949673447273741
0.06910232442824148
0.06871174822080549
0.06832495550000232
0.06794189669666144
0.06756252301046058
0.0671867863976547
0.06681463955894502
0.06644603592749079
0.06608092965706745
0.06571927561037089
0.06536102934747197
0.06500614711442103
0.06465458583200481
0.0643063030846557
0.06396125710951449
0.06361940678564731
0.0632807116234165
0.06294513175400539
0.06261262791909776
0.06228316146071079
0.06195669431118124
0.06163318898330458
0.06131260856062627
0.06099491668788393
0.06068007756160004
0.06036805592082406
0.06005881703802221
0.059752326710114614
0.059448551249657804
0.05914745747617131
0.05884901270760762
0.05855318475196293
0.05825994189902797
0.057969252912277086
0.05768108702089386
0.05739541391193181
0.05711220372260825
0.05683142703273001
0.056553054857248666
0.056277058638944326
0.056003410241235445
0.055732081941113426
0.05546304642220012
0.055196276767926114
0.054931746454828484
0.05466942934596617
0.05440929968445048
0.05415133208709033
0.053895501538148705
0.05364178338321024
0.05339015332315675
0.05314058740825043
0.05289306203232111
0.05264755392705758
0.05240404015640057
0.05216249811103529
0.05192290550298262
0.05168524036028641
0.051449481021796355
0.05121560613204355
0.05098359463620843
0.05075342577517782
0.05052507908069162
0.05029853437057535
0.050073771744059026
0.04985077157717949
0.04962951451826483
0.049409981483500315
0.049192153652573636
'''

The predictions y_p are now close to the true values y, and the model is trained!

'''
Sample 1:
y:1
y_p:0.7765705296206408
loss:0.04992072823400096

Sample 2:
y:-1
y_p:-0.7016642140460425
loss:0.08900424118076554

Sample 3:
y:1
y_p:0.8929919541838711
loss:0.011450721869386738

Sample 4:
y:-1
y_p:-0.8552019219846224
loss:0.020966483396947372

Sample 5:
y:1
y_p:0.7288212994335876
loss:0.07353788764088794

0.048976012464397714
'''

 

 
