Meaning of each field in a Caffe training log

This article records the process of training the LeNet network with the Caffe framework, covering the initial training setup, how the loss changes over the iterations, and the rise in test accuracy. By looking at the network's output at different iteration counts, it shows how training progresses and what effect it has.

https://blog.youkuaiyun.com/dataningwei/article/details/77446841


I0821 09:53:35.929999 10308 solver.cpp:60] Solver scaffolding done.
I0821 09:53:35.929999 10308 caffe.cpp:252] Starting Optimization     ####### network training begins
I0821 09:53:35.929999 10308 solver.cpp:279] Solving LeNet
I0821 09:53:35.929999 10308 solver.cpp:280] Learning Rate Policy: multistep
I0821 09:53:35.930999 10308 solver.cpp:337] Iteration 0, Testing net (#0)                                                           #### Test(Iteration 0)
I0821 09:53:35.993999 10308 blocking_queue.cpp:50] Data layer prefetch queue empty
I0821 09:53:36.180999 10308 solver.cpp:404]     Test net output #0: accuracy = 0.1121                                           #### Test (iteration 0): network output #0, the accuracy value (determined by the network definition)
I0821 09:53:36.180999 10308 solver.cpp:404]     Test net output #1: loss = 2.30972 (* 1 = 2.30972 loss)     #### Test (iteration 0): network output #1, the loss value (determined by the network definition)
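The "Learning Rate Policy: multistep" line printed by the solver means the learning rate starts at base_lr and is multiplied by gamma each time the iteration count passes one of the stepvalue milestones in the solver prototxt. Below is a minimal Python sketch of that schedule: base_lr matches the "lr = 0.001" seen in this log, while gamma and the stepvalue list are hypothetical example settings, not the solver used here.

# Minimal sketch of Caffe's "multistep" learning-rate policy.
# base_lr matches the 0.001 printed in this log; gamma and stepvalues
# are hypothetical example settings.
def multistep_lr(iteration, base_lr=0.001, gamma=0.1, stepvalues=(5000, 8000)):
    # rate = base_lr * gamma^k, where k is how many stepvalue milestones
    # the current iteration has already passed
    steps_passed = sum(1 for s in stepvalues if iteration >= s)
    return base_lr * gamma ** steps_passed

print(multistep_lr(0))      # 0.001   -> matches "lr = 0.001" at iteration 0
print(multistep_lr(5000))   # 0.0001  -> dropped by gamma after the first milestone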

I0821 09:53:36.190999 10308 solver.cpp:228] Iteration 0, loss = 2.2891                                                                      #### Train (iteration 0): the network's loss value
I0821 09:53:36.190999 10308 solver.cpp:244]     Train net output #0: loss = 2.2891 (* 1 = 2.2891 loss)    #### Train (iteration 0): only one output value
I0821 09:53:36.190999 10308 sgd_solver.cpp:106] Iteration 0, lr = 0.001                                                                     #### Train (iteration 0)
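The "(* 1 = 2.2891 loss)" suffix shows the loss_weight applied to that output: the loss the solver reports is the sum of every loss output multiplied by its loss_weight, and LeNet has a single loss output with weight 1, so the weighted value equals the raw value. A short sketch of that sum follows; the second, auxiliary output is hypothetical and only illustrates how several weighted losses would combine.

# The reported training loss is the weighted sum of all loss outputs.
# LeNet in this log has one output with loss_weight = 1; the "aux_loss"
# entry is hypothetical, added only to show how several terms would sum.
train_outputs = [
    ("loss",     2.2891, 1.0),   # value and weight taken from the log line above
    ("aux_loss", 0.5,    0.1),   # hypothetical: would print "(* 0.1 = 0.05 loss)"
]
total_loss = sum(value * weight for _, value, weight in train_outputs)
print(total_loss)  # 2.3391 with the hypothetical extra term; 2.2891 for LeNet alone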

I0821 09:53:36.700999 10308 solver.cpp:228] Iteration 100, loss = 2.24716                                                                   #### Train (iteration 100)
I0821 09:53:36.700999 10308 solver.cpp:244]     Train net output #0: loss = 2.24716 (* 1 = 2.24716 loss)    #### Train (iteration 100)
I0821 09:53:36.700999 10308 sgd_solver.cpp:106] Iteration 100, lr = 0.001                                                                   #### Train (iteration 100)
I0821 09:53:37.225999 10308 solver.cpp:228] Iteration 200, loss = 2.08563
I0821 09:53:37.225999 10308 solver.cpp:244]     Train net output #0: loss = 2.08563 (* 1 = 2.08563 loss)
I0821 09:53:37.225999 10308 sgd_solver.cpp:106] Iteration 200, lr = 0.001
I0821 09:53:37.756000 10308 solver.cpp:228] Iteration 300, loss = 2.11631
I0821 09:53:37.756000 10308 solver.cpp:244]     Train net output #0: loss = 2.11631 (* 1 = 2.11631 loss)
I0821 09:53:37.756000 10308 sgd_solver.cpp:106] Iteration 300, lr = 0.001
I0821 09:53:38.286999 10308 solver.cpp:228] Iteration 400, loss = 1.89424
I0821 09:53:38.286999 10308 solver.cpp:244]     Train net output #0: loss = 1.89424 (* 1 = 1.89424 loss)
I0821 09:53:38.286999 10308 sgd_solver.cpp:106] Iteration 400, lr = 0.001
I0821 09:53:38.819999 10308 solver.cpp:337] Iteration 500, Testing net (#0)                                                             #### Test(Iteration 500)
I0821 09:53:39.069999 10308 solver.cpp:404]     Test net output #0: accuracy = 0.3232                                           #### Test(Iteration 500)
I0821 09:53:39.069999 10308 solver.cpp:404]     Test net output #1: loss = 1.87822 (* 1 = 1.87822 loss)     #### Test(Iteration 500)
I0821 09:53:39.072999 10308 solver.cpp:228] Iteration 500, loss = 1.94478
I0821 09:53:39.072999 10308 solver.cpp:244]     Train net output #0: loss = 1.94478 (* 1 = 1.94478 loss)
I0821 09:53:39.072999 10308 sgd_solver.cpp:106] Iteration 500, lr = 0.001
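Because every field above has a fixed textual form (solver.cpp:228 for the training loss, sgd_solver.cpp:106 for the learning rate, the "Test net output" lines for test accuracy and loss), the log can be scraped with a few regular expressions to plot training curves. A minimal sketch follows; "train.log" is an assumed file name for a captured copy of the log above.

import re

# Pull the fields discussed above out of a saved Caffe training log.
train_re = re.compile(r"solver\.cpp:228\] Iteration (\d+), loss = ([\d.]+)")
lr_re = re.compile(r"sgd_solver\.cpp:106\] Iteration (\d+), lr = ([\d.eE+-]+)")
test_re = re.compile(r"Test net output #(\d+): (\w+) = ([\d.]+)")

with open("train.log") as f:
    for line in f:
        m = train_re.search(line)
        if m:
            print("train iter %s: loss = %s" % m.groups())
            continue
        m = lr_re.search(line)
        if m:
            print("train iter %s: lr = %s" % m.groups())
            continue
        m = test_re.search(line)
        if m:
            print("test output #%s: %s = %s" % m.groups())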
