11gR2 New Feature: In-Memory Parallel Execution

Original post: http://www.killdb.com/?p=348

This feature was introduced in Oracle 11gR2, and its purpose is self-evident. Before 11gR2, that is, in 11gR1,
when you issued a statement with a /*+ parallel(16) */ hint, one of the following two situations could occur:

1. The SQL can run with reduced DOP (be downgraded) - I verified this behavior in 11gR1
2. The SQL can run in serial mode (be serialized)
* "ORA-12827: insufficient parallel query slaves available" (if PARALLEL_MIN_PERCENT was specified)
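As a quick illustration of the second case, here is a minimal sketch (reusing the ht02 test table from the examples below); when PARALLEL_MIN_PERCENT is set and fewer than that percentage of the requested slaves can be acquired, the statement fails rather than silently being serialized:
SQL> alter session set parallel_min_percent = 50;
SQL> SELECT /*+ parallel(16) */ count(*) from ht02;
-- If fewer than 8 slaves (50% of the requested 16) are available,
-- this raises ORA-12827: insufficient parallel query slaves available.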
In 11gR2, however, this changed completely. Let's first look at the newly introduced parallel-related parameters:
parallel_degree_policy
This parameter takes the values manual, auto, or limited; the default in 11gR2 is manual.
---manual
Disables automatic degree of parallelism, statement queuing, and in-memory parallel execution.
---auto
Enables automatic degree of parallelism, statement queuing, and in-memory parallel execution.
---limited
Statement queuing and in-memory parallel execution are disabled; automatic DOP is applied only to statements that access tables or indexes declared with a parallel degree greater than 1.
parallel_min_time_threshold
The minimum estimated execution time a statement must exceed before automatic DOP kicks in; in other words, it only matters when parallel_degree_policy is set to auto or limited. The default value is AUTO, which corresponds to 10 seconds.
parallel_degree_limit
This parameter takes the values CPU, IO, or an integer; the default is CPU.
---cpu: the maximum DOP is derived automatically from the system's CPU configuration.
---io: the maximum DOP is derived automatically from the system's I/O capacity.
---integer: a specific numeric limit, which can be set at the system or session level.
This is a dynamic parameter and can be changed at the session level.
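For instance, a session-level cap on the DOP can be set with an integer value (a minimal sketch; the value 8 is arbitrary):
SQL> alter session set parallel_degree_limit = 8;
-- CPU (the default) and IO are the other accepted values.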
parallel_force_local
As the name suggests, this parameter is mainly for RAC environments and controls whether parallel server processes may span nodes. It takes the values true or false, with a default of false, and is a dynamic parameter.
parallel_servers_target
This parameter defines the number of parallel server processes that can be in use before statement queuing begins.
Its default value = 4 x CPU_COUNT x PARALLEL_THREADS_PER_CPU x ACTIVE_INSTANCES
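As a worked example on the single-instance test VM used below (CPU_COUNT = 1, PARALLEL_THREADS_PER_CPU = 2), the default comes out to 4 x 1 x 2 x 1 = 8, which matches the "inst target is 8" lines in the traces later on. The inputs can be verified with:
SQL> show parameter cpu_count
SQL> show parameter parallel_threads_per_cpu
SQL> show parameter parallel_servers_target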
parallel_max_servers
The maximum number of parallel server processes; parallel_servers_target should not exceed parallel_max_servers.

SQL> select n.name, s.value
  2  from v$mystat s, v$statname n
  3  where s.statistic# = n.statistic#
  4  and n.name like 'Parallel%'
  5  /
NAME VALUE
---------------------------------------------------------------- ----------
Parallel operations not downgraded 0
Parallel operations downgraded to serial 0
Parallel operations downgraded 75 to 99 pct 0
Parallel operations downgraded 50 to 75 pct 0
Parallel operations downgraded 25 to 50 pct 0
Parallel operations downgraded 1 to 25 pct 0
6 rows selected.
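These statistics can be re-checked after the tests below to confirm whether any parallel statement was downgraded. A system-wide variant of the same query (a minimal sketch, assuming access to v$sysstat):
SQL> select name, value
  2  from v$sysstat
  3  where name like 'Parallel operations%downgraded%';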
SQL> alter session set "_px_trace"="none";
Session altered.
SQL> alter session set "_px_trace"=all;
Session altered.
SQL> SELECT /*+ parallel(4) */ count(*) from ht02;
COUNT(*)
----------
225824
SQL> alter session set "_px_trace"="none";
Session altered.
Let's look at the trace information:
*** 2011-09-03 00:51:23.921
kxfrDefaultDOP
DOP Trace -- compute default DOP
# CPU = 1
Threads/CPU = 2 ("parallel_threads_per_cpu")
default DOP = 2 (# CPU * Threads/CPU)
default DOP = 2 (DOP * # instance)
Default DOP = 2
kxfxqOnOrderQueue
KXFXQQUEUABLE() is TRUE.
pgadep: 0, pgatopsql: 1, pgapls 0.
SlaveSQL?: NO, Parallized?: YES, DOP: 4.
kxfxqOnOrderQueue
Admitting Parallel Statement (dop:4):
----- Current SQL Statement for this session (sql_id=432y0dmm1qdzk) -----
SELECT /*+ parallel(4) */ count(*) from ht02
kxfxqUpdateLoad
snapshot of RAC load before update: [ total queued PQ: 0, total running(
admitted) PQ: 0, total granted slaves in RAC: 0, total target in RAC: 0
]
kxfpiinfo
inst[cpus:mxslv]
1[1:20] --- indicates 1 CPU; the parallel_max_servers value is 20.
kxfpGetNumActiveSlaves
number of active slaves on the instance: 0
kxfpGetDefInstTarget
default inst target is 8, defDOP: 2, mxu: 2, cpus: 1
kxfpGetInstTarget
(default: 1) inst target is 8 -- this is the parallel_servers_target value
kxfpclinfo
inst(load:user:pct:fact:queued:started:granted:active)aff
1 (1:0:100:100:0:0:0)
kxfpGetDefInstTarget
default inst target is 8, defDOP: 2, mxu: 2, cpus: 1
kxfpGetInstTarget
(default: 1) inst target is 8
kxfpMarkRACLoadStat
RAC load statistics is marked as valid.
........
Sending parse to slave set 1:
User sqllen sent from QC = 45
SELECT /*+ parallel(4) */ count(*) from ht02
kxfxpf [ 1430/ 60]
MSG( -->, KXFXOparse, DIALOG_HINT, slv=0 )
kxfxpf [ 1440/ 10]
MSG( -->, KXFXOparse, DIALOG_HINT, slv=1 )
kxfxpf [ 1440/ 0]
MSG( -->, KXFXOparse, DIALOG_HINT, slv=2 )
kxfxpf [ 1440/ 0]
MSG( -->, KXFXOparse, DIALOG_HINT, slv=3 )
kxfxgs [ 1640/ 200]
MSG( <--, KXFXORokcurs, ss#=1 slv=3 ok=yes )
kxfxgs [ 1640/ 0]
MSG( <--, KXFXORokcurs, ss#=1 slv=2 ok=yes )
kxfxgs [ 1640/ 0]
MSG( <--, KXFXORokcurs, ss#=1 slv=1 ok=yes )
kxfxgs [ 1640/ 0]
MSG( <--, KXFXORokcurs, ss#=1 slv=0 ok=yes ) qerpxSendParse [ 1640/ 0]
kxfpgsg [ 1640/ 0]
 Freeing Memory: il=0x4835d0 iload=0x48357c ilist=(nil) slist=(nil)
 set1_pids=0x4835ec set2_pids=(nil)
kxfrAllocSlaves [ 1640/ 0]
 actual num slaves alloc'd = 4 (kxfpqcthr) ---- the number of slave processes actually allocated
kxfrialo [ 1640/ 0]
 Finish: allocated actual 4 slaves for non-GV query

..............
*** 2011-09-03 00:51:25.390
kxfpg1sg [ 1280/ 30]
 received reply from qref 0x24f811b0
kxfpg1sg [ 1280/ 0]
 got 4 servers (sync), errors=0x0 returning
GROUP GET [ 1280/ 0]
 Acquired 4 slaves on 1 instances avg height=4 in 1 set q serial:513
 P000 inst 1 spid 4291 -- the spids of the 4 allocated slave processes
 P001 inst 1 spid 4293
 P002 inst 1 spid 4295
 P003 inst 1 spid 4297
 Insts 1
 Svrs 4 --- 4 slave processes

In the current VM test I requested a parallel degree of 4; now let's raise the hint to 30 and see what the final DOP turns out to be.
SQL> alter session set parallel_degree_policy = auto;

Session altered.
SQL> alter session set "_px_trace"=all;
Session altered.
SQL> SELECT /*+ parallel(30) */ count(*) from ht02;
COUNT(*)
----------
225824
SQL> alter session set "_px_trace"="none";
Session altered.
SQL>
Let's look at the trace again.
kxfpg1sg [ 4470/ 0]
got 20 servers (sync), errors=0x0 returning
GROUP GET [ 4490/ 20]
Acquired 20 slaves on 1 instances avg height=20 in 1 set q serial:1025
P000 inst 1 spid 4354
P001 inst 1 spid 4356
P002 inst 1 spid 4358
P003 inst 1 spid 4360
P004 inst 1 spid 4362
P005 inst 1 spid 4364
P006 inst 1 spid 4366
P007 inst 1 spid 4368
P008 inst 1 spid 4370
P009 inst 1 spid 4372
P010 inst 1 spid 4374
P011 inst 1 spid 4376
P012 inst 1 spid 4378
P013 inst 1 spid 4380
P014 inst 1 spid 4382
P015 inst 1 spid 4384
P016 inst 1 spid 4386
P017 inst 1 spid 4388
P018 inst 1 spid 4390
P019 inst 1 spid 4392
Insts 1
Svrs 20
As you can see, only 20 slave processes were allocated. Why? Because the allocation is capped by the parallel_max_servers parameter.
SQL> show parameter parallel_max_servers
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
parallel_max_servers integer 100
SQL> conn roger/roger
SQL> alter session set parallel_degree_policy = auto;
Session altered.
SQL> alter session set "_px_trace"=all;
Session altered.
SQL> SELECT /*+ parallel(100) */ count(*) from ht02;
COUNT(*)
----------
225824
SQL> alter session set "_px_trace"="none";
Session altered.
The trace information now looks like this:
grep -i "inst 1 spid" roger_ora_4507.trc
roger_ora_4507.trc: Acquired 100 slaves on 1 instances avg height=100 in 1 set q serial:51
roger_ora_4507.trc: P000 inst 1 spid 4509
roger_ora_4507.trc: P001 inst 1 spid 4511
roger_ora_4507.trc: P002 inst 1 spid 4513
roger_ora_4507.trc: P003 inst 1 spid 4515
.......
roger_ora_4507.trc: P094 inst 1 spid 4698
roger_ora_4507.trc: P095 inst 1 spid 4700
roger_ora_4507.trc: P096 inst 1 spid 4702
roger_ora_4507.trc: P097 inst 1 spid 4704
roger_ora_4507.trc: P098 inst 1 spid 4706
roger_ora_4507.trc: P099 inst 1 spid 4708
Next, let's test auto DOP.
First, the formula used to compute the default DOP:
单实例: DOP = PARALLEL_THREADS_PER_CPU x CPU_COUNT
RAC: DOP = PARALLEL_THREADS_PER_CPU x CPU_COUNT x INSTANCE_COUNT
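On this test VM the trace below confirms the formula: # CPU = 1 and Threads/CPU = 2, so the default DOP is 2 x 1 = 2. A quick way to check the inputs (a minimal sketch):
SQL> select name, value from v$parameter
  2  where name in ('cpu_count', 'parallel_threads_per_cpu');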
SQL> alter session set parallel_degree_policy = auto;
Session altered.
SQL> alter session set "_px_trace"=all;
Session altered.
SQL> select count(*) from ht02;
Execution Plan
----------------------------------------------------------
Plan hash value: 583574080
-------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
-------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 446 (1)| 00:00:06 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| HT02 | 225K| 446 (1)| 00:00:06 |
-------------------------------------------------------------------
Note
-----
- automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold
The automatically computed DOP here is 1: the optimizer estimates this statement to finish in about 6 seconds, below the default parallel_min_time_threshold of 10 seconds, so Oracle decides it is not worth running in parallel.
SQL> set autot off
SQL> set timing on
SQL> select count(*) from ht02;
COUNT(*)
----------
225824
Elapsed: 00:00:00.08
SQL> SELECT /*+ parallel(2) */ count(*) from ht02;
COUNT(*)
----------
225824
Elapsed: 00:00:00.22
SQL> SELECT /*+ parallel(4) */ count(*) from ht02;
COUNT(*)
----------
225824
Elapsed: 00:00:00.74
SQL> SELECT /*+ parallel(6) */ count(*) from ht02;
COUNT(*)
----------
225824
Elapsed: 00:00:00.81
SQL> SELECT /*+ parallel(8) */ count(*) from ht02;
COUNT(*)
----------
225824
Elapsed: 00:00:00.92
SQL> SELECT /*+ parallel(10) */ count(*) from ht02;
COUNT(*)
----------
225824
Elapsed: 00:00:01.15
SQL> SELECT /*+ parallel(20) */ count(*) from ht02;
COUNT(*)
----------
225824
Elapsed: 00:00:03.24
SQL> SELECT /*+ parallel(40) */ count(*) from ht02;
COUNT(*)
----------
225824
Elapsed: 00:00:09.11
SQL> SELECT /*+ parallel(60) */ count(*) from ht02;
COUNT(*)
----------
225824
Elapsed: 00:00:23.75
SQL> SELECT /*+ parallel(80) */ count(*) from ht02;
COUNT(*)
----------
225824
Elapsed: 00:00:29.80
A chart of elapsed time versus requested DOP can be built from the data above; on this single-CPU VM the elapsed time clearly gets worse as the hinted DOP increases.
Note that the tests above were run with parallel_min_time_threshold at its default value; let's change that value now.
SQL> alter session set parallel_min_time_threshold=3;
Session altered.
Elapsed: 00:00:00.02
SQL> alter system flush shared_pool;
Repeating the timings now shows that, with the threshold lowered to 3, a DOP of 2 gives the best elapsed time. Let's check whether the execution plan agrees.
SQL> set autot traceonly
SQL> alter system flush shared_pool;
System altered.
Elapsed: 00:00:00.55
SQL> select count(*) from ht02;

Elapsed: 00:00:01.50

Execution Plan
----------------------------------------------------------
Plan hash value: 2508058984

--------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 248 (1)| 00:00:03 | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 225K| 248 (1)| 00:00:03 | Q1,00 | PCWC | |
| 6 | TABLE ACCESS FULL| HT02 | 225K| 248 (1)| 00:00:03 | Q1,00 | PCWP | |
--------------------------------------------------------------------------------------------------------

Note
-----
 - automatic DOP: Computed Degree of Parallelism is 2

Statistics
----------------------------------------------------------
 377 recursive calls
 12 db block gets
 1444 consistent gets
 1335 physical reads
 0 redo size
 424 bytes sent via SQL*Net to client
 415 bytes received via SQL*Net from client
 2 SQL*Net roundtrips to/from client
 6 sorts (memory)
 0 sorts (disk)
 1 rows processed
One more point: even when parallel_degree_policy is set to manual, we can still invoke this feature with the parallel hint, as shown below:
SQL> SELECT /*+ parallel(auto) */ count(*) from ht02;

Execution Plan
----------------------------------------------------------
Plan hash value: 583574080

-------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
-------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 446 (1)| 00:00:06 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| HT02 | 225K| 446 (1)| 00:00:06 |
-------------------------------------------------------------------

Note
-----
 - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold

Statistics
----------------------------------------------------------
 0 recursive calls
 0 db block gets
 1338 consistent gets
 1335 physical reads
 0 redo size
 424 bytes sent via SQL*Net to client
 415 bytes received via SQL*Net from client
 2 SQL*Net roundtrips to/from client
 0 sorts (memory)
 0 sorts (disk)
 1 rows processed

SQL> alter session set parallel_min_time_threshold=5;

Session altered.

SQL> SELECT /*+ parallel(auto) */ count(*) from ht02;

Execution Plan
----------------------------------------------------------
Plan hash value: 2508058984

--------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 248 (1)| 00:00:03 | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 225K| 248 (1)| 00:00:03 | Q1,00 | PCWC | |
| 6 | TABLE ACCESS FULL| HT02 | 225K| 248 (1)| 00:00:03 | Q1,00 | PCWP | |
--------------------------------------------------------------------------------------------------------

Note
-----
 - automatic DOP: Computed Degree of Parallelism is 2

Statistics
----------------------------------------------------------
 8 recursive calls
 4 db block gets
 1394 consistent gets
 1335 physical reads
 0 redo size
 424 bytes sent via SQL*Net to client
 415 bytes received via SQL*Net from client
 2 SQL*Net roundtrips to/from client
 0 sorts (memory)
 0 sorts (disk)
 1 rows processed
This feature is aimed at data warehouse environments. Auto DOP calculation actually existed as far back as 11gR1, but the algorithm there was not very sound and had some flaws.
Finally, let's look at how auto DOP is computed in 11gR2.
kxfrDefaultDOP
DOP Trace -- compute default DOP
# CPU = 1
Threads/CPU = 2 ("parallel_threads_per_cpu")
default DOP = 2 (# CPU * Threads/CPU)
default DOP = 2 (DOP * # instance)
Default DOP = 2
kxfrDefaultDOP
DOP Trace -- compute default DOP
# CPU = 1
Threads/CPU = 2 ("parallel_threads_per_cpu")
default DOP = 2 (# CPU * Threads/CPU)
default DOP = 2 (DOP * # instance)
Default DOP = 2 ----- the system default DOP
kxfxqOnOrderQueue
KXFXQQUEUABLE() is TRUE.
pgadep: 0, pgatopsql: 1, pgapls 0.
SlaveSQL?: NO, Parallized?: YES, DOP: 2.
kxfxqOnOrderQueue
Admitting Parallel Statement (dop:2):
----- Current SQL Statement for this session (sql_id=b4npv3kz33xb5) -----
select count(*) from ht02
kxfxqUpdateLoad --- update the load information
snapshot of RAC load before update: [ total queued PQ: 0, total running(
admitted) PQ: 0, total granted slaves in RAC: 0, total target in RAC: 8
]
kxfpuqpq --- update the queued PQ count
instance load stat of queued PQ updated.
number of queued PQ is incremented from 0 to 1.
load stat(queued PQ) on all RAC has been updated.
snapshot of RAC load stat [ total queued PQ: 1, total running(admitted)
PQ: 0, total granted slaves in RAC: 0, total target in RAC: 8 ]
kxfxqsoc ---- create a state object (presumably an identifier for the SQL statement, similar to a SQL_ID)
state object created [stmt id: 16777230, exetime: 1315063063, queued? tr
ue, starting? false, granted slaves: 0]
kxfxqInstInfo --- check instance information
inst[cpus:mxslv]
1[1:100] --- the 1 denotes one node (1 CPU); 100 is the parallel_max_servers value
kxfpGetNumActiveSlaves --- get the number of currently active slave processes
number of active slaves on the instance: 0
kxfpGetDefInstTarget --- get the instance's default parallel_servers_target value
default inst target is 8, defDOP: 2, mxu: 2, cpus: 1
kxfpGetInstTarget
(default: 1) inst target is 8
kxfxqLocalInstLoad --- get the load of the local instance
local inst(load:user:pct:fact:queued:admitted:started:granted:active)
1 (0:0:100:0:1:0:0:0:0)
kxfxqInstLoad --- get the instance load information (in RAC this would differ from the line above)
inst(load:user:pct:fact:queued:admitted:started:granted:active)
1(local) (0:0:100:0:1:0:0:0:0)
kxfxqInstList --- initialize the instance load information
load information of 1 instances (single inst) initialized
kxfxqGrantedDOP --- compute a reasonable DOP based on the system load
Computing granted DOP.
kxfxqGrantedDOP --- grant the DOP, i.e. allocate slave processes, based on the computation above
RequestedDOP=2 GrantedDOP=2 Target=8 Load=0 GrantedSlv=0 AdmittedPQ=0 De
faultDOP=0 users=0 sets=1 force_admit=false
kxfxqUpdateLoad --- update the instance load information again
snapshot of RAC load before update: [ total queued PQ: 1, total running(
admitted) PQ: 0, total granted slaves in RAC: 0, total target in RAC: 8
]
kxfpuqpq --- update the queued PQ count again
instance load stat of queued PQ updated.
number of queued PQ is decremented from 1 to 0.
load stat(queued PQ) on all RAC has been updated.
snapshot of RAC load stat [ total queued PQ: 0, total running(admitted)
PQ: 0, total granted slaves in RAC: 0, total target in RAC: 8 ]
kxfxqUpdateLoad ---- update the instance load information for the third time
snapshot of RAC load before update: [ total queued PQ: 0, total running(
admitted) PQ: 0, total granted slaves in RAC: 0, total target in RAC: 8
]
kxfpAdjustGrantedSlaves --- based on the adjustments above, decide whether the DOP (i.e. the number of slave processes) needs to be adjusted
gslv is not adjusted.sga total gslv: 0, glsv: 2, adjusted gslv 2.
kxfpurpq
instance load stat of admitted PQ updated.
number of admitted PQ is incremented from 0 to 1, total granted slaves
is incremented from 0 to 2.
load stat(running(admitted) PQ) on all RAC has been updated.
snapshot of RAC load stat [ total queued PQ: 0, total running(admitted)
PQ: 1, total granted slaves in RAC: 2, total target in RAC: 8 ]
kxfxqsou
state object updated [stmt id: 16777230, exe time: 1315063063, queued? f
alse, starting? true, granted slaves: 2, remove state obj? false
kxfxqOnOrderQueue
Statement bypasses the queue.
Starting parallelizer rwsid:2 pxid:1
qerpxStart [ 0/ 0]
Start:
Starting SQL statement dump
SQL Information
user_id=85 user_name=ROGER module=SQL*Plus action=
sql_id=b4npv3kz33xb5 plan_hash_value=-1786908312 problem_type=3
----- Current SQL Statement for this session (sql_id=b4npv3kz33xb5) -----
select count(*) from ht02
sql_text_length=26
sql=select count(*) from ht02
----- Explain Plan Dump -----
----- Plan Table -----

============
Plan Table
============
------------------------------------------+-----------------------------------+-------------------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time | TQ |IN-OUT|PQ Distrib |
------------------------------------------+-----------------------------------+-------------------------+
| 0 | SELECT STATEMENT | | | | 248 | | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000| 1 | | | |:Q1000| P->S |QC (RANDOM)|
| 4 | SORT AGGREGATE | | 1 | | | |:Q1000| PCWP | |
| 5 | PX BLOCK ITERATOR | | 221K | | 248 | 00:00:03 |:Q1000| PCWC | |
| 6 | TABLE ACCESS FULL | HT02 | 221K | | 248 | 00:00:03 |:Q1000| PCWP | |
------------------------------------------+-----------------------------------+-------------------------+
...............
GROUP GET [ 470/ 0]
Acquired 2 slaves on 1 instances avg height=2 in 1 set q serial:23041
P000 inst 1 spid 7835
P001 inst 1 spid 7837
Insts 1
Svrs 2
kxfpValidateSlaveGroup [ 470/ 0]
qcq:0x2292fc94 flg:0
qerpxSendParse [ 470/ 0]
qcq=0x2292fc94 pxid=1 mflg=0x0 #slaves=2
kxfxcp1 [ 470/ 0]
Sending parse to nprocs:2 slave_set:1
kxfxcPutSession [ 520/ 50]
................
kxfpqsrls [ 820/ 10]
Release Slave q=0x2292fc94 qr=0x2247e860 action=1 slave=1 inst=1 ---- here the slave processes are released
kxfpqsrls [ 820/ 0]
Release Slave q=0x2292fc94 qr=0x22481710 action=1 slave=0 inst=1
GROUP RELEASE [ 820/ 0]
all slaves released q serial 23041
kxfpqsod_qc_sod [ 820/ 0]
clean up of q=0x2292fc94 completed
kxfrfir [ 820/ 0]
cbk fired: 0x9d8b24
kxfxqsou [ 820/ 0]
state object removed [stmt id: 16777230, exe time: 1315063063] --- the state object created earlier is removed
kxfrfir [ 820/ 0]
cbk fired: 0x9d8b14
kxfxqRPQcbk [ 820/ 0]
Decr admitted parallel statement load
kxfxqUpdateLoad [ 820/ 0] -- update the load information
snapshot of RAC load before update: [ total queued PQ: 0, total running(
admitted) PQ: 1, total granted slaves in RAC: 2, total target in RAC: 8
]
kxfpAdjustGrantedSlaves [ 820/ 0]
gslv is not adjusted.sga total gslv: 2, glsv: 2, adjusted gslv 2.
kxfpurpq [ 820/ 0]
instance load stat of admitted PQ updated. --- update the admitted PQ count
number of admitted PQ is decremented from 1 to 0, total granted slaves
is decremented from 2 to 0.
load stat(running(admitted) PQ) on all RAC has been updated.
snapshot of RAC load stat [ total queued PQ: 0, total running(admitted)
PQ: 0, total granted slaves in RAC: 0, total target in RAC: 8 ]
From the experiments above we can clearly see how auto DOP operates in 11gR2. A brief summary:
1. An initial DOP is computed from the system default DOP.
2. The load information of the current instance is obtained (in a RAC environment, the load of every node is collected).
3. This load information is combined with the initial DOP to compute a candidate DOP.
4. The instance load information is updated several times, presumably also taking past load information into account. In the trace above it was computed twice, and I suspect these two copies of instance load information act somewhat like historical AWR snapshots for the instance.
5. A reasonable final DOP is then settled on and the execution plan is generated.
6. The SQL statement is executed (slave processes are allocated before execution; they can be observed from another session, as shown in the sketch below).
7. After execution completes, the slave processes are released.
8. The instance load information is updated again (I suspect this load information will be used as a reference by the next computation).
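A minimal sketch for watching this from another session while a parallel statement runs, using the v$px_session and v$px_process views:
SQL> select qcsid, server_set, degree, req_degree from v$px_session;
SQL> select server_name, status, pid, spid from v$px_process;
-- With parallel_degree_policy = auto, queued statements also show up
-- in v$sql_monitor with a status of QUEUED.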
These are just my personal observations; I hope they can serve as a starting point for further discussion!
hyp.items())) opt.hyp = hyp.copy() # for saving hyps to checkpoints # Save run settings if not evolve: yaml_save(save_dir / 'hyp.yaml', hyp) yaml_save(save_dir / 'opt.yaml', vars(opt)) # Loggers data_dict = None if RANK in {-1, 0}: loggers = Loggers(save_dir, weights, opt, hyp, LOGGER) # loggers instance # Register actions for k in methods(loggers): callbacks.register_action(k, callback=getattr(loggers, k)) # Process custom dataset artifact link data_dict = loggers.remote_dataset if resume: # If resuming runs from remote artifact weights, epochs, hyp, batch_size = opt.weights, opt.epochs, opt.hyp, opt.batch_size # Config plots = not evolve and not opt.noplots # create plots cuda = device.type != 'cpu' init_seeds(opt.seed + 1 + RANK, deterministic=True) with torch_distributed_zero_first(LOCAL_RANK): data_dict = data_dict or check_dataset(data) # check if None train_path, val_path = data_dict['train'], data_dict['val'] nc = 1 if single_cls else int(data_dict['nc']) # number of classes names = {0: 'item'} if single_cls and len(data_dict['names']) != 1 else data_dict['names'] # class names is_coco = isinstance(val_path, str) and val_path.endswith('coco/val2017.txt') # COCO dataset # Model check_suffix(weights, '.pt') # check weights pretrained = weights.endswith('.pt') if pretrained: with torch_distributed_zero_first(LOCAL_RANK): weights = attempt_download(weights) # download if not found locally ckpt = torch.load(weights, map_location='cpu', weights_only=False) # load checkpoint to CPU to avoid CUDA memory leak model = Model(cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create exclude = ['anchor'] if (cfg or hyp.get('anchors')) and not resume else [] # exclude keys csd = ckpt['model'].float().state_dict() # checkpoint state_dict as FP32 csd = intersect_dicts(csd, model.state_dict(), exclude=exclude) # intersect model.load_state_dict(csd, strict=False) # load LOGGER.info(f'Transferred {len(csd)}/{len(model.state_dict())} items from {weights}') # report else: model = Model(cfg, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create amp = check_amp(model) # check AMP # Freeze freeze = [f'model.{x}.' 
for x in (freeze if len(freeze) > 1 else range(freeze[0]))] # layers to freeze for k, v in model.named_parameters(): v.requires_grad = True # train all layers # v.register_hook(lambda x: torch.nan_to_num(x)) # NaN to 0 (commented for erratic training results) if any(x in k for x in freeze): LOGGER.info(f'freezing {k}') v.requires_grad = False # Image size gs = max(int(model.stride.max()), 32) # grid size (max stride) imgsz = check_img_size(opt.imgsz, gs, floor=gs * 2) # verify imgsz is gs-multiple # Batch size if RANK == -1 and batch_size == -1: # single-GPU only, estimate best batch size batch_size = check_train_batch_size(model, imgsz, amp) loggers.on_params_update({"batch_size": batch_size}) # Optimizer nbs = 64 # nominal batch size accumulate = max(round(nbs / batch_size), 1) # accumulate loss before optimizing hyp['weight_decay'] *= batch_size * accumulate / nbs # scale weight_decay optimizer = smart_optimizer(model, opt.optimizer, hyp['lr0'], hyp['momentum'], hyp['weight_decay']) # Scheduler if opt.cos_lr: lf = one_cycle(1, hyp['lrf'], epochs) # cosine 1->hyp['lrf'] else: lf = lambda x: (1 - x / epochs) * (1.0 - hyp['lrf']) + hyp['lrf'] # linear scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf) # plot_lr_scheduler(optimizer, scheduler, epochs) # EMA ema = ModelEMA(model) if RANK in {-1, 0} else None # Resume best_fitness, start_epoch = 0.0, 0 if pretrained: if resume: best_fitness, start_epoch, epochs = smart_resume(ckpt, optimizer, ema, weights, epochs, resume) del ckpt, csd # DP mode if cuda and RANK == -1 and torch.cuda.device_count() > 1: LOGGER.warning('WARNING ⚠️ DP not recommended, use torch.distributed.run for best DDP Multi-GPU results.\n' 'See Multi-GPU Tutorial at https://github.com/ultralytics/yolov5/issues/475 to get started.') model = torch.nn.DataParallel(model) # SyncBatchNorm if opt.sync_bn and cuda and RANK != -1: model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device) LOGGER.info('Using SyncBatchNorm()') # Trainloader train_loader, dataset = create_dataloader(train_path, imgsz, batch_size // WORLD_SIZE, gs, single_cls, hyp=hyp, augment=True, cache=None if opt.cache == 'val' else opt.cache, rect=opt.rect, rank=LOCAL_RANK, workers=workers, image_weights=opt.image_weights, quad=opt.quad, prefix=colorstr('train: '), shuffle=True) labels = np.concatenate(dataset.labels, 0) mlc = int(labels[:, 0].max()) # max label class assert mlc < nc, f'Label class {mlc} exceeds nc={nc} in {data}. 
Possible class labels are 0-{nc - 1}' # Process 0 if RANK in {-1, 0}: val_loader = create_dataloader(val_path, imgsz, batch_size // WORLD_SIZE * 2, gs, single_cls, hyp=hyp, cache=None if noval else opt.cache, rect=True, rank=-1, workers=workers * 2, pad=0.5, prefix=colorstr('val: '))[0] if not resume: if not opt.noautoanchor: check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz) # run AutoAnchor model.half().float() # pre-reduce anchor precision callbacks.run('on_pretrain_routine_end', labels, names) # DDP mode if cuda and RANK != -1: model = smart_DDP(model) # Model attributes nl = de_parallel(model).model[-1].nl # number of detection layers (to scale hyps) hyp['box'] *= 3 / nl # scale to layers hyp['cls'] *= nc / 80 * 3 / nl # scale to classes and layers hyp['obj'] *= (imgsz / 640) ** 2 * 3 / nl # scale to image size and layers hyp['label_smoothing'] = opt.label_smoothing model.nc = nc # attach number of classes to model model.hyp = hyp # attach hyperparameters to model model.class_weights = labels_to_class_weights(dataset.labels, nc).to(device) * nc # attach class weights model.names = names # Start training t0 = time.time() nb = len(train_loader) # number of batches nw = max(round(hyp['warmup_epochs'] * nb), 100) # number of warmup iterations, max(3 epochs, 100 iterations) # nw = min(nw, (epochs - start_epoch) / 2 * nb) # limit warmup to < 1/2 of training last_opt_step = -1 maps = np.zeros(nc) # mAP per class results = (0, 0, 0, 0, 0, 0, 0) # P, R, mAP@.5, mAP@.5-.95, val_loss(box, obj, cls) scheduler.last_epoch = start_epoch - 1 # do not move scaler = torch.cuda.amp.GradScaler(enabled=amp) stopper, stop = EarlyStopping(patience=opt.patience), False compute_loss = ComputeLoss(model) # init loss class callbacks.run('on_train_start') LOGGER.info(f'Image sizes {imgsz} train, {imgsz} val\n' f'Using {train_loader.num_workers * WORLD_SIZE} dataloader workers\n' f"Logging results to {colorstr('bold', save_dir)}\n" f'Starting training for {epochs} epochs...') for epoch in range(start_epoch, epochs): # epoch ------------------------------------------------------------------ callbacks.run('on_train_epoch_start') model.train() # Update image weights (optional, single-GPU only) if opt.image_weights: cw = model.class_weights.cpu().numpy() * (1 - maps) ** 2 / nc # class weights iw = labels_to_image_weights(dataset.labels, nc=nc, class_weights=cw) # image weights dataset.indices = random.choices(range(dataset.n), weights=iw, k=dataset.n) # rand weighted idx # Update mosaic border (optional) # b = int(random.uniform(0.25 * imgsz, 0.75 * imgsz + gs) // gs * gs) # dataset.mosaic_border = [b - imgsz, -b] # height, width borders mloss = torch.zeros(3, device=device) # mean losses if RANK != -1: train_loader.sampler.set_epoch(epoch) pbar = enumerate(train_loader) LOGGER.info(('\n' + '%11s' * 7) % ('Epoch', 'GPU_mem', 'box_loss', 'obj_loss', 'cls_loss', 'Instances', 'Size')) if RANK in {-1, 0}: pbar = tqdm(pbar, total=nb, bar_format=TQDM_BAR_FORMAT) # progress bar optimizer.zero_grad() for i, (imgs, targets, paths, _) in pbar: # batch ------------------------------------------------------------- callbacks.run('on_train_batch_start') ni = i + nb * epoch # number integrated batches (since train start) imgs = imgs.to(device, non_blocking=True).float() / 255 # uint8 to float32, 0-255 to 0.0-1.0 # Warmup if ni <= nw: xi = [0, nw] # x interp # compute_loss.gr = np.interp(ni, xi, [0.0, 1.0]) # iou loss ratio (obj_loss = 1.0 or iou) accumulate = max(1, np.interp(ni, xi, [1, nbs / 
batch_size]).round()) for j, x in enumerate(optimizer.param_groups): # bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0 x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 0 else 0.0, x['initial_lr'] * lf(epoch)]) if 'momentum' in x: x['momentum'] = np.interp(ni, xi, [hyp['warmup_momentum'], hyp['momentum']]) # Multi-scale if opt.multi_scale: sz = random.randrange(imgsz * 0.5, imgsz * 1.5 + gs) // gs * gs # size sf = sz / max(imgs.shape[2:]) # scale factor if sf != 1: ns = [math.ceil(x * sf / gs) * gs for x in imgs.shape[2:]] # new shape (stretched to gs-multiple) imgs = nn.functional.interpolate(imgs, size=ns, mode='bilinear', align_corners=False) # Forward # with torch.cuda.amp.autocast(amp): with torch.amp.autocast(device_type='cuda'): pred = model(imgs) # forward loss, loss_items = compute_loss(pred, targets.to(device)) # loss scaled by batch_size if RANK != -1: loss *= WORLD_SIZE # gradient averaged between devices in DDP mode if opt.quad: loss *= 4. # Backward scaler.scale(loss).backward() # Optimize - https://pytorch.org/docs/master/notes/amp_examples.html if ni - last_opt_step >= accumulate: scaler.unscale_(optimizer) # unscale gradients torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0) # clip gradients scaler.step(optimizer) # optimizer.step scaler.update() optimizer.zero_grad() if ema: ema.update(model) last_opt_step = ni # Log if RANK in {-1, 0}: mloss = (mloss * i + loss_items) / (i + 1) # update mean losses mem = f'{torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0:.3g}G' # (GB) pbar.set_description(('%11s' * 2 + '%11.4g' * 5) % (f'{epoch}/{epochs - 1}', mem, *mloss, targets.shape[0], imgs.shape[-1])) callbacks.run('on_train_batch_end', model, ni, imgs, targets, paths, list(mloss)) if callbacks.stop_training: return # end batch ------------------------------------------------------------------------------------------------ # Scheduler lr = [x['lr'] for x in optimizer.param_groups] # for loggers scheduler.step() if RANK in {-1, 0}: # mAP callbacks.run('on_train_epoch_end', epoch=epoch) ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'names', 'stride', 'class_weights']) final_epoch = (epoch + 1 == epochs) or stopper.possible_stop if not noval or final_epoch: # Calculate mAP results, maps, _ = validate.run(data_dict, batch_size=batch_size // WORLD_SIZE * 2, imgsz=imgsz, half=amp, model=ema.ema, single_cls=single_cls, dataloader=val_loader, save_dir=save_dir, plots=False, callbacks=callbacks, compute_loss=compute_loss) # Update best mAP fi = fitness(np.array(results).reshape(1, -1)) # weighted combination of [P, R, mAP@.5, mAP@.5-.95] stop = stopper(epoch=epoch, fitness=fi) # early stop check if fi > best_fitness: best_fitness = fi log_vals = list(mloss) + list(results) + lr callbacks.run('on_fit_epoch_end', log_vals, epoch, best_fitness, fi) # Save model if (not nosave) or (final_epoch and not evolve): # if save ckpt = { 'epoch': epoch, 'best_fitness': best_fitness, 'model': deepcopy(de_parallel(model)).half(), 'ema': deepcopy(ema.ema).half(), 'updates': ema.updates, 'optimizer': optimizer.state_dict(), 'opt': vars(opt), 'git': GIT_INFO, # {remote, branch, commit} if a git repo 'date': datetime.now().isoformat()} # Save last, best and delete torch.save(ckpt, last) if best_fitness == fi: torch.save(ckpt, best) if opt.save_period > 0 and epoch % opt.save_period == 0: torch.save(ckpt, w / f'epoch{epoch}.pt') del ckpt callbacks.run('on_model_save', last, epoch, final_epoch, best_fitness, fi) # EarlyStopping if RANK != 
-1: # if DDP training broadcast_list = [stop if RANK == 0 else None] dist.broadcast_object_list(broadcast_list, 0) # broadcast 'stop' to all ranks if RANK != 0: stop = broadcast_list[0] if stop: break # must break all DDP ranks # end epoch ---------------------------------------------------------------------------------------------------- # end training ----------------------------------------------------------------------------------------------------- if RANK in {-1, 0}: LOGGER.info(f'\n{epoch - start_epoch + 1} epochs completed in {(time.time() - t0) / 3600:.3f} hours.') for f in last, best: if f.exists(): strip_optimizer(f) # strip optimizers if f is best: LOGGER.info(f'\nValidating {f}...') results, _, _ = validate.run( data_dict, batch_size=batch_size // WORLD_SIZE * 2, imgsz=imgsz, model=attempt_load(f, device).half(), iou_thres=0.65 if is_coco else 0.60, # best pycocotools at iou 0.65 single_cls=single_cls, dataloader=val_loader, save_dir=save_dir, save_json=is_coco, verbose=True, plots=plots, callbacks=callbacks, compute_loss=compute_loss) # val best model with plots if is_coco: callbacks.run('on_fit_epoch_end', list(mloss) + list(results) + lr, epoch, best_fitness, fi) callbacks.run('on_train_end', last, best, epoch, results) torch.cuda.empty_cache() return results def parse_opt(known=False): parser = argparse.ArgumentParser() parser.add_argument('--weights', type=str, default='./weights/yolov5s.pt', help='initial weights path') parser.add_argument('--cfg', type=str, default='./models/yolov5s.yaml', help='model.yaml path') parser.add_argument('--data', type=str, default=r'C:data/AAAA.yaml', help='data.yaml path') parser.add_argument('--hyp', type=str, default=ROOT / 'data/hyps/hyp.scratch-low.yaml', help='hyperparameters path') parser.add_argument('--epochs', type=int, default=100, help='total training epochs') parser.add_argument('--batch-size', type=int, default=1, help='total batch size for all GPUs, -1 for autobatch') parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='train, val image size (pixels)') parser.add_argument('--rect', action='store_true', help='rectangular training') parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training') parser.add_argument('--nosave', action='store_true', help='only save final checkpoint') parser.add_argument('--noval', action='store_true', help='only validate final epoch') parser.add_argument('--noautoanchor', action='store_true', help='disable AutoAnchor') parser.add_argument('--noplots', action='store_true', help='save no plot files') parser.add_argument('--evolve', type=int, nargs='?', const=300, help='evolve hyperparameters for x generations') parser.add_argument('--bucket', type=str, default='', help='gsutil bucket') parser.add_argument('--cache', type=str, nargs='?', const='ram', help='image --cache ram/disk') parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training') parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%') parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class') parser.add_argument('--optimizer', type=str, choices=['SGD', 'Adam', 'AdamW'], default='SGD', help='optimizer') parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode') parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)') parser.add_argument('--project', default=ROOT / 'runs/train', help='save to project/name') parser.add_argument('--name', default='welding_defect_yolov5s_20241101_300', help='save to project/name') parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') parser.add_argument('--quad', action='store_true', help='quad dataloader') parser.add_argument('--cos-lr', action='store_true', help='cosine LR scheduler') parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon') parser.add_argument('--patience', type=int, default=100, help='EarlyStopping patience (epochs without improvement)') parser.add_argument('--freeze', nargs='+', type=int, default=[0], help='Freeze layers: backbone=10, first3=0 1 2') parser.add_argument('--save-period', type=int, default=5, help='Save checkpoint every x epochs (disabled if < 1)') parser.add_argument('--seed', type=int, default=0, help='Global training seed') parser.add_argument('--local_rank', type=int, default=-1, help='Automatic DDP Multi-GPU argument, do not modify') # Logger arguments parser.add_argument('--entity', default=None, help='Entity') parser.add_argument('--upload_dataset', nargs='?', const=True, default=False, help='Upload data, "val" option') parser.add_argument('--bbox_interval', type=int, default=-1, help='Set bounding-box image logging interval') parser.add_argument('--artifact_alias', type=str, default='latest', help='Version of dataset artifact to use') return parser.parse_known_args()[0] if known else parser.parse_args() def main(opt, callbacks=Callbacks()): # Checks if RANK in {-1, 0}: print_args(vars(opt)) check_git_status() check_requirements() # Resume (from specified or most recent last.pt) if opt.resume and not check_comet_resume(opt) and not opt.evolve: last = Path(check_file(opt.resume) if isinstance(opt.resume, str) else get_latest_run()) opt_yaml = last.parent.parent / 'opt.yaml' # train options yaml opt_data = opt.data # original dataset if opt_yaml.is_file(): with open(opt_yaml, errors='ignore') as f: d = yaml.safe_load(f) else: d = torch.load(last, map_location='cpu')['opt'] opt = argparse.Namespace(**d) # replace opt.cfg, opt.weights, opt.resume = '', str(last), True # reinstate if is_url(opt_data): opt.data = check_file(opt_data) # avoid HUB resume auth timeout else: opt.data, opt.cfg, opt.hyp, opt.weights, opt.project = \ check_file(opt.data), check_yaml(opt.cfg), check_yaml(opt.hyp), str(opt.weights), str(opt.project) # checks assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified' if opt.evolve: if opt.project == str(ROOT / 'runs/train'): # if default project name, rename to runs/evolve opt.project = str(ROOT / 'runs/evolve') opt.exist_ok, opt.resume = opt.resume, False # pass resume to exist_ok and disable resume if opt.name == 'cfg': opt.name = Path(opt.cfg).stem # use model.yaml as name opt.save_dir = str(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok)) 
# DDP mode device = select_device(opt.device, batch_size=opt.batch_size) if LOCAL_RANK != -1: msg = 'is not compatible with YOLOv5 Multi-GPU DDP training' assert not opt.image_weights, f'--image-weights {msg}' assert not opt.evolve, f'--evolve {msg}' assert opt.batch_size != -1, f'AutoBatch with --batch-size -1 {msg}, please pass a valid --batch-size' assert opt.batch_size % WORLD_SIZE == 0, f'--batch-size {opt.batch_size} must be multiple of WORLD_SIZE' assert torch.cuda.device_count() > LOCAL_RANK, 'insufficient CUDA devices for DDP command' torch.cuda.set_device(LOCAL_RANK) device = torch.device('cuda', LOCAL_RANK) dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo") # Train if not opt.evolve: train(opt.hyp, opt, device, callbacks) # Evolve hyperparameters (optional) else: # Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit) meta = { 'lr0': (1, 1e-5, 1e-1), # initial learning rate (SGD=1E-2, Adam=1E-3) 'lrf': (1, 0.01, 1.0), # final OneCycleLR learning rate (lr0 * lrf) 'momentum': (0.3, 0.6, 0.98), # SGD momentum/Adam beta1 'weight_decay': (1, 0.0, 0.001), # optimizer weight decay 'warmup_epochs': (1, 0.0, 5.0), # warmup epochs (fractions ok) 'warmup_momentum': (1, 0.0, 0.95), # warmup initial momentum 'warmup_bias_lr': (1, 0.0, 0.2), # warmup initial bias lr 'box': (1, 0.02, 0.2), # box loss gain 'cls': (1, 0.2, 4.0), # cls loss gain 'cls_pw': (1, 0.5, 2.0), # cls BCELoss positive_weight 'obj': (1, 0.2, 4.0), # obj loss gain (scale with pixels) 'obj_pw': (1, 0.5, 2.0), # obj BCELoss positive_weight 'iou_t': (0, 0.1, 0.7), # IoU training threshold 'anchor_t': (1, 2.0, 8.0), # anchor-multiple threshold 'anchors': (2, 2.0, 10.0), # anchors per output grid (0 to ignore) 'fl_gamma': (0, 0.0, 2.0), # focal loss gamma (efficientDet default gamma=1.5) 'hsv_h': (1, 0.0, 0.1), # image HSV-Hue augmentation (fraction) 'hsv_s': (1, 0.0, 0.9), # image HSV-Saturation augmentation (fraction) 'hsv_v': (1, 0.0, 0.9), # image HSV-Value augmentation (fraction) 'degrees': (1, 0.0, 45.0), # image rotation (+/- deg) 'translate': (1, 0.0, 0.9), # image translation (+/- fraction) 'scale': (1, 0.0, 0.9), # image scale (+/- gain) 'shear': (1, 0.0, 10.0), # image shear (+/- deg) 'perspective': (0, 0.0, 0.001), # image perspective (+/- fraction), range 0-0.001 'flipud': (1, 0.0, 1.0), # image flip up-down (probability) 'fliplr': (0, 0.0, 1.0), # image flip left-right (probability) 'mosaic': (1, 0.0, 1.0), # image mixup (probability) 'mixup': (1, 0.0, 1.0), # image mixup (probability) 'copy_paste': (1, 0.0, 1.0)} # segment copy-paste (probability) with open(opt.hyp, errors='ignore') as f: hyp = yaml.safe_load(f) # load hyps dict if 'anchors' not in hyp: # anchors commented in hyp.yaml hyp['anchors'] = 3 if opt.noautoanchor: del hyp['anchors'], meta['anchors'] opt.noval, opt.nosave, save_dir = True, True, Path(opt.save_dir) # only val/save final epoch # ei = [isinstance(x, (int, float)) for x in hyp.values()] # evolvable indices evolve_yaml, evolve_csv = save_dir / 'hyp_evolve.yaml', save_dir / 'evolve.csv' if opt.bucket: os.system(f'gsutil cp gs://{opt.bucket}/evolve.csv {evolve_csv}') # download evolve.csv if exists for _ in range(opt.evolve): # generations to evolve if evolve_csv.exists(): # if evolve.csv exists: select best hyps and mutate # Select parent(s) parent = 'single' # parent selection method: 'single' or 'weighted' x = np.loadtxt(evolve_csv, ndmin=2, delimiter=',', skiprows=1) n = min(5, len(x)) # number of previous results to consider x = 
x[np.argsort(-fitness(x))][:n] # top n mutations w = fitness(x) - fitness(x).min() + 1E-6 # weights (sum > 0) if parent == 'single' or len(x) == 1: # x = x[random.randint(0, n - 1)] # random selection x = x[random.choices(range(n), weights=w)[0]] # weighted selection elif parent == 'weighted': x = (x * w.reshape(n, 1)).sum(0) / w.sum() # weighted combination # Mutate mp, s = 0.8, 0.2 # mutation probability, sigma npr = np.random npr.seed(int(time.time())) g = np.array([meta[k][0] for k in hyp.keys()]) # gains 0-1 ng = len(meta) v = np.ones(ng) while all(v == 1): # mutate until a change occurs (prevent duplicates) v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0) for i, k in enumerate(hyp.keys()): # plt.hist(v.ravel(), 300) hyp[k] = float(x[i + 7] * v[i]) # mutate # Constrain to limits for k, v in meta.items(): hyp[k] = max(hyp[k], v[1]) # lower limit hyp[k] = min(hyp[k], v[2]) # upper limit hyp[k] = round(hyp[k], 5) # significant digits # Train mutation results = train(hyp.copy(), opt, device, callbacks) callbacks = Callbacks() # Write mutation results keys = ('metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95', 'val/box_loss', 'val/obj_loss', 'val/cls_loss') print_mutation(keys, results, hyp.copy(), save_dir, opt.bucket) # Plot results plot_evolve(evolve_csv) LOGGER.info(f'Hyperparameter evolution finished {opt.evolve} generations\n' f"Results saved to {colorstr('bold', save_dir)}\n" f'Usage example: $ python train.py --hyp {evolve_yaml}') def run(**kwargs): # Usage: import train; train.run(data='coco128.yaml', imgsz=320, weights='yolov5m.pt') opt = parse_opt(True) for k, v in kwargs.items(): setattr(opt, k, v) main(opt) return opt if __name__ == "__main__": opt = parse_opt() main(opt) 为什么训练之后,他的runs里面并没有显示best.pt跟last.pt 请查找原因
最新发布
06-28