Here is an AWR report covering a one-hour snapshot interval. Note that DB Time is as high as 322 minutes:
| | Snap Id | Snap Time | Sessions | Cursors/Session |
|---|---|---|---|---|
| Begin Snap: | 11522 | 12-Dec-10 20:00:55 | 136 | 7.7 |
| End Snap: | 11523 | 12-Dec-10 21:00:12 | 141 | 9.4 |
| Elapsed: | 59.29 (mins) | | | |
| DB Time: | 322.74 (mins) | | | |
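DB Time of 322.74 minutes against an elapsed time of 59.29 minutes works out to roughly 322.74 / 59.29 ≈ 5.4 average active sessions, i.e. throughout the hour more than five sessions were, on average, either on CPU or waiting.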
Report Summary
Cache Sizes
| | Begin | End | | |
|---|---|---|---|---|
| Buffer Cache: | 3,136M | 3,136M | Std Block Size: | 8K |
| Shared Pool Size: | 2,960M | 2,960M | Log Buffer: | 14,376K |
Load Profile
| | Per Second | Per Transaction |
|---|---|---|
| Redo size: | 51,870.17 | 836.59 |
| Logical reads: | 64,573.55 | 1,041.48 |
| Block changes: | 528.77 | 8.53 |
| Physical reads: | 10,772.43 | 173.74 |
| Physical writes: | 141.54 | 2.28 |
| User calls: | 424.24 | 6.84 |
| Parses: | 68.53 | 1.11 |
| Hard parses: | 58.53 | 0.94 |
| Sorts: | 121.67 | 1.96 |
| Logons: | 0.64 | 0.01 |
| Executes: | 154.82 | 2.50 |
| Transactions: | 62.00 | |
Physical I/O runs at more than 10,000 reads per second, which over one hour adds up to roughly 60*60*10772 = 38,779,200 physical I/Os, and the Top 5 timed events are essentially all I/O-related.
Top 5 Timed Events
| Event | Waits | Time(s) | Avg Wait(ms) | % Total Call Time | Wait Class |
|---|---|---|---|---|---|
| CPU time | | 6,994 | | 36.1 | |
| db file sequential read | 6,663,574 | 4,787 | 1 | 24.7 | User I/O |
| read by other session | 1,282,214 | 2,938 | 2 | 15.2 | User I/O |
| db file scattered read | 3,065,892 | 2,441 | 1 | 12.6 | User I/O |
| PX Deq Credit: send blkd | 390,345 | 1,889 | 5 | 9.8 | Other |
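The same I/O-dominated picture can be cross-checked outside of AWR. A minimal sketch, assuming access to v$system_event (its counters are cumulative since instance startup, so two samples an hour apart would be needed to reproduce the per-interval figures above):

-- system-wide User I/O waits since instance startup (time_waited is in centiseconds)
select event, total_waits, round(time_waited / 100) time_waited_s, wait_class
  from v$system_event
 where wait_class = 'User I/O'
 order by time_waited desc;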
Segments by Logical Reads
- Total Logical Reads: 229,704,447
- Captured Segments account for 76.5% of Total
| Owner | Tablespace Name | Object Name | Subobject Name | Obj. Type | Logical Reads | %Total |
|---|---|---|---|---|---|---|
| REFDATAGL | DATAGL | THINGT | | TABLE | 86,291,168 | 37.57 |
| REFDATAGL | INDEXESGL | THINGT_PK | | INDEX | 36,475,968 | 15.88 |
| REFDATAGL | DATAGL | MDM_CONTEXTT | | TABLE | 17,210,880 | 7.49 |
| REFDATAGL | INDEXESGL | MDM_CONTEXTT_IDX | | INDEX | 13,952,880 | 6.07 |
| KALIDO_REPORT | DATAGL | OSGH_PROD_ALL_LVL_HI_HADS | | TABLE | 2,585,424 | 1.13 |
Segments by Physical Reads
- Total Physical Reads: 38,320,260
- Captured Segments account for 90.9% of Total
| Owner | Tablespace Name | Object Name | Subobject Name | Obj. Type | Physical Reads | %Total |
|---|---|---|---|---|---|---|
| REFDATAGL | DATAGL | THINGT | | TABLE | 28,245,360 | 73.71 |
| REFDATA | DATA | THINGT | | TABLE | 1,129,528 | 2.95 |
| WHSUSRGL | SHOP_AREAGL | MV_OSGH_GTIN | | TABLE | 894,780 | 2.34 |
| REFDATAGL | DATAGL | MDM_CONTEXTT | | TABLE | 639,977 | 1.67 |
| KALIDO_REPORT | DATAGL | OSGH_PROD_ALL_LVL_HI_HADS | | TABLE | 626,106 | 1.63 |
Within this one hour, the table REFDATAGL.THINGT alone accounted for 28,245,360 physical reads and 86,291,168 logical reads.
Ideally, all of the tables above would be pinned in the KEEP pool.
Unfortunately, the SGA is only 6G and the Buffer Cache only 3,136M:
SQL> show parameter sga
NAME TYPE VALUE
------------------------------------ --------------------------------- ------
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 6G
sga_target big integer 6G
SQL> select name ,bytes/1024/1024 Mb from v$sgainfo where name ='Buffer Cache Size';
NAME MB
-------------------- ----------
Buffer Cache Size 3136
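At this point it is also worth confirming whether a KEEP pool exists at all; a minimal sketch of the check (v$buffer_pool only lists pools that have actually been allocated):
SQL> show parameter db_keep_cache_size
SQL> select name, block_size, current_size from v$buffer_pool;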
SQL> select owner,segment_name,bytes/1024/1024/1024 "Size(G)" from dba_segments where segment_name='THINGT';
OWNER SEGMENT_NAME Size(G)
-------------------- -------------------- ----------
REFDATA THINGT .434570313
REFDATAGL THINGT 3.88061523
REFDATA2 THINGT .164428711
REFDATA4 THINGT .006347656
SQL> select owner,segment_name,bytes/1024/1024/1024 "Size(G)" from dba_segments where segment_name='MV_OSGH_GTIN';
OWNER SEGMENT_NAME Size(G)
-------------------- -------------------- ----------
WHSUSRGL MV_OSGH_GTIN .463500977
SQL> select owner,segment_name,bytes/1024/1024/1024 "Size(G)" from dba_segments where segment_name='MDM_CONTEXTT';
OWNER SEGMENT_NAME Size(G)
-------------------- -------------------- ----------
REFDATA MDM_CONTEXTT .106933594
REFDATAGL MDM_CONTEXTT 2.1574707
REFDATA2 MDM_CONTEXTT .066162109
REFDATA4 MDM_CONTEXTT .000244141
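Summing the hot segments in a single query makes the gap clear; a minimal sketch, using the segment names reported in the Segments by Logical/Physical Reads sections above:
select sum(bytes)/1024/1024/1024 "Total(G)"
  from dba_segments
 where (owner, segment_name) in (
        ('REFDATAGL', 'THINGT'),
        ('REFDATAGL', 'THINGT_PK'),
        ('REFDATAGL', 'MDM_CONTEXTT'),
        ('REFDATAGL', 'MDM_CONTEXTT_IDX'),
        ('WHSUSRGL',  'MV_OSGH_GTIN'));
THINGT (3.88G) plus MDM_CONTEXTT (2.16G) alone already exceed the 3,136M Buffer Cache, before the indexes and the materialized view are even counted.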
It seems that, short of increasing the SGA, there is no way to KEEP these tables.
Sometimes tuning really cannot be done without a hardware upgrade. The biggest bottleneck in this system is without question I/O, and that I/O is concentrated on a handful of hot tables (the core tables of the system),
so tables like these should be kept in the KEEP pool, even though the largest of them is 3.8G.
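For reference, once the SGA has been enlarged, sizing a KEEP pool and moving segments into it is a simple storage-attribute change; a minimal sketch (the 8G figure is purely illustrative, an sga_max_size increase requires an instance restart, and in 10g the KEEP pool is not auto-tuned by ASMM, so db_keep_cache_size must be set explicitly):
-- size the KEEP pool (illustrative value; it must fit inside the enlarged SGA)
alter system set db_keep_cache_size = 8G scope=spfile;
-- assign the hot segments named in the AWR report to the KEEP pool
alter table REFDATAGL.THINGT storage (buffer_pool keep);
alter index REFDATAGL.THINGT_PK storage (buffer_pool keep);
alter table REFDATAGL.MDM_CONTEXTT storage (buffer_pool keep);
alter index REFDATAGL.MDM_CONTEXTT_IDX storage (buffer_pool keep);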