Database Optimization for Millions to Tens of Millions of Rows (Part 2)

1. One project I worked on involved a data exchange with the People's Bank of China. They used batched plain-text files, exported straight from the database with a C program,
e.g. 201003-1.txt, 201003-2.txt, ...
Each file was kept to around 10 MB and transferred over FTP.
Simple and reliable.
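The batching scheme above is easy to reproduce. Below is a minimal sketch, assuming newline-delimited text records; the `split_export` helper and its parameters are illustrative, not the original exchange program (which was written in C):

```python
import os
import tempfile

CHUNK_LIMIT = 10 * 1024 * 1024  # keep each file at roughly 10 MB, as above

def split_export(records, prefix, out_dir, limit=CHUNK_LIMIT):
    """Write text records into numbered files prefix-1.txt, prefix-2.txt, ...
    starting a new file whenever the current one would exceed `limit` bytes."""
    paths, part, size, fh = [], 0, 0, None
    for rec in records:
        data = (rec + "\n").encode("utf-8")
        if fh is None or size + len(data) > limit:
            if fh:
                fh.close()
            part += 1
            path = os.path.join(out_dir, "%s-%d.txt" % (prefix, part))
            fh = open(path, "wb")
            paths.append(path)
            size = 0
        fh.write(data)
        size += len(data)
    if fh:
        fh.close()
    return paths

# Demo with a tiny limit so the split is visible (11-byte lines, 25-byte cap):
demo_paths = split_export(["x" * 10] * 5, "201003", tempfile.mkdtemp(), limit=25)
```

Each resulting file can then be pushed over FTP and reloaded batch by batch on the receiving side.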

2. (1)

The statement before optimization:
# Query_time: 5.967435 Lock_time: 0.000129 Rows_sent: 1 Rows_examined: 803401
SET timestamp=1286843575;
select livemessag0_.id as id38_, livemessag0_.isactive as isactive38_, livemessag0_.content as content38_, livemessag0_.createtime as createtime38_, livemessag0_.userid as userid38_, livemessag0_.objectid as objectid38_, livemessag0_.recordid as recordid38_, livemessag0_.type as type38_ from live_message livemessag0_ where (livemessag0_.objectid in (select livescrip1_.id from live_scrip livescrip1_ where livescrip1_.senderid='ff8080812aebac2d012aef6491b3666d')) and livemessag0_.type=2 limit 6;

The statement after optimization:
select livemessag0_.id as id38_,
livemessag0_.isactive as isactive38_,
livemessag0_.content as content38_,
livemessag0_.createtime as createtime38_,
livemessag0_.userid as userid38_,
livemessag0_.objectid as objectid38_,
livemessag0_.recordid as recordid38_,
livemessag0_.type as type38_
from live_scrip livescrip1_ left join
live_message livemessag0_
on livescrip1_.id=livemessag0_.objectid
where livescrip1_.senderid = 'ff8080812aebac2d012aef6491b3666d' and
livemessag0_.type = 2
limit 6;

Summary: avoid subqueries where possible and replace them with table joins (as long as the join does not get too complex); this rewrite cut roughly one third of the query time. It later turned out that livemessag0_.objectid had no index at all.
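One thing worth checking before shipping such a rewrite is that the join really returns the same rows as the IN subquery. Because live_scrip.id is a primary key, the join cannot duplicate messages, so the two forms are equivalent here. A small sanity check using sqlite3 with made-up sample data (the real system was MySQL; the ids are placeholders):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE live_scrip  (id TEXT PRIMARY KEY, senderid TEXT);
    CREATE TABLE live_message(id INTEGER PRIMARY KEY, objectid TEXT, type INTEGER);
    CREATE INDEX idx_message_objectid ON live_message(objectid);  -- the missing index
    INSERT INTO live_scrip  VALUES ('s1','u1'), ('s2','u1'), ('s3','u2');
    INSERT INTO live_message VALUES (1,'s1',2), (2,'s2',2), (3,'s3',2), (4,'s1',1);
""")

subquery_form = """
    SELECT m.id FROM live_message m
    WHERE m.objectid IN (SELECT s.id FROM live_scrip s WHERE s.senderid = 'u1')
      AND m.type = 2
    ORDER BY m.id"""

join_form = """
    SELECT m.id FROM live_scrip s
    JOIN live_message m ON s.id = m.objectid
    WHERE s.senderid = 'u1' AND m.type = 2
    ORDER BY m.id"""

rows_subquery = [r[0] for r in conn.execute(subquery_form)]
rows_join     = [r[0] for r in conn.execute(join_form)]
```

Note also that the WHERE filter on livemessag0_.type turns the original LEFT JOIN into an effective inner join, which is why a plain JOIN is used in the check.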

3. Committing large-volume modifications in batches

Source: http://sunxboy.iteye.com/blog/153886

For example, the original statement is:

    DELETE FROM HUGETABLE WHERE condition;

It can be replaced with the following:

    BEGIN
        LOOP
            DELETE FROM HUGETABLE
            WHERE condition
            AND ROWNUM < 10000;
            EXIT WHEN SQL%NOTFOUND;
            COMMIT;
        END LOOP;
    END;
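The same chunk-and-commit pattern can be sketched outside PL/SQL as well. Below is an illustrative sqlite3 version (the table, column, and batch size are made up; LIMIT stands in for Oracle's ROWNUM, and the commit after each chunk mirrors the loop above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hugetable (id INTEGER PRIMARY KEY, flag INTEGER)")
conn.executemany("INSERT INTO hugetable VALUES (?, ?)",
                 [(i, i % 2) for i in range(25000)])
conn.commit()

def batched_delete(conn, batch=10000):
    """Delete matching rows in chunks, committing after each chunk,
    mirroring the ROWNUM-limited PL/SQL loop above."""
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM hugetable WHERE rowid IN "
            "(SELECT rowid FROM hugetable WHERE flag = 1 LIMIT ?)", (batch,))
        if cur.rowcount == 0:   # nothing left: the EXIT WHEN SQL%NOTFOUND case
            break
        conn.commit()           # commit each chunk instead of one huge transaction
        total += cur.rowcount
    return total

deleted = batched_delete(conn)
remaining = conn.execute("SELECT COUNT(*) FROM hugetable").fetchone()[0]
```

Committing per batch keeps the undo/rollback segment small and releases locks periodically, at the cost of losing all-or-nothing atomicity for the whole delete.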