Connection conn = getConnection();
PreparedStatement ps = null;
try {
    conn.setAutoCommit(false);
    ps = conn.prepareStatement("insert into t_user(username, password) values(?,?)");
    Random r = new Random();
    for (int i = 0; i < 1000; i++) {
        ps.setString(1, String.valueOf(r.nextInt(Integer.MAX_VALUE)));
        ps.setString(2, String.valueOf(r.nextInt()));
        ps.executeUpdate();
        [color=red]conn.commit(); // commit on every iteration[/color]
    }
} catch (SQLException e) {
    conn.rollback();
    throw e;
} finally {
    if (ps != null) ps.close();
    conn.close();
}
Committing a transaction on every loop iteration, inserting 1,000 rows: 29,078 ms.
Connection conn = getConnection();
PreparedStatement ps = null;
try {
    conn.setAutoCommit(false);
    ps = conn.prepareStatement("insert into t_user(username, password) values(?,?)");
    Random r = new Random();
    for (int i = 0; i < 1000; i++) {
        ps.setString(1, String.valueOf(r.nextInt(Integer.MAX_VALUE)));
        ps.setString(2, String.valueOf(r.nextInt()));
        ps.executeUpdate();
    }
    conn.commit(); // commit only once
} catch (SQLException e) {
    conn.rollback();
    throw e;
} finally {
    if (ps != null) ps.close();
    conn.close();
}
Committing only once: 703 ms.
But this still isn't the recommended approach: JDBC has a batching API designed for exactly this kind of bulk operation, ps.addBatch().
Connection conn = getConnection();
PreparedStatement ps = null;
try {
    conn.setAutoCommit(false);
    ps = conn.prepareStatement("insert into t_user(username, password) values(?,?)");
    Random r = new Random();
    for (int i = 0; i < 1000; i++) {
        ps.setString(1, String.valueOf(r.nextInt(Integer.MAX_VALUE)));
        ps.setString(2, String.valueOf(r.nextInt()));
        ps.addBatch(); // add to the batch
        // ps.executeUpdate(); must not be combined with addBatch()
    }
    int[] results = ps.executeBatch(); // execute the whole batch
    conn.commit();
} catch (SQLException e) {
    conn.rollback();
    throw e;
} finally {
    if (ps != null) ps.close();
    conn.close();
}
The batch approach took 781 ms, roughly the same as the single-commit version.
One thing worth stressing: do not call addBatch() and executeUpdate() together, or every record is inserted twice: 1,000 rows become 2,000. I made exactly this mistake at first.
The first approach is about 40x slower than the second and third. Commits really are a performance killer, so issue as few as you can.
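One refinement the post doesn't show: for much larger imports, a common pattern is to call executeBatch() every few hundred rows instead of buffering everything, so the driver's pending batch stays bounded. The sketch below models only the flush counting of that pattern (BatchChunks, BATCH_SIZE, and flushCount are illustrative names of mine, not from the post); in real code the JDBC calls go where the comments indicate.

```java
// Sketch of chunked batching: flush every BATCH_SIZE rows, plus one final
// flush for a partial batch. Pure counting logic, no database required.
public class BatchChunks {
    static final int BATCH_SIZE = 200;

    // Returns how many executeBatch() calls a loop of `rows` inserts makes
    // when it flushes every BATCH_SIZE additions and once more at the end
    // if a partial batch remains.
    static int flushCount(int rows) {
        int flushes = 0, pending = 0;
        for (int i = 0; i < rows; i++) {
            pending++;                  // ps.addBatch() would go here
            if (pending == BATCH_SIZE) {
                flushes++;              // ps.executeBatch() would go here
                pending = 0;
            }
        }
        if (pending > 0) flushes++;     // final ps.executeBatch() before commit
        return flushes;
    }

    public static void main(String[] args) {
        System.out.println(flushCount(1000)); // 5 full batches of 200
        System.out.println(flushCount(1001)); // 5 full batches + 1 partial
    }
}
```

For the 1,000-row test above this changes nothing measurable; it matters once you are inserting hundreds of thousands of rows in one transaction.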
http://xuliangyong.iteye.com/blog/237728