FIRST_ROWS_n (repost)

This article looks at the CBO (Cost-Based Optimizer) modes introduced in Oracle9i, in particular the FIRST_ROWS_n optimizer directives. Through examples it shows how to set FIRST_ROWS_n and how it differs from the traditional ALL_ROWS mode.
Before Oracle9i, the only two CBO modes were ALL_ROWS and FIRST_ROWS. One shortcoming of traditional FIRST_ROWS optimization is that its algorithm does not take into account how many rows will actually be retrieved.

Oracle9i, however, introduces several new optimizer directives:

FIRST_ROWS_1
FIRST_ROWS_10
FIRST_ROWS_100
FIRST_ROWS_1000
FIRST_ROWS_n optimization instructs the CBO to choose a query execution plan that minimizes the time needed to return the first n rows of the result.

You can set this new CBO mode at several levels: system-wide, at the session level, or at the query level.


alter system set optimizer_mode = first_rows_100;

alter session set optimizer_mode = first_rows_100;

select /*+ first_rows(100) */ * from student;


According to Oracle, with FIRST_ROWS_n optimization a query can deliver its first n rows with minimal response time. Returning those first rows faster improves user satisfaction with the application, because users see their initial data sooner.

Whereas FIRST_ROWS optimization favors index access, ALL_ROWS optimization favors full-table scans. Oracle extends this idea with FIRST_ROWS_n optimization.

With traditional FIRST_ROWS optimization, the Oracle CBO favors an index scan even when its total cost is higher than that of a full-table scan. This is especially noticeable on smaller tables, where a full-table scan is not expensive.

Consider the following example.


set autotrace on explain
alter session set optimizer_goal = choose;

select * from emp where sal < 1200;

PLAN
-----------------------------------------------------
SELECT STATEMENT (OPTIMIZER=CHOOSE) (COST=62) (ROWS=99)
  TABLE ACCESS FULL EMP (COST=62) (ROWS=99)


Now run the same query with FIRST_ROWS optimization.


alter session set optimizer_goal = first_rows;

select * from emp where sal < 1200;

The explain plan is now transformed to:

PLAN
-----------------------------------------------------
SELECT STATEMENT (OPTIMIZER=FIRST_ROWS) (COST=102)
  TABLE ACCESS BY INDEX ROWID EMP (COST=102) (ROWS=99)
    INDEX RANGE SCAN SAL_IDX (COST=2) (ROWS=99)


We expected the CBO to favor the index, but it is still surprising to see it choose a plan that is more expensive than a full-table scan. This is the critical point: before Oracle9i, FIRST_ROWS optimization was a blend of internal rules and costing, whereas the Oracle9i FIRST_ROWS_n optimization is entirely cost-based.
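Because the new modes are fully cost-based, the plan the CBO picks can change with n. A minimal sketch of how to compare them, assuming the same emp table and SAL_IDX index as in the example above:

```sql
-- Cost the plan for returning only the first row.
alter session set optimizer_mode = first_rows_1;
select * from emp where sal < 1200;

-- Cost the plan for returning the first 1000 rows; on a small
-- table the CBO may now prefer the full-table scan instead.
alter session set optimizer_mode = first_rows_1000;
select * from emp where sal < 1200;
```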

Before Oracle9i, we used the OPTIMIZER_INDEX_COST_ADJ parameter to make the CBO favor indexes.
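For reference, a sketch of that pre-9i approach: the parameter defaults to 100, and lowering it makes index access paths look proportionally cheaper to the CBO.

```sql
-- Default is 100; a value of 20 makes index access appear
-- one-fifth as expensive, biasing the CBO toward index scans.
alter session set optimizer_index_cost_adj = 20;
select * from emp where sal < 1200;
```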

Although Oracle claims that FIRST_ROWS_n optimization makes queries faster, remember that the Oracle9i CBO is only costing access to those first rows. In other words, all the FIRST_ROWS_n modes really do is make a smarter choice between an index and a full-table scan when accessing small tables. Because most Oracle9i DBAs already keep these small tables cached in the KEEP pool, the parameter's range of usefulness is not broad.
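Assigning a small table to the KEEP pool is a one-line change. A sketch, assuming a KEEP buffer pool has already been sized (via buffer_pool_keep or db_keep_cache_size):

```sql
-- Cache the small emp table in the KEEP buffer pool so repeated
-- reads are served from memory regardless of the access path chosen.
alter table emp storage (buffer_pool keep);
```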

Source: ITPUB blog, http://blog.itpub.net/35489/viewspace-997010/. Please credit the source when reposting.

