Counting Rows in SQL Server Integration Services

This article describes how to count rows in SQL Server Integration Services using the Row Count component: create an integer variable, add the component at the desired point in the Data Flow, and set its variable name to capture the count. It also covers several uses for row counts, such as recording package execution statistics and auditing data quality, and notes where the component does and does not apply.


I am often asked if there is a way to capture row counts in SSIS. For example, a user may want to know how many rows passed along certain outputs of a Conditional Split, in order to compare the ratio of, say, high- and low-value line items in a day's sales.

Integration Services provides a great way to do this, using the Row Count component. To use this component, first create a variable of integer type (Int32, the default, so that's easy) at a scope where it is visible from your Data Flow task.

Now add your Row Count component to the Data Flow at the point in the process where you would like to count rows. Edit the Row Count component and set its VariableName property to the name of the variable you created.


When you execute the Data Flow, the number of rows that pass through the Row Count component is written to the named variable. However, it's important to note that the variable value does not change until the Data Flow has completed. This is true of all SSIS package variables referenced in a Data Flow, even when using the Script component: the values are locked when execution of the Data Flow starts, and they are only updated at the end. (Local VB.NET variables within the Script component can be changed during the flow, but they cannot be used outside the script.)

So what can you do with this row count?


One use I like is to populate a table that captures detailed package execution statistics. I create a number of variables, such as VarErrorRows and VarGoodRows, and use an Execute SQL task to write to a table with a statement such as:

INSERT INTO [My_Audit_Table]
    ([Package name]
    ,[Machine name]
    ,[Username]
    ,[ErrorRows]
    ,[GoodRows]
    ,[Execution start time]
    ,[AcceptancePercent])
VALUES (?, ?, ?, ?, ?, ?, ?)

And I map the variables appropriately. Note that I can use system variables - System::PackageName, System::MachineName, System::UserName and System::StartTime - to map Package name, Machine name, Username and Execution start time.
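For reference, here is a minimal sketch of what such an audit table might look like. The table and column names follow the statement above, but the data types are assumptions:

CREATE TABLE [My_Audit_Table]
(
    [Package name]          nvarchar(200) NOT NULL  -- from System::PackageName
   ,[Machine name]          nvarchar(100) NOT NULL  -- from System::MachineName
   ,[Username]              nvarchar(100) NOT NULL  -- from System::UserName
   ,[ErrorRows]             int           NOT NULL  -- from User::VarErrorRows
   ,[GoodRows]              int           NOT NULL  -- from User::VarGoodRows
   ,[Execution start time]  datetime      NOT NULL  -- from System::StartTime
   ,[AcceptancePercent]     numeric(5,2)  NOT NULL  -- e.g. a User variable derived from the two counts
)

On the Execute SQL task's Parameter Mapping page, the seven ? placeholders are matched by ordinal (parameter names 0 through 6 for an OLE DB connection) to the variables noted in the comments above.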

Another interesting use is to audit a sample of the data in a process before loading the entire data set. For example, let's say I have an extract which is pretty expensive - an involved query, or a flat file pulled over a slow network connection from a remote log server. The data may be fine, but on the other hand it may have various quality issues that would prevent me from loading it into my warehouse. It's expensive data to get to, though, so I want to audit its quality and, if possible, load it in a single operation.

I work this scenario using two Data Flows - the first to extract and audit, and the second to load.


In this case, I add a Multicast immediately after the source adapter. One leg of the Multicast goes straight to a Raw File destination, so I capture the source data on my integration server. Another leg goes through a Row Sampling or Percentage Sampling component to sample, say, 10% of the load. I immediately count the sampled rows into a variable, SampleSize. Next I apply whatever auditing logic I like - conditionally splitting out null keys, columns which have missing values, and so on. At the end of my auditing I can add another Row Count component to capture, in GoodRows, the number of rows that passed the audit.
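As a sketch of that auditing logic, the condition on a Conditional Split output that catches bad rows might look like this (the column names here are hypothetical):

ISNULL([CustomerKey]) || ISNULL([ProductName]) || TRIM([ProductName]) == ""

Rows matching the condition are diverted down the bad-rows output, while the remaining rows flow on to the Row Count component that feeds GoodRows.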

The second Data Flow is my real business logic to load the warehouse, but this flow sources from the Raw File rather than the original source - no need to stress that slow connection again, or to run that query twice.


The important thing is to add an expression to the precedence constraint between the two Data Flows. The second Data Flow only runs if the first succeeds and the expression GoodRows / SampleSize >= x evaluates to True. Here x can be whatever value you like, or could even be another variable, configured from an XML file if you wish.
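A sketch of that constraint expression, assuming the variable names above and a 95% threshold; the casts to DT_R8 matter, because dividing two integer variables would otherwise perform integer division:

(DT_R8)@[User::GoodRows] / (DT_R8)@[User::SampleSize] >= 0.95

Set the precedence constraint's evaluation operation to "Expression and Constraint" so that both the success of the first Data Flow and the expression must be satisfied.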

And of course you can capture all these metrics into an audit table just as before.


However, the Row Count is not the answer to all row-counting scenarios. One common question is: how many rows were inserted into a destination? You may be tempted to count these with a Row Count component just before the destination component. This may work for a flat file, but not reliably for a database, because it would not take into account rows which failed to be inserted. So there are two patterns you could use for OLE DB destinations:

- Use an Execute SQL task to count rows before and after the Data Flow has executed, and compare (see the sketch below);
- Add a Row Count component before the destination, and one on the error output of the destination, and compare the values of the two variables after the Data Flow has completed.
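A minimal sketch of the first pattern; the destination table name is hypothetical, and each query runs in its own Execute SQL task with the single-row result set mapped to an integer variable:

-- Before the Data Flow: map RowsBefore to User::PreCount
SELECT COUNT(*) AS RowsBefore FROM [dbo].[MyDestinationTable];

-- After the Data Flow: map RowsAfter to User::PostCount
SELECT COUNT(*) AS RowsAfter FROM [dbo].[MyDestinationTable];

The number of rows actually inserted is then PostCount - PreCount, assuming no other process writes to the table during the load.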

But let's not end on a negative. I love the Row Count component, so here's another neat use. The Row Count component can be used without an output. In other words, it can, in effect, be a destination. This is as cool as a very cool thing indeed, because it means you can run a Data Flow and debug it without the data going anywhere. Think of it - no more temp tables, or dummy text files, just to get your process working while you debug. Develop first using a Row Count destination and then, when you're happy with the process, hook up a real destination and you're ready to go - after some final testing, of course.


I hope this gives some insight into this elegantly simple, often overlooked, but very valuable little component.


Reposted from: https://www.cnblogs.com/net2004/archive/2005/04/13/136668.html
