Data Transfer between Business and Data Access Components in Enterprise Applications using .Net


 

Introduction

In enterprise application development, business logic and data access logic are often defined in separate tiers. This presents developers with the challenge of exchanging data between these tiers.

 

Programmers are exposed to several options:

  1. Custom objects
  2. DataSets
  3. DataReaders
  4. XML

 

Let’s explore the pros and cons of each of these approaches below.

 

Data Transfer Techniques

XML is data source independent, and it is easy to represent relational data in XML format. The .Net Framework provides support for quite a few XML technologies; for example, XPath queries can be used in code to query data from an XML document. However, this approach poses many challenges in terms of parsing: business components end up containing more XML parsing logic than business logic, which becomes a maintenance overhead.

 

DataReaders are data source dependent, even though they provide a fast, forward-only, read-only API for reading data. Attaching business rules to the columns (ordinals) in a DataReader is not possible, and the connection to the data source is kept open until the reader is closed.
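A minimal sketch of the DataReader pattern illustrates both points: columns are addressed only by ordinal or name, and the connection remains open for the lifetime of the reader. The connection string and the Customers table here are hypothetical; adjust them for your environment.

```csharp
using System;
using System.Data.SqlClient;

class DataReaderDemo
{
    // Hypothetical connection string and table for illustration only.
    const string ConnStr = "Server=.;Database=Northwind;Integrated Security=true";

    static void Main()
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "SELECT CustomerID, CompanyName FROM Customers", conn))
        {
            conn.Open();
            // The connection stays open until the reader is disposed.
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Columns are accessed by ordinal; no business
                    // semantics are attached to them.
                    Console.WriteLine("{0}: {1}",
                        reader.GetString(0), reader.GetString(1));
                }
            } // reader closed here; connection released by the outer using
        }
    }
}
```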

 

DataSet / DataTable provides a container for storing data in tabular format. It is an in-memory object that supports data retrieval and filter operations. Data aggregation, an expression language, computed columns and other built-in features are the stronger side of the DataSet. It is designed to represent not only database (relational) data but data from any source: the file system, external services and in-memory data. It also supports hierarchical data: a DataSet may contain one or more DataTables, and we can create relations between these DataTables at design time.
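A small sketch of these features, using made-up Customers and Orders tables: two DataTables are related through a DataRelation, and a computed column aggregates child rows with the built-in expression language.

```csharp
using System;
using System.Data;

class DataSetDemo
{
    static void Main()
    {
        var ds = new DataSet("OrdersData");

        var customers = ds.Tables.Add("Customers");
        customers.Columns.Add("CustomerID", typeof(int));
        customers.Columns.Add("Name", typeof(string));

        var orders = ds.Tables.Add("Orders");
        orders.Columns.Add("OrderID", typeof(int));
        orders.Columns.Add("CustomerID", typeof(int));
        orders.Columns.Add("Amount", typeof(decimal));

        // Hierarchical data: a parent/child relation between the tables.
        ds.Relations.Add("CustomerOrders",
            customers.Columns["CustomerID"], orders.Columns["CustomerID"]);

        customers.Rows.Add(1, "Acme");
        orders.Rows.Add(100, 1, 250.00m);
        orders.Rows.Add(101, 1, 75.50m);

        // Computed column using the built-in expression language.
        customers.Columns.Add("TotalAmount", typeof(decimal),
            "Sum(Child(CustomerOrders).Amount)");

        Console.WriteLine(customers.Rows[0]["TotalAmount"]); // prints 325.50
    }
}
```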

 

We can create dynamic views of the data stored in a DataSet. A DataView helps in creating different sort orders and filtering data by expressions.
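For example, a DataView can sort and filter the same in-memory rows without copying them. The Products table below is invented for illustration.

```csharp
using System;
using System.Data;

class DataViewDemo
{
    static void Main()
    {
        var table = new DataTable("Products");
        table.Columns.Add("Name", typeof(string));
        table.Columns.Add("Price", typeof(decimal));
        table.Rows.Add("Widget", 9.99m);
        table.Rows.Add("Gadget", 24.99m);
        table.Rows.Add("Gizmo", 4.99m);

        // A sorted, filtered view over the same underlying rows.
        var view = new DataView(table)
        {
            RowFilter = "Price > 5",   // expression-based filter
            Sort = "Price DESC"        // independent sort order
        };

        foreach (DataRowView row in view)
            Console.WriteLine("{0}: {1}", row["Name"], row["Price"]);
        // Gadget: 24.99
        // Widget: 9.99
    }
}
```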

 

Even though the DataSet brings many features useful for quick-to-market enterprise applications, it comes with its own overheads. Let’s explore them below.

 

DataSets come in two flavors:

  • Untyped
  • Typed

 

Consumers of untyped DataSets must know the schema at design time, so they become very hard to maintain when the schema of the underlying data source changes. An untyped DataSet merely represents data; it does not represent any domain objects or business entities, and we need additional classes to manipulate it. Adding business-rule methods is not feasible with untyped DataSets.

 

Alternatively, typed DataSets provide the flexibility of defining the columns and their types at design time using an XML Schema (XSD). This gives better performance than untyped DataSets because the compiler does not have to do type checking and conversion at run time. A typed DataSet inherits all the members of DataSet, and since it is generated from an XML Schema, it is easy to give the columns friendlier names. A DataSet cannot hold specific business rules by itself, but we can create a partial class with business-rule validation to encapsulate the typed DataSet in the business layer. Each typed DataSet generates a partial class for the DataSet, a nested partial class for each table, and a partial DataRow class for each DataTable. This is how we create business-aware typed DataSets.
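The partial-class mechanism can be sketched as follows. In a real project the OrdersDataSet classes are generated from the XSD by the typed dataset designer; here minimal hand-written stand-ins are used so the sketch compiles, and the business rule is hypothetical. The key point is that the generated classes are declared partial, so rules live in a separate file that survives regeneration.

```csharp
using System;
using System.Data;

// Stand-in for the designer-generated typed dataset (normally in a
// generated file). The real generated classes are also partial.
public partial class OrdersDataSet : DataSet
{
    public partial class OrdersRow
    {
        public decimal Amount { get; set; }
    }
}

// In practice this lives in a separate, hand-written file: business
// rules added to the generated nested row class.
public partial class OrdersDataSet
{
    public partial class OrdersRow
    {
        // Hypothetical business rule attached to the typed row.
        public bool IsWithinCreditLimit(decimal creditLimit)
        {
            return Amount <= creditLimit;
        }
    }
}

class Program
{
    static void Main()
    {
        var row = new OrdersDataSet.OrdersRow { Amount = 150m };
        Console.WriteLine(row.IsWithinCreditLimit(200m)); // prints True
    }
}
```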

 

A DataSet can be serialized and sent over the network or persisted to the file system. The .Net 1.1 Framework serialized the data as XML streams, with the data padded with schema information; as the size of the DataSet grew, performance degraded. The .Net 2.0 Framework introduced a binary format for the serialization, which saves a lot of bandwidth. We can further improve performance by excluding the schema.
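A sketch of both techniques: opting in to the .NET 2.0 binary wire format via the DataSet’s RemotingFormat property, and writing XML without the inline schema. The single-column table is invented for the demonstration.

```csharp
using System;
using System.Data;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

class SerializationDemo
{
    static void Main()
    {
        var ds = new DataSet("Demo");
        var table = ds.Tables.Add("Items");
        table.Columns.Add("Id", typeof(int));
        table.Rows.Add(1);

        // .NET 2.0+: serialize as a binary stream instead of XML.
        ds.RemotingFormat = SerializationFormat.Binary;
        using (var stream = new MemoryStream())
        {
            new BinaryFormatter().Serialize(stream, ds);
            Console.WriteLine("Binary payload: {0} bytes", stream.Length);
        }

        // Excluding the schema shrinks an XML payload further when the
        // receiver already knows the structure.
        using (var stream = new MemoryStream())
        {
            ds.WriteXml(stream, XmlWriteMode.IgnoreSchema);
            Console.WriteLine("Schema-less XML: {0} bytes", stream.Length);
        }
    }
}
```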

 

Now let’s see how we can encapsulate data using custom objects.

 

One option is to encapsulate both data and business methods in a single business object. A commonly made design mistake is to pass such business objects directly into the data access layer so they can be populated there. The business object library references the data access assembly in order to call its methods, so the data access component must not reference the business component in turn: that would create a circular reference.

 

Another option is to encapsulate data in a separate custom class called a Data Transfer Object (DTO). The DTO, formerly known as the Value Object, is a design pattern used to transfer data between software application subsystems. DTOs are often used in conjunction with data access objects to retrieve data from a database. The difference between data transfer objects and business objects or data access objects is that a DTO has no behavior except for storage and retrieval of its own data (accessors and mutators). Custom objects should implement the collection interfaces (IEnumerable) to store multiple values, and the ISerializable interface if the object has to be serialized.

DTOs are designed to move data between the different layers of an enterprise distributed application. They can be created in a separate project/assembly and referenced from the Presentation, Business and Data Access layers. We can also use any of the ORM tools to construct the DTOs.
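A minimal sketch of the pattern, with a hypothetical CustomerDto and CustomerDao: the DTO carries only data, while the data access class materializes DTOs for the layers above (in a real application the rows would come from a DataReader rather than being hard-coded).

```csharp
using System;
using System.Collections.Generic;

// A DTO carries data only: accessors/mutators, no behavior.
[Serializable]
public class CustomerDto
{
    public int CustomerId { get; set; }
    public string Name { get; set; }
}

// Hypothetical data access class that returns DTOs to upper layers.
public static class CustomerDao
{
    public static List<CustomerDto> GetCustomers()
    {
        // Stand-in data; normally populated from a DataReader.
        return new List<CustomerDto>
        {
            new CustomerDto { CustomerId = 1, Name = "Acme" },
            new CustomerDto { CustomerId = 2, Name = "Globex" }
        };
    }
}

class Program
{
    static void Main()
    {
        // Presentation or business layer consumes plain DTOs.
        foreach (var c in CustomerDao.GetCustomers())
            Console.WriteLine("{0}: {1}", c.CustomerId, c.Name);
    }
}
```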

 

Conclusion

Both typed DataSets and custom entities can be used when building systems, and both accomplish the same objective. DataSets are best suited for small applications or prototyping, where business entities would add complexity. Custom objects may be the best choice for large and complex business applications: they are easier to maintain and improve the readability of the code.

 

Reposted from: http://www.dotnetfunda.com/articles/article603-data-transfer-between-business-and-data-access-components-in-enterprise-app.aspx
