SAP BW – Implementing Delta Updates in the Financial Domain

This article examines the challenges encountered when implementing delta updates in SAP BW, including the required data flow architecture, the complexity and time consumption of historical data loads, and the data quality problems that can surface in a production environment. It also presents an effective strategy for loading historical data step by step.
 
Challenges with delta updates 
Delta updates in SAP BW are used when we have to update our data targets with
recently changed information: newly created documents or old documents that have
recently been modified. The way this mechanism works differs from extractor to
extractor; however, there are some similarities. If we take a financial line item
extractor (datasource 0FI_GL_4), the delta process picks up all newly created
documents based on a document timestamp and, in addition, all documents modified
since the last extract.
 
There are a number of challenges in implementing this process and moving it into
production. First of all, the data flow architecture has to be set up as follows: Source ->
ODS object -> Data target (Figure A). The ODS object identifies the changes made to
individual characteristics and key figures within a delta data record. Other data
destinations (InfoCubes) can be supplied with data from this ODS object. If documents
are corrected in the source system, the corrections are tracked in the ODS object and a
reversal entry is created for the InfoCube automatically.
 
Figure A 
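
To make the reversal mechanism concrete, here is a minimal sketch in plain Python (not ABAP and not the actual BW change-log implementation) of how a before image and an after image of a corrected line item translate into the records passed on to the InfoCube. The field names doc_no, gl_account and amount are illustrative assumptions.

    # Minimal sketch, assuming a corrected line item is represented by a "before"
    # and an "after" image; real BW change logs work with 0RECORDMODE, this only
    # mirrors the additive effect of a reversal entry plus the new image.
    def change_log_records(before_image, after_image):
        records = []
        if before_image is not None:
            reversal = dict(before_image)
            reversal["amount"] = -before_image["amount"]   # reversal entry: key figure with inverted sign
            records.append(reversal)
        records.append(dict(after_image))                  # new image with the corrected values
        return records

    # Example: document 4711 corrected from 100.00 to 120.00
    old = {"doc_no": "4711", "gl_account": "400000", "amount": 100.00}
    new = {"doc_no": "4711", "gl_account": "400000", "amount": 120.00}
    for record in change_log_records(old, new):
        print(record)
    # net effect on the InfoCube: -100.00 + 120.00 = +20.00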
 
This setup has certain advantages and drawbacks. For example, it is not easy to reload
documents for a certain period of time or for a specific range of documents. Depending on
how the delta process is initialized, you may be able to simplify the data refresh
procedure if it is needed at a later stage.
 
Another challenge in this process is the historical data load. If you have to load several
years of transactional data, the system won't let you use full updates year by year and
then switch back to delta. On the other hand, if you try to initialize the delta process for
the whole history, your database engine may not be able to handle days of uninterrupted
processing time.
 
So what is the solution? Well, the delta process can be initialized in a way that allows
you to transfer historical data step by step and still make targeted updates to the system
later on.
Which data is picked up by Delta 
Delta extraction enables you to load into the BW system only the data that has been
added or changed since the last extraction. Data that has already been extracted and has
not changed is kept and does not need to be deleted before a new upload.
 
There are two streams of data picked up by the delta process via the datasource
0FI_GL_4 (a conceptual sketch follows the list):
 
1) All documents created in the source system with a timestamp later than the documents
picked up by the last delta. A timestamp on the line items identifies the status of the
delta data; timestamp intervals that have already been read are stored in a timestamp
table (BWOM2_TIMEST).
 
2) All document line items changed since the last data request in the SAP R/3 system.
Line items that are changed in a way relevant to BW are logged in the source system in
the delta queue table (BWFI_AEDAT).
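
The selection logic of these two streams can be pictured with a short sketch. This is plain Python rather than ABAP; the table names BWOM2_TIMEST and BWFI_AEDAT come from the text above, while the record layout (doc_key, created_ts, change_ts) is a deliberately simplified assumption.

    # Minimal sketch of the two delta streams, assuming simplified record layouts.
    def select_delta(line_items, bwfi_aedat_entries, last_read_upper_ts):
        # Stream 1: documents created after the upper timestamp stored for the last
        # delta (the intervals already read live in BWOM2_TIMEST in the real system)
        new_docs = [i for i in line_items if i["created_ts"] > last_read_upper_ts]

        # Stream 2: documents whose BW-relevant changes were logged in the delta
        # queue table (BWFI_AEDAT) since the last data request
        changed_keys = {e["doc_key"] for e in bwfi_aedat_entries
                        if e["change_ts"] > last_read_upper_ts}
        changed_docs = [i for i in line_items if i["doc_key"] in changed_keys]

        # a document may appear in both streams, so de-duplicate by document key
        picked, result = set(), []
        for item in new_docs + changed_docs:
            if item["doc_key"] not in picked:
                picked.add(item["doc_key"])
                result.append(item)
        return result

In the real extractor the comparison is made against timestamp intervals rather than a single value, but the principle of combining the two streams is the same.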
 
Stumbling points when moving to Production 
As soon as all objects are transported to production we have to start historical data load. 
This process may be complex and time consuming. At the same time when loading 
production data you may encounter problems you never faced in the Q&A environment. 
For example, production transactions may contain disallowed characters for some 
characteristics, or have lower case characters in the cases when they are not permitted. 
This is usually discovered during historical data loads, and therefore requires corrections 
made in the development environment, transporting changes all the way into production 
and finally reloading data. 
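
As an illustration of the kind of correction that typically has to be retrofitted, the sketch below shows a simple cleansing routine in plain Python (in a real project this logic would sit in an ABAP transfer rule or routine). The allowed-character set is an assumed example, not the actual permitted-character setting of any system.

    # Minimal cleansing sketch: uppercase the value and replace characters that
    # BW would reject with '#'. The ALLOWED set below is an assumed example.
    ALLOWED = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 ._-/")

    def cleanse_characteristic(value):
        value = value.upper()                      # lower case is often not permitted
        return "".join(ch if ch in ALLOWED else "#" for ch in value)

    print(cleanse_characteristic("Profit ctr: west*1"))   # -> PROFIT CTR# WEST#1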
 
On the other hand, dealing with huge data volumes creates problems related to database
capacity, server processing power and, in some cases, disk space. In practice, it is
extremely important to load data in reasonable portions, which allows you to monitor and
keep control over the database.
 
How to deal with historical volumes 
When historical volumes run into millions of records, it is important to find a way to
split the loads into portions and upload the data step by step. In the scenario of delta
loads to the ODS object and later to the InfoCube, you have to use initial loads in order
to activate the delta extractors.
 
The first step in this process is to analyze the historical data and identify objects that
allow you to split the loads into reasonable portions. These objects could be company
code ranges, cost center ranges, and so on. It is important that these ranges do not
change over time. Time periods can be used here as well. However, if you split only by
time periods and later discover that documents for a certain company code have to be
reloaded, you would need to refresh the whole history for the cube. If, on the contrary,
you make your initial loads by company code, you gain the extra flexibility of refreshing
the history for a certain company code only.
 
The second step involves running the initial loads step by step for each company code /
time period range. For example, we start with company codes 01-10 for the year 2000 and
continue year by year up to the last historical year. For the continuous delta update we
then run an initial load for a time period range from the current year until 31/12/9999.
Then we proceed to the next company code range, say 11-20, and finally 21-99 (Figure B),
as laid out in the sketch below.
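
The following sketch enumerates these initialization selections. It is plain Python for illustration only; the company code ranges come from the example above, while the historical years (2000-2006) and the assumed current year (2007) are made-up values.

    # Sketch: build the list of delta-initialization selections, assuming
    # historical years 2000-2006 and an assumed current year of 2007.
    company_code_ranges = [("01", "10"), ("11", "20"), ("21", "99")]
    historical_years = range(2000, 2007)

    init_selections = []
    for cc_low, cc_high in company_code_ranges:
        for year in historical_years:
            init_selections.append({"comp_code": (cc_low, cc_high),
                                    "posting_date": (f"{year}0101", f"{year}1231")})
        # open-ended range that keeps the delta running for current and future postings
        init_selections.append({"comp_code": (cc_low, cc_high),
                                "posting_date": ("20070101", "99991231")})

    for selection in init_selections:
        print(selection)      # each entry corresponds to one init run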
 
This approach keeps the historical data loads at a reasonable size and leaves you the
flexibility to refresh data at a later stage.

Job scheduling 
The delta update process can be fully automated. This means that no manual involvement
is required in the daily data updates unless there are system problems or breakdowns.
 
A daily update process consists of three major phases (sketched below):
1) Master data updates, e.g. customer, vendor and G/L account data;
2) A daily transactional (delta) extraction from the source system into the ODS object;
3) A further upload from the ODS object into the InfoCube; at this stage additional
transaction lines may be generated, depending on the number of reversal entries required
in the cube.
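
A minimal sketch of this schedule as a chain of dependent steps is given below, in the spirit of a BW process chain. It is plain Python with placeholder functions, not actual BW job-scheduling APIs.

    # Placeholder steps; in a real system these would be InfoPackages and a data
    # mart load triggered by a process chain or by scheduled background jobs.
    def load_master_data():        return True   # customer, vendor, GL account attributes/texts
    def load_delta_to_ods():       return True   # 0FI_GL_4 delta request into the ODS object
    def update_cube_from_ods():    return True   # ODS -> InfoCube, may add reversal entries
    def notify_operators(message): print(message)

    def run_daily_update():
        steps = [("Master data updates",           load_master_data),
                 ("Delta extraction into the ODS", load_delta_to_ods),
                 ("Upload from ODS to InfoCube",   update_cube_from_ods)]
        for description, step in steps:
            if not step():
                notify_operators("Daily update stopped at: " + description)
                break                    # later phases depend on the earlier ones

    run_daily_update()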
 
This standard job schedule may be preceded or followed by other relevant jobs,
depending on the system design and on whatever other extractions are required by the
overall solution.
