About Repair Full Request

This article explains the Repair Full Request function in SAP BW and how to use it: for missing or corrupt data records, a dedicated procedure repairs the data without the pain of a complete re-initialization. It also compares Full Update with Delta Update.
 

I had seen interview questions about the repair full request in several places, but because I never finished studying BW end to end, I could never work out what it was. I ran into it again this morning, so I spent the whole morning reading up on it on SDN, and now I have a rough picture.

Let's start with the note. (It turns out SAP OSS notes are only visible to people with the proper authorization; fortunately someone posted this one on SDN.)

 

OSS Note 739863 'Repairing data in BW'

 

"Symptom Some data is incorrect or missing in the PSA table or in the ODS object (Enterprise Data Warehouse layer). There may be a number of reasons for this problem: Errors in the relevant application, errors in the user exit, errors in the Delta Queue, handling errors in the customers posting procedure (for example, a change in t he extract structure during production operation if the Delta Queue was not yet empty; postings before the Delta Init was completed, and so on), extractor errors, unplanned system terminations in BW and in R/3, and so on.

 

Solution

 

Read this note in full BEFORE you start actions that may repair your data in BW. Contact SAP Support for help with troubleshooting before you start to repair data.

BW offers you the option of a full upload in the form of a repair request (as of BW 3.0B).

If you want to use this function, we recommend that you use the ODS object layer.

Note that you should only use this procedure if you have a small number of incorrect or missing records.

Otherwise, we always recommend a reinitialization (possibly after a previous selective deletion, followed by a restriction of the Delta-Init selection to exclude areas that were not changed in the meantime).

 

1. Repair request: definition

If you flag a request as a repair request with full update as the update mode, it can be updated to all data targets, even if these already contain data from delta initialization runs for this DataSource/source system combination. This means that a repair request can be updated into all ODS objects at any time without a check being performed. The system supports loading by repair request into an ODS object without a check being performed for overlapping data or for the sequence of the requests. This action may therefore result in duplicate data and must thus be prepared very carefully. The repair request (of the "Full Upload" type) can be loaded into the same ODS object in which the 'normal' delta requests run. You will find this request under the "Repair Request" option in the InfoPackage (Maintenance) menu.
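The crucial phrase in this definition is "without a check being performed". A minimal Python sketch of that admission logic (purely conceptual; the classes and the delta-init check are invented stand-ins, not SAP code) shows why the note has to warn about duplicates:

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    records: list           # list of (doc_number, amount) tuples
    update_mode: str        # "full" or "delta"
    repair: bool = False    # Scheduler -> "Flag request as repair request"

@dataclass
class OdsObject:
    delta_initialized: bool = True
    data: list = field(default_factory=list)

    def load(self, request: Request):
        # A normal full request clashes with an existing delta init...
        if (request.update_mode == "full" and not request.repair
                and self.delta_initialized):
            raise ValueError("full request rejected: target already has a delta init")
        # ...but a repair request is admitted with NO overlap or sequence
        # check, so records that are already present simply arrive again.
        self.data.extend(request.records)

ods = OdsObject()
ods.load(Request([("4711", 100)], "delta"))
ods.load(Request([("4711", 100)], "full", repair=True))   # accepted unchecked
print(ods.data)   # document 4711 is now in twice -> prepare repairs carefully
```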

 

2. Prerequisites for using the "Repair Request" function

 

2.1. Troubleshooting

Before you start the repair action, you should carry out a thorough analysis of the possible cause of the error to make sure that the error cannot recur when you execute the repair action. For example, if a key figure has already been updated incorrectly in the OLTP system, it will not change after a reload into BW. Use transaction RSA3 (Extractor Checker) in the source system for help with troubleshooting. Another possible source of the problem may be your user exit. To ensure that the user exit is correct, first load a probe full request into the PSA table with the exit active and check whether the data is correct. If it is not correct, search for the error in the user exit. If you do not find it, we recommend that you deactivate the user exit for testing purposes and request a new full upload. If the data then arrives correctly, it is highly probable that the error is indeed in the user exit.

We always recommend that you load the data into the PSA table in the first step and check the result there.
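The test the note describes is essentially a bisection: load once with the exit active and once without, then compare. A small Python sketch of that logic (all names here are hypothetical; nothing is a real SAP API):

```python
# Conceptual sketch of the bisection test in 2.1 (invented names, not SAP
# APIs): compare a PSA load with the user exit active against one with it
# deactivated, to localize the error.

def extractor(doc):
    # stand-in for the raw extractor result (what RSA3 would show)
    return {"doc": doc, "amount": 100}

def user_exit(record):
    # stand-in for the customer enhancement; here it carries the bug
    record["amount"] *= 2
    return record

def load_to_psa(docs, exit_active):
    rows = [extractor(d) for d in docs]
    return [user_exit(r) for r in rows] if exit_active else rows

with_exit = load_to_psa(["4711"], exit_active=True)
without_exit = load_to_psa(["4711"], exit_active=False)
if with_exit != without_exit:
    print("results differ -> the error is very probably in the user exit")
```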

 

2.2. Analyze the effects on the downstream targets

Before you start the repair request into the ODS object, make sure that the incorrect data records are selectively deleted from the ODS object. However, before you decide on selective deletion, you should read the info help for the "Selective Deletion" function, which you can access by pressing the extra button on the relevant dialog box. The activation queue and the change log remain unchanged during the selective deletion of the data from the ODS object, which means that the incorrect data is still in the change log afterwards. After the selective deletion, you therefore must not reconstruct the ODS object if it is reconstructed from the change log. (Reconstruction is usually from the PSA table but, if the data source is the ODS object itself, the ODS object is reconstructed from its change log.) You MUST read the recommendations and warnings about this (press the "Info" button).

You MUST also take into account the fact that the delta for the downstream data targets is created from the change log. If you perform selective deletion and then reload data into the deleted area, this may result in data inconsistencies in the downstream data targets. If you only use MOVE (overwrite) and do not use ADD for updates in the ODS object, selective deletion may not be required in some cases (for example, if incorrect records only have to be changed, rather than deleted). In this case, the DataMart delta also remains intact.
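To make the pitfall concrete, here is a tiny Python model (invented structures, not SAP code) of a DSO with an active table and a change log. It shows how selective deletion leaves the change log, and hence any reconstruction or data-mart delta fed from it, still carrying the bad records:

```python
class Dso:
    def __init__(self):
        self.active = {}      # what selective deletion operates on
        self.changelog = []   # full history, untouched by selective deletion

    def activate(self, key, record):
        self.changelog.append((key, record))
        self.active[key] = record

    def selective_delete(self, keys):
        for k in keys:
            self.active.pop(k, None)     # change log deliberately left alone

    def reconstruct_from_changelog(self):
        for key, record in self.changelog:
            self.active[key] = record    # replays the bad records too!

dso = Dso()
dso.activate("4711", {"amount": 999})    # the incorrect record
dso.selective_delete(["4711"])           # gone from the active table...
dso.reconstruct_from_changelog()
print(dso.active)                        # ...and back again after reconstruction
```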

 

2.3. Analysis of the selections

You must be very precise when you perform selective deletion: some applications do not provide the option of selecting individual documents for the load process. Therefore, you must first ensure that you can load the same range of documents into BW as you would delete from the ODS object. This note provides some application-specific recommendations to help you "repair" the incorrect data records.

If you updated the data from the ODS object into the InfoCube, you can also delete it there using the "Selective deletion" function. However, if it is compressed at document level there and deletion is no longer possible, you must delete the InfoCube content and fill the data from the ODS object again after the repair. You can only perform this action after a thorough analysis of all effects of selective data deletion. We naturally recommend that you test this first in the test system.

The procedure generally applies for all SAP applications/extractors. The application determines the selections. For example, if you cannot use the document number for selection but you can select documents for an entire period, then you are forced to delete and then update documents for the entire period in the data target. Therefore, it is important to look closely at the selections in the InfoPackage before you delete data from the data target.

Some applications have additional special features:

Logistics cockpit: As preparation for the repair request, delete the setup table (if you have not already done so) and fill it selectively with concrete document numbers (or other possible groups of documents determined by the selection). Execute the repair request. Caution: You can currently use the transactions that fill setup tables with reconstruction data to select individual documents or entire ranges of documents (at present, it is not possible to select several individual documents if they are not numbered in sequence).

FI: The repair request for the full upload is not required here. The following efficient alternatives are provided: in the FI area, you can select documents that must be reloaded into BW, make a small change to them (for example, insert a period into the assignment text) and save them; as a result, the document is placed in the delta queue again, and the previously loaded document under the same number in the BW ODS object is overwritten. FI also has an option for sending the documents selectively from the OLTP system to the BW system using correction programs (see note 616331).
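The selection rule at the start of this section can be modeled in a few lines of Python (the period and document values are made up for illustration): whatever range you selectively delete must be exactly the range the InfoPackage can load back.

```python
# Conceptual sketch (invented names) of the selection rule in 2.3: here only
# selection by period is possible, so a whole period is deleted and reloaded
# even though only one document in it is wrong.

target = {("2024-01", "D1"): 10, ("2024-01", "D2"): 99,   # D2 is wrong
          ("2024-02", "D3"): 30}
source = {("2024-01", "D1"): 10, ("2024-01", "D2"): 20,   # correct values
          ("2024-02", "D3"): 30}

def selective_delete(period):
    for key in [k for k in target if k[0] == period]:
        del target[key]

def repair_full_load(period):
    target.update({k: v for k, v in source.items() if k[0] == period})

# The extractor cannot select single documents, so delete and reload ALL
# of 2024-01, not just the one wrong document D2.
selective_delete("2024-01")
repair_full_load("2024-01")
print(target)   # D2 corrected, D1 reloaded unchanged, 2024-02 untouched
```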

 

3. Repair request execution

How do you proceed if you want to load a repair request into the data target? Go to the maintenance screen of the InfoPackage (Scheduler), set the type of data upload to "Full", and select the "Scheduler" option in the menu -> Full Request Repair -> Flag request as repair request -> Confirm. Update the data into the PSA and then check that it is correct. If the data is correct, continue to update into the data targets."

 

After a few hours of reading, looking back, the repair full request really is important and must come up often in real projects. Even with the delta init done, the delta updates running, and the process chain in place, the system will not necessarily play by the rules from then on. Every now and then it throws a missing record or a corrupt record at you and gives you a headache.

For a delta update that has been running for a long time, the target InfoProvider in BW already holds a large amount of data, and suddenly finding a missing or corrupt record is a real headache: you cannot delete everything and reload from scratch, and the delta repetition in RSA7 can only repeat the last delta, so records from a delta loaded long ago cannot be re-uploaded that way.

This is where the repair full request comes in.

A repair full request is used once you have found missing or corrupt records. (If the data goes straight into a cube, first delete the affected records from the cube, otherwise the reloaded data will be duplicated; if it goes into a DSO, no deletion is needed, because the DSO overwrites by key.) Create a new InfoPackage, choose full update mode, then Scheduler --> Repair Full Request, restrict the selection on the package's extraction tab to the data records or period you want to re-extract, and start the InfoPackage. The subsequent DTP and transformation run as usual. The sketch below illustrates the cube-versus-DSO difference.
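A tiny Python sketch (invented classes, only a conceptual model) of why the cube needs the selective delete before the repair load while the DSO does not:

```python
# A cube only ever adds records; a standard DSO overwrites on its key.

class Cube:
    def __init__(self):
        self.facts = []
    def load(self, rows):
        self.facts.extend(rows)              # additive: duplicates pile up

class Dso:
    def __init__(self):
        self.active = {}
    def load(self, rows):
        for key, rec in rows:
            self.active[key] = rec           # overwrite by key

repair_rows = [("4711", {"amount": 100})]
cube, dso = Cube(), Dso()
cube.load(repair_rows); cube.load(repair_rows)   # load the repair twice
dso.load(repair_rows);  dso.load(repair_rows)
print(len(cube.facts))    # 2 -> duplicated: delete the affected records first
print(len(dso.active))    # 1 -> overwritten: no pre-delete needed
```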

The biggest advantage of the repair full request is that running it does not interfere with the delta init or the ongoing delta updates; the two run in parallel, so to speak.

For example, if you find a single wrong record in a DSO, you can create a new InfoPackage, set it to full update, flag it as a repair full request, restrict the selection in the extraction tab to just that one record, and re-extract it.

Of course, you can first check in RSA3 (against the setup table) whether that record is correct in the source; if it is, extract it following the steps above.

 

It is worth explaining here how full update and delta update differ. (This, too, took me a good part of the morning's reading to understand.)

A full update reads its data directly from the setup table (for LO cockpit extractors), which is exactly what makes the repair full request possible.

A delta update reads its data from RSA7, the delta queue (on the R/3 side the data first lands in SM13 or LBWQ and moves into RSA7 after the collector job runs). So if a record has gone missing, it is certainly not sitting in RSA7, and no amount of delta repetition will bring it back; the only way to patch it is a repair full request. The sketch below contrasts the two paths.
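A short Python sketch (again invented names, only a conceptual model) of these two data paths, showing why delta repetition cannot recover a record that never reached the queue, while a repair full load from the setup table can:

```python
# Full update reads the setup table; delta update drains the delta queue
# (RSA7), and a delta can only be read once.

setup_table = {"4711": 100, "4712": 200}   # filled by the setup run
delta_queue = {"4712": 200}                # "4711" never reached RSA7

def full_update(selection):
    return {k: v for k, v in setup_table.items() if k in selection}

def delta_update():
    drained = dict(delta_queue)
    delta_queue.clear()                    # queue is emptied by the read
    return drained

print(delta_update())        # no "4711": repeating the delta cannot help
print(full_update({"4711"})) # the repair full request recovers it
```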

 

This is a very practical requirement and, come to think of it, a knowledge point every BW consultant simply has to master. (The teacher really only points the way; in the end you have to work things out yourself!)

 

The link below is a very good SDN thread on the repair full request, with detailed and spot-on answers.

 

 

http://forums.sdn.sap.com/thread.jspa?threadID=121265&start=0&tstart=0
