Using OpenAuth.Core with MS SQL Server 2008

Because SQL Server 2008 does not support FETCH-based paging, paging must be switched to ROW_NUMBER via configuration. The best solution is to upgrade SQL Server to 2012 or later. OpenAuth.Core is a rapid development framework built on .NET Core 2.1.

Because SQL Server 2008 does not support FETCH-based paging, change the configuration:

optionsBuilder.UseSqlServer(
    "Data Source=.;Initial Catalog=OpenAuthDB;User ID=sa;Password=123456;",
    b => b.UseRowNumberForPaging());

This switches paging to ROW_NUMBER()-based SQL, which SQL Server 2008 supports.
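As a minimal sketch of how this fits together (assuming EF Core 2.1's SQL Server provider; the OpenAuthDBContext and User shapes below are illustrative, not OpenAuth.Core's actual model):

using System.Linq;
using Microsoft.EntityFrameworkCore;

public class OpenAuthDBContext : DbContext
{
    public DbSet<User> Users { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseSqlServer(
            "Data Source=.;Initial Catalog=OpenAuthDB;User ID=sa;Password=123456;",
            // SQL Server 2008 fallback: translate Skip/Take using ROW_NUMBER()
            // instead of OFFSET ... FETCH, which requires SQL Server 2012+.
            b => b.UseRowNumberForPaging());
    }
}

public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class PagingDemo
{
    // Paged queries are written the usual way; only the generated SQL changes.
    // Without the fallback, EF Core emits:
    //   ... ORDER BY [Id] OFFSET @p0 ROWS FETCH NEXT @p1 ROWS ONLY
    // With UseRowNumberForPaging, it wraps the query in a
    //   ROW_NUMBER() OVER (ORDER BY [Id])
    // subquery filtered on the row-number range, which SQL Server 2008 accepts.
    public static IQueryable<User> Page(OpenAuthDBContext db, int pageIndex, int pageSize)
    {
        return db.Users.OrderBy(u => u.Id)
                       .Skip(pageIndex * pageSize)
                       .Take(pageSize);
    }
}

Note that UseRowNumberForPaging was marked obsolete in EF Core 2.2 and removed in EF Core 3.0, so this workaround applies only to the 2.x line used here.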

Of course, the best solution is to upgrade SQL Server to 2012 or later, after which the UseRowNumberForPaging fallback is no longer needed.


OpenAuth.Core is the .NET Core 2.1-based rapid development framework of the 千星 project OpenAuth.Net, and has long been committed to being the most usable .NET permission and workflow framework. Core modules include organizations, roles and users, permission authorization, form design, and workflow. Its well-designed architecture is easy to extend, making it a first choice for small and medium-sized enterprises. http://www.openauth.me/

 

OpenAuth.Core is a .NET Core rapid application development framework and an easy-to-use permission and workflow system. It is a permission management and rapid development framework based on classic domain-driven design, drawing on Martin Fowler's enterprise application architecture ideas and a modern technology stack (IdentityServer, EF Core, Quartz, Autofac, WebAPI, Swagger, Mock, NUnit, Vue, Element UI, etc.). It has been deployed successfully with Docker/Jenkins. Core modules include organizations, roles and users, permission authorization, form design, and workflow. Its well-designed architecture is easy to extend, making it a first choice for small and medium-sized enterprises.

OpenAuth.Core features:
1. Supports .NET Core SDK 3.1.100.
2. Powerful custom permission control; the data permissions available to users and roles can be configured flexibly. See: 通用权限设计与实现 (general permission design and implementation).
3. Complete field-level permission control, governing both field visibility and whether the API returns field values.
4. Drag-and-drop form design.
5. Visual workflow design.
6. Quartz.NET-based scheduled tasks that can be started/stopped at any time, with visual Cron expression configuration (see the Quartz.NET sketch below).
7. CodeSmith-based code generation that quickly produces pages with header/detail structures.
8. Supports SQL Server and MySQL; in principle supports any database.
9. Integrates IdentityServer4 for an OAuth2-based login system.
10. Provides a third-party integration spec so existing systems can connect to the workflow engine seamlessly.
11. Front end: Vue + layui + Element UI + zTree + GooFlow + leipiformdesign.
12. Back end: .NET Core + EF Core + Autofac + Quartz + IdentityServer4 + NUnit + Swagger.
13. Design tools: PowerDesigner + Enterprise Architect.

Project structure:
1. Infrastructure: common utilities.
2. OpenAuth.Repository: repository layer for database access.
3. OpenAuth.App: application layer that provides interfaces for the UI.
4. OpenAuth.Mvc: web site.
5. OpenAuth.WebApi: API services for the enterprise edition and other third-party systems.
6. OpenAuth.Identity: single sign-on service based on IdentityServer4.

OpenAuth.Core changelog:
v3.2: added display of API call time and SQL execution time in the Swagger UI.
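For the scheduled-task feature (item 6 above), here is a rough Quartz.NET sketch, not taken from the OpenAuth.Core source; SyncJob and the cron expression are made up for illustration:

using System;
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

// A made-up job used only for this illustration.
public class SyncJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        Console.WriteLine($"SyncJob fired at {DateTimeOffset.Now}");
        return Task.CompletedTask;
    }
}

public static class SchedulerDemo
{
    public static async Task Main()
    {
        IScheduler scheduler = await StdSchedulerFactory.GetDefaultScheduler();
        await scheduler.Start();

        var jobKey = new JobKey("sync-job");
        IJobDetail job = JobBuilder.Create<SyncJob>().WithIdentity(jobKey).Build();

        // "Every 5 minutes" as a Cron expression; in OpenAuth.Core the
        // expression would come from the visual configuration instead.
        ITrigger trigger = TriggerBuilder.Create()
            .WithCronSchedule("0 0/5 * * * ?")
            .Build();

        await scheduler.ScheduleJob(job, trigger);

        // Jobs can be paused and resumed at runtime, which is what
        // "start/stop at any time" amounts to.
        await scheduler.PauseJob(jobKey);
        await scheduler.ResumeJob(jobKey);
    }
}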
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:621) at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:589) at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:573) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1227) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1246) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1169) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1953) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:3203) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1584) at org.apache.hadoop.ipc.Client.call(Client.java:1529) at org.apache.hadoop.ipc.Client.call(Client.java:1426) at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:258) at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:139) at jdk.proxy2/jdk.proxy2.$Proxy11.mkdirs(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.lambda$mkdirs$20(ClientNamenodeProtocolTranslatorPB.java:611) at org.apache.hadoop.ipc.internal.ShadedProtobufHelper.ipc(ShadedProtobufHelper.java:160) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:611) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:437) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:170) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:162) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:100) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:366) at jdk.proxy2/jdk.proxy2.$Proxy12.mkdirs(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2555) ... 
9 common frames omitted 21:37:31.683 [Thread-5] DEBUG org.apache.hadoop.security.UserGroupInformation - PrivilegedAction [as: С (auth:SIMPLE)][action: org.apache.hadoop.fs.FileContext$2@15fc336f] java.lang.Exception: null at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1950) at org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:343) at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:465) at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:442) at org.apache.hadoop.fs.FileContext.getLocalFSFileContext(FileContext.java:428) at org.apache.hadoop.mapred.LocalDistributedCacheManager.close(LocalDistributedCacheManager.java:268) at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:598) 21:37:32.626 [main] DEBUG org.apache.hadoop.security.UserGroupInformation - PrivilegedAction [as: С (auth:SIMPLE)][action: org.apache.hadoop.mapreduce.Job$1@77b9d0c7] java.lang.Exception: null at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1950) at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:329) at org.apache.hadoop.mapreduce.Job.isUber(Job.java:1866) at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1747) at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1698) at cn.itcast.mr.dedup.MatrixMultiplication.main(MatrixMultiplication.java:128) 21:37:32.626 [main] INFO org.apache.hadoop.mapreduce.Job - Job job_local1106899704_0001 running in uber mode : false 21:37:32.628 [main] INFO org.apache.hadoop.mapreduce.Job - map 0% reduce 0% 21:37:32.628 [main] DEBUG org.apache.hadoop.security.UserGroupInformation - PrivilegedAction [as: С (auth:SIMPLE)][action: org.apache.hadoop.mapreduce.Job$6@3b0ee03a] java.lang.Exception: null at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1950) at org.apache.hadoop.mapreduce.Job.getTaskCompletionEvents(Job.java:730) at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1759) at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1698) at cn.itcast.mr.dedup.MatrixMultiplication.main(MatrixMultiplication.java:128) 21:37:32.629 [main] DEBUG org.apache.hadoop.security.UserGroupInformation - PrivilegedAction [as: С (auth:SIMPLE)][action: org.apache.hadoop.mapreduce.Job$1@796065aa] java.lang.Exception: null at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1950) at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:329) at org.apache.hadoop.mapreduce.Job.isComplete(Job.java:613) at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1736) at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1698) at cn.itcast.mr.dedup.MatrixMultiplication.main(MatrixMultiplication.java:128) 21:37:32.629 [main] DEBUG org.apache.hadoop.security.UserGroupInformation - PrivilegedAction [as: С (auth:SIMPLE)][action: org.apache.hadoop.mapreduce.Job$1@28a6301f] java.lang.Exception: null at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1950) at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:329) at org.apache.hadoop.mapreduce.Job.isComplete(Job.java:613) at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1737) at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1698) at cn.itcast.mr.dedup.MatrixMultiplication.main(MatrixMultiplication.java:128) 21:37:32.630 [main] DEBUG org.apache.hadoop.security.UserGroupInformation - PrivilegedAction [as: С (auth:SIMPLE)][action: 
org.apache.hadoop.mapreduce.Job$6@2c306a57] java.lang.Exception: null at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1950) at org.apache.hadoop.mapreduce.Job.getTaskCompletionEvents(Job.java:730) at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1759) at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1698) at cn.itcast.mr.dedup.MatrixMultiplication.main(MatrixMultiplication.java:128) 21:37:32.630 [main] DEBUG org.apache.hadoop.security.UserGroupInformation - PrivilegedAction [as: С (auth:SIMPLE)][action: org.apache.hadoop.mapreduce.Job$1@773e2eb5] java.lang.Exception: null at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1950) at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:329) at org.apache.hadoop.mapreduce.Job.isComplete(Job.java:613) at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1736) at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1698) at cn.itcast.mr.dedup.MatrixMultiplication.main(MatrixMultiplication.java:128) 21:37:32.631 [main] DEBUG org.apache.hadoop.security.UserGroupInformation - PrivilegedAction [as: С (auth:SIMPLE)][action: org.apache.hadoop.mapreduce.Job$1@d8948cd] java.lang.Exception: null at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1950) at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:329) at org.apache.hadoop.mapreduce.Job.isSuccessful(Job.java:625) at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1763) at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1698) at cn.itcast.mr.dedup.MatrixMultiplication.main(MatrixMultiplication.java:128) 21:37:32.631 [main] INFO org.apache.hadoop.mapreduce.Job - Job job_local1106899704_0001 failed with state FAILED due to: NA 21:37:32.631 [main] DEBUG org.apache.hadoop.security.UserGroupInformation - PrivilegedAction [as: С (auth:SIMPLE)][action: org.apache.hadoop.mapreduce.Job$8@7abe27bf] java.lang.Exception: null at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1950) at org.apache.hadoop.mapreduce.Job.getCounters(Job.java:818) at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1770) at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1698) at cn.itcast.mr.dedup.MatrixMultiplication.main(MatrixMultiplication.java:128) 21:37:32.651 [main] INFO org.apache.hadoop.mapreduce.Job - Counters: 0 21:37:32.651 [main] DEBUG org.apache.hadoop.security.UserGroupInformation - PrivilegedAction [as: С (auth:SIMPLE)][action: org.apache.hadoop.mapreduce.Job$1@2679311f] java.lang.Exception: null at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1950) at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:329) at org.apache.hadoop.mapreduce.Job.isSuccessful(Job.java:625) at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1710) at cn.itcast.mr.dedup.MatrixMultiplication.main(MatrixMultiplication.java:128) 21:37:32.653 [shutdown-hook-0] DEBUG org.apache.hadoop.fs.FileSystem - FileSystem.close() by method: org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1530)); Key: (С (auth:SIMPLE))@hdfs://192.168.88.101:8020; URI: hdfs://192.168.88.101:8020; Object Identity Hash: 2e075efe 21:37:32.653 [shutdown-hook-0] DEBUG org.apache.hadoop.ipc.Client - stopping client from cache: Client-e9ac678cebb441d58dd3dc3f8f54b798 21:37:32.654 [shutdown-hook-0] DEBUG org.apache.hadoop.ipc.Client - removing client from cache: 
Client-e9ac678cebb441d58dd3dc3f8f54b798 21:37:32.654 [shutdown-hook-0] DEBUG org.apache.hadoop.ipc.Client - stopping actual client because no more references remain: Client-e9ac678cebb441d58dd3dc3f8f54b798 21:37:32.654 [shutdown-hook-0] DEBUG org.apache.hadoop.ipc.Client - Stopping client 21:37:32.655 [IPC Client (1759899303) connection to xxjdxnj/192.168.88.101:8020 from С] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1759899303) connection to xxjdxnj/192.168.88.101:8020 from С: closed 21:37:32.655 [IPC Client (1759899303) connection to xxjdxnj/192.168.88.101:8020 from С] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1759899303) connection to xxjdxnj/192.168.88.101:8020 from С: stopped, remaining connections 0 21:37:32.655 [shutdown-hook-0] DEBUG org.apache.hadoop.fs.FileSystem - FileSystem.close() by method: org.apache.hadoop.fs.FilterFileSystem.close(FilterFileSystem.java:529)); Key: (С (auth:SIMPLE))@file://; URI: file:///; Object Identity Hash: 2a38dfe6 21:37:32.655 [shutdown-hook-0] DEBUG org.apache.hadoop.fs.FileSystem - FileSystem.close() by method: org.apache.hadoop.fs.RawLocalFileSystem.close(RawLocalFileSystem.java:895)); Key: null; URI: file:///; Object Identity Hash: 6f3a54c5 21:37:32.656 [shutdown-hook-0] DEBUG org.apache.hadoop.hdfs.KeyProviderCache - Invalidating all cached KeyProviders. 21:37:32.656 [Thread-1] DEBUG org.apache.hadoop.util.ShutdownHookManager - Completed shutdown in 0.004 seconds; Timeouts: 0 21:37:32.664 [Thread-1] DEBUG org.apache.hadoop.util.ShutdownHookManager - ShutdownHookManager completed shutdown. Process finished with exit code 1
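
The root cause is right at the top of the trace: under SIMPLE authentication, HDFS identifies the caller by the client's OS login name (rendered here as the mojibake "С"), and that user has no WRITE access to inode "/" (owned by hadoop:supergroup, mode drwxr-xr-x), so FileOutputCommitter.setupJob cannot create /output/_temporary/0 and the job fails before any map task runs. A common fix is to set HADOOP_USER_NAME to the HDFS owner before any Hadoop class initializes the FileSystem. Below is a minimal driver sketch under that assumption; only the /output path, the NameNode address, and the class name come from the log, while the input path, job name, and mapper/reducer wiring are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MatrixMultiplication {
    public static void main(String[] args) throws Exception {
        // Assumed fix: act as the HDFS superuser ("hadoop", per the inode owner in
        // the log). This must run before Configuration/FileSystem initialization,
        // because UserGroupInformation caches the login user on first use.
        System.setProperty("HADOOP_USER_NAME", "hadoop");

        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://192.168.88.101:8020"); // NameNode from the log

        Job job = Job.getInstance(conf, "MatrixMultiplication");
        job.setJarByClass(MatrixMultiplication.class);
        // job.setMapperClass(...) / job.setReducerClass(...) as in the original driver.

        FileInputFormat.addInputPath(job, new Path("/input"));    // hypothetical input dir
        FileOutputFormat.setOutputPath(job, new Path("/output")); // matches the log

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Alternatively, grant the submitting user write access on the target directory from the cluster side (for example, as the hadoop user: hdfs dfs -mkdir /output is not needed, but hdfs dfs -chmod / hdfs dfs -chown on the parent directory would work), or, on a throwaway development cluster only, set dfs.permissions.enabled=false in hdfs-site.xml and restart the NameNode. Also note that /output must not already exist when the job is submitted, or FileOutputFormat will reject it for a different reason.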