On the Origin of the rescan-scsi-bus.sh Script and Precautions for Its Use

This article describes where the rescan-scsi-bus.sh script comes from and how it is installed and used, and stresses the precautions that apply when running it on an operating system that hosts a database, in particular avoiding the -i option to prevent unexpected incidents.




1. rescan-scsi-bus.sh is shipped in sg3_utils-1.28-8.el6.x86_64, as the following test shows:


Install the sg3_utils-1.28-8.el6.x86_64 package:
[root@localhost repodata]# yum install sg3_utils*
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Setting up Install Process
rhel-source                                                                                                                                                             | 4.1 kB     00:00 ... 
Package sg3_utils-libs-1.28-8.el6.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package sg3_utils.x86_64 0:1.28-8.el6 will be installed
--> Finished Dependency Resolution


Dependencies Resolved


===============================================================================================================================================================================================
 Package                                       Arch                                       Version                                        Repository                                       Size
===============================================================================================================================================================================================
Installing:
 sg3_utils                                     x86_64                                     1.28-8.el6                                     rhel-source                                     500 k


Transaction Summary
===============================================================================================================================================================================================
Install       1 Package(s)


Total download size: 500 k
Installed size: 1.3 M
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : sg3_utils-1.28-8.el6.x86_64                                                                                                                                                 1/1 
  Verifying  : sg3_utils-1.28-8.el6.x86_64                                                                                                                                                 1/1 


Installed:
  sg3_utils.x86_64 0:1.28-8.el6                                                                                                                                                                


Complete!
[root@localhost repodata]# 

[root@localhost repodata]# yum provides /usr/bin/rescan-scsi-bus.sh
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
sg3_utils-1.28-8.el6.x86_64 : Utilities for devices that use SCSI command sets    --->>>> the script comes from sg3_utils-1.28-8.el6.x86_64.
Repo        : rhel-source
Matched from:
Filename    : /usr/bin/rescan-scsi-bus.sh


sg3_utils-1.28-8.el6.x86_64 : Utilities for devices that use SCSI command sets
Repo        : installed
Matched from:
Other       : Provides-match: /usr/bin/rescan-scsi-bus.sh


[root@localhost repodata]# 
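On a system where the package is already installed, the ownership of the script can also be confirmed with rpm; a quick check (the exact version string in the output depends on the installed release):

[root@localhost repodata]# rpm -qf /usr/bin/rescan-scsi-bus.sh
sg3_utils-1.28-8.el6.x86_64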


2. On an operating system with a database running, never run rescan-scsi-bus.sh with the -i option. For the reason, see:
	Executing rescan-scsi-bus.sh -i ( LIP flag ) caused RAC Node to Hang (Doc ID 1645143.1)
What is LIP (Loop Initialization Protocol)?
LIP scans the interconnect and causes the SCSI layer to be updated to reflect the devices currently on the bus.
A LIP is, essentially, a bus reset, and will cause device addition and removal.
This procedure is necessary to configure a new SCSI target on a Fibre Channel interconnect.
Bear in mind that issue_lip is an asynchronous operation. The command may complete before the entire scan has completed.
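For reference, the -i option triggers a LIP through the fc_host sysfs interface, so the per-HBA equivalent is roughly the following (a sketch only, assuming a hypothetical Fibre Channel HBA at host1; do not run this on a production system whose SAN devices are in use):

# Issue a LIP on one FC HBA -- roughly what rescan-scsi-bus.sh -i does for each host
echo "1" > /sys/class/fc_host/host1/issue_lip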


What can a LIP reset cause?

Using the Loop Initialization Protocol method to scan the HBAs can cause delays
and I/O timeouts if the HBA/device is in use, and can also remove devices unexpectedly.
Hence, performing the scan with this method is not recommended on any production server where the SAN devices are already configured and in use.
This type of scan is recommended only on a newly built server, to scan all the LUNs/devices.
A LIP is normally executed when the server boots.
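If new LUNs must be detected on a live system, the usual non-disruptive alternative is a plain per-host SCSI scan, which only adds newly presented devices and does not reset the loop. A minimal sketch, assuming the HBA is host1 (adjust the host number to match your system):

# Scan all channels, targets and LUNs on one SCSI host; this only adds new devices
echo "- - -" > /sys/class/scsi_host/host1/scan
# Verify that the new LUNs are visible
cat /proc/scsi/scsi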

3. Running "rescan-scsi-bus.sh -r" Can Remove Online Devices (Doc ID 2131357.1)
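In short, on a production server run the script without -i and without -r; invoked that way it should only add newly presented LUNs, neither resetting the fabric nor removing existing devices. A hedged example of the whole sequence:

# Record the current device list first
cat /proc/scsi/scsi
# Rescan without LIP (-i) and without device removal (-r)
rescan-scsi-bus.sh
# Confirm that the new LUNs have been added
cat /proc/scsi/scsi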


