It is the latest build; each time I delete the old jar and then put the freshly repackaged one back in its place.
2025-10-30 03:17:32,000 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/export/servers/hadoop-3.3.6/lib/native
2025-10-30 03:17:32,000 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2025-10-30 03:17:32,000 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2025-10-30 03:17:32,000 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
2025-10-30 03:17:32,001 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
2025-10-30 03:17:32,001 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-693.el7.x86_64
2025-10-30 03:17:32,001 INFO zookeeper.ZooKeeper: Client environment:user.name=root
2025-10-30 03:17:32,001 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
2025-10-30 03:17:32,001 INFO zookeeper.ZooKeeper: Client environment:user.dir=/opt
2025-10-30 03:17:32,001 INFO zookeeper.ZooKeeper: Client environment:os.memory.free=90MB
2025-10-30 03:17:32,001 INFO zookeeper.ZooKeeper: Client environment:os.memory.max=443MB
2025-10-30 03:17:32,001 INFO zookeeper.ZooKeeper: Client environment:os.memory.total=193MB
2025-10-30 03:17:32,006 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop01:2181,hadoop02:2181,hadoop03:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$21/1955935810@75aa6823
2025-10-30 03:17:32,022 INFO common.X509Util: Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
2025-10-30 03:17:32,034 INFO zookeeper.ClientCnxnSocket: jute.maxbuffer value is 1048575 Bytes
2025-10-30 03:17:32,046 INFO zookeeper.ClientCnxn: zookeeper.request.timeout value is 0. feature enabled=false
2025-10-30 03:17:32,058 INFO zookeeper.ClientCnxn: Opening socket connection to server hadoop03/192.168.150.133:2181.
2025-10-30 03:17:32,060 INFO zookeeper.ClientCnxn: SASL config status: Will not attempt to authenticate using SASL (unknown error)
2025-10-30 03:17:32,070 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /192.168.150.131:44958, server: hadoop03/192.168.150.133:2181
2025-10-30 03:17:32,143 INFO zookeeper.ClientCnxn: Session establishment complete on server hadoop03/192.168.150.133:2181, session id = 0x300001297900006, negotiated timeout = 40000
2025-10-30 03:17:33,818 INFO zookeeper.ZooKeeper: Session: 0x300001297900006 closed
2025-10-30 03:17:33,818 INFO zookeeper.ClientCnxn: EventThread shut down for session: 0x300001297900006
2025-10-30 03:17:34,969 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/root/.staging/job_1761763809659_0003
2025-10-30 03:17:38,258 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop01:2181,hadoop02:2181,hadoop03:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$21/1955935810@75aa6823
2025-10-30 03:17:38,264 INFO zookeeper.ClientCnxnSocket: jute.maxbuffer value is 1048575 Bytes
2025-10-30 03:17:38,264 INFO zookeeper.ClientCnxn: zookeeper.request.timeout value is 0. feature enabled=false
2025-10-30 03:17:38,265 INFO zookeeper.ClientCnxn: Opening socket connection to server hadoop01/192.168.150.131:2181.
2025-10-30 03:17:38,265 INFO zookeeper.ClientCnxn: SASL config status: Will not attempt to authenticate using SASL (unknown error)
2025-10-30 03:17:38,266 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /192.168.150.131:36484, server: hadoop01/192.168.150.131:2181
2025-10-30 03:17:38,308 INFO zookeeper.ClientCnxn: Session establishment complete on server hadoop01/192.168.150.131:2181, session id = 0x10000139d9e0017, negotiated timeout = 40000
2025-10-30 03:17:38,316 INFO mapreduce.RegionSizeCalculator: Calculating region sizes for table "ns_ct:calllog".
2025-10-30 03:17:38,676 INFO util.log: Logging initialized @16480ms to org.eclipse.jetty.util.log.Slf4jLog
2025-10-30 03:17:38,849 INFO zookeeper.ZooKeeper: Session: 0x10000139d9e0017 closed
2025-10-30 03:17:38,849 INFO zookeeper.ClientCnxn: EventThread shut down for session: 0x10000139d9e0017
2025-10-30 03:17:38,861 INFO mapreduce.JobSubmitter: number of splits:4
2025-10-30 03:17:38,940 INFO Configuration.deprecation: yarn.resourcemanager.zk-address is deprecated. Instead, use hadoop.zk.address
2025-10-30 03:17:38,940 INFO Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2025-10-30 03:17:38,940 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
2025-10-30 03:17:39,247 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1761763809659_0003
2025-10-30 03:17:39,247 INFO mapreduce.JobSubmitter: Executing with tokens: []
2025-10-30 03:17:39,508 INFO conf.Configuration: resource-types.xml not found
2025-10-30 03:17:39,508 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2025-10-30 03:17:39,881 INFO impl.YarnClientImpl: Submitted application application_1761763809659_0003
2025-10-30 03:17:39,934 INFO mapreduce.Job: The url to track the job: http://hadoop02:8088/proxy/application_1761763809659_0003/
2025-10-30 03:17:39,934 INFO mapreduce.Job: Running job: job_1761763809659_0003
2025-10-30 03:17:54,285 INFO mapreduce.Job: Job job_1761763809659_0003 running in uber mode : false
2025-10-30 03:17:54,291 INFO mapreduce.Job: map 0% reduce 0%
2025-10-30 03:18:16,346 INFO mapreduce.Job: map 25% reduce 0%
2025-10-30 03:18:19,853 INFO mapreduce.Job: map 75% reduce 0%
2025-10-30 03:18:20,864 INFO mapreduce.Job: map 100% reduce 0%
2025-10-30 03:18:36,007 INFO mapreduce.Job: map 100% reduce 100%
2025-10-30 03:18:36,017 INFO mapreduce.Job: Job job_1761763809659_0003 completed successfully
2025-10-30 03:18:36,238 INFO mapreduce.Job: Counters: 67
	File System Counters
		FILE: Number of bytes read=5964621
		FILE: Number of bytes written=13523740
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=445
		HDFS: Number of bytes written=0
		HDFS: Number of read operations=4
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=0
		HDFS: Number of bytes read erasure-coded=0
	Job Counters
		Killed map tasks=1
		Launched map tasks=4
		Launched reduce tasks=1
		Data-local map tasks=3
		Rack-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=91376
		Total time spent by all reduces in occupied slots (ms)=16244
		Total time spent by all map tasks (ms)=91376
		Total time spent by all reduce tasks (ms)=16244
		Total vcore-milliseconds taken by all map tasks=91376
		Total vcore-milliseconds taken by all reduce tasks=16244
		Total megabyte-milliseconds taken by all map tasks=93569024
		Total megabyte-milliseconds taken by all reduce tasks=16633856
	Map-Reduce Framework
		Map input records=45242
		Map output records=135690
		Map output bytes=5693235
		Map output materialized bytes=5964639
		Input split bytes=445
		Combine input records=0
		Combine output records=0
		Reduce input groups=7522
		Reduce shuffle bytes=5964639
		Reduce input records=135690
		Reduce output records=7522
		Spilled Records=271380
		Shuffled Maps =4
		Failed Shuffles=0
		Merged Map outputs=4
		GC time elapsed (ms)=2520
		CPU time spent (ms)=27330
		Physical memory (bytes) snapshot=1377050624
		Virtual memory (bytes) snapshot=14049800192
		Total committed heap usage (bytes)=1255784448
		Peak Map Physical memory (bytes)=427880448
		Peak Map Virtual memory (bytes)=2846461952
		Peak Reduce Physical memory (bytes)=280657920
		Peak Reduce Virtual memory (bytes)=2824417280
	HBaseCounters
		BYTES_IN_REMOTE_RESULTS=8861245
		BYTES_IN_RESULTS=24679481
		MILLIS_BETWEEN_NEXTS=18596
		NOT_SERVING_REGION_EXCEPTION=0
		REGIONS_SCANNED=4
		REMOTE_RPC_CALLS=8
		REMOTE_RPC_RETRIES=0
		ROWS_FILTERED=0
		ROWS_SCANNED=45242
		RPC_CALLS=24
		RPC_RETRIES=0
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters
		Bytes Read=0
	File Output Format Counters
		Bytes Written=0
[root@hadoop01 opt]# jar tf ct_analysis1-1.0-SNAPSHOT-jar-with-dependencies.jar | grep UserRelationshipOutputFormat
outputformat/UserRelationshipOutputFormat$UserRelationshipRecordWriter.class
outputformat/UserRelationshipOutputFormat.class
[root@hadoop01 opt]#
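The presence check done by hand above (`jar tf … | grep UserRelationshipOutputFormat`) can be scripted so every redeploy verifies the jar automatically. The helper below is a hypothetical sketch, not part of the session: it only converts a fully-qualified Java class name into the entry path that `jar tf` prints, which can then be fed to `grep` on the cluster node.

```shell
#!/bin/sh
# Hypothetical helper: map a fully-qualified class name to the entry path
# that `jar tf` lists for it (dots become slashes, ".class" is appended).
class_to_entry() {
    printf '%s.class\n' "$(printf '%s' "$1" | tr '.' '/')"
}

# The two classes listed in the session above (note the quoted $ for the
# inner class):
class_to_entry outputformat.UserRelationshipOutputFormat
class_to_entry 'outputformat.UserRelationshipOutputFormat$UserRelationshipRecordWriter'

# On the cluster node this would drive the same check as in the session:
#   jar tf ct_analysis1-1.0-SNAPSHOT-jar-with-dependencies.jar \
#       | grep -F "$(class_to_entry outputformat.UserRelationshipOutputFormat)"
```

Using `grep -F` treats the entry as a fixed string, so the `$` in inner-class names is not interpreted as a regex anchor.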