Test failures when running the MapReduce programs in hbase-server-0.98.6-hadoop2.jar

This post records the problems I hit while testing HBase RowCounter and how they were resolved, including an error caused by missing jars on the Hadoop classpath and an exception triggered by the campus network environment.


Let me state the cause of the final failure up front: I was on the campus network.

[beifeng@hadoop hbase-0.98.6-hadoop2]$ /opt/modules/hadoop-2.7.3/bin/yarn jar lib/hbase-server-0.98.6-hadoop2.jar

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/filter/Filter
        at java.lang.Class.getDeclaredMethods0(Native Method)
        at java.lang.Class.privateGetDeclaredMethods(Class.java:2570)
        at java.lang.Class.getMethod0(Class.java:2813)
        at java.lang.Class.getMethod(Class.java:1663)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.<init>(ProgramDriver.java:59)
        at org.apache.hadoop.util.ProgramDriver.addClass(ProgramDriver.java:103)
        at org.apache.hadoop.hbase.mapreduce.Driver.main(Driver.java:39)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.filter.Filter
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        ... 13 more

The HBase jars are missing from Hadoop's classpath, so add them:

[beifeng@hadoop hbase-0.98.6-hadoop2]$ export HBASE_HOME=/opt/modules/hbase-0.98.6-hadoop2
[beifeng@hadoop hbase-0.98.6-hadoop2]$ export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:`$HBASE_HOME/bin/hbase mapredcp`
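
These exports only apply to the current shell (append the same two lines to ~/.bashrc or hadoop-env.sh if you want them to persist). A small optional sanity check before resubmitting, not part of the original run, to confirm the HBase jars really made it onto the classpath:

$HBASE_HOME/bin/hbase mapredcp | tr ':' '\n'          # the HBase jars that were just appended
echo $HADOOP_CLASSPATH | tr ':' '\n' | grep hbase     # hbase-client/common/server/protocol should all show up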

Run it again:

[beifeng@hadoop hbase-0.98.6-hadoop2]$ /opt/modules/hadoop-2.7.3/bin/yarn jar lib/hbase-server-0.98.6-hadoop2.jar
An example program must be given as the first argument.
Valid program names are:
  CellCounter: Count cells in HBase table
  completebulkload: Complete a bulk data load.
  copytable: Export a table from local cluster to peer cluster
  export: Write table data to HDFS.
  import: Import data written by Export.
  importtsv: Import data in TSV format.
  rowcounter: Count rows in HBase table
  verifyrep: Compare the data from tables in two different clusters. WARNING: It doesn't work for incrementColumnValues'd cells since the timestamp is changed after being appended to the log.

Test rowcounter against the table hadoop19:tb1 (namespace hadoop19, table tb1):
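
The run below assumes hadoop19:tb1 already exists and holds two rows (matching ROWS=2 in the counters further down). A minimal sketch of how such a table could be created in the HBase shell; the column family name 'cf' and the cell values are assumptions, not taken from this post:

$HBASE_HOME/bin/hbase shell <<'EOF'
create_namespace 'hadoop19'
create 'hadoop19:tb1', 'cf'
put 'hadoop19:tb1', 'row1', 'cf:c1', 'v1'
put 'hadoop19:tb1', 'row2', 'cf:c1', 'v2'
EOF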

[beifeng@hadoop hbase-0.98.6-hadoop2]$ /opt/modules/hadoop-2.7.3/bin/yarn jar lib/hbase-server-0.98.6-hadoop2.jar rowcounter hadoop19:tb1
17/12/26 18:43:26 INFO client.RMProxy: Connecting to ResourceManager at hadoop/192.168.246.100:8032
17/12/26 18:43:28 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x772c59a3 connecting to ZooKeeper ensemble=localhost:2181
17/12/26 18:43:28 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
17/12/26 18:43:28 INFO zookeeper.ZooKeeper: Client environment:host.name=hadoop
17/12/26 18:43:28 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_67
17/12/26 18:43:28 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
17/12/26 18:43:28 INFO zookeeper.ZooKeeper: Client environment:java.home=/opt/modules/jdk1.7.0_67/jre
17/12/26 18:43:28 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/opt/modules/hadoop-2.7.3/etc/hadoop:/opt/modules/hadoop-2.7.3/etc/hadoop:/opt/modules/hadoop-2.7.3/etc/hadoop:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/activation-1.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jersey-core-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jets3t-0.9.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/asm-3.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-logging-1.1.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jsch-0.1.42.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-httpclient-3.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/avro-1.7.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jetty-6.1.26.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-digester-1.8.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jettison-1.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/stax-api-1.0-2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-compress-1.4.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jsr305-3.0.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/xmlenc-0.52.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/servlet-api-2.5.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-codec-1.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/curator-client-2.7.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/zookeeper-3.4.6.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/httpcore-4.2.5.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/guava-11.0.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/junit-4.11.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-io-2.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/curator-framework-2.7.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jetty-util-6.1.26.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-lang-2.6.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/paranamer-2.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jersey-server-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/mockito-all-1.8.5.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-configuration-1.6.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/xz-1.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/hadoop-annotations-2
.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-net-3.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/gson-2.2.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jersey-json-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-collections-3.2.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/hamcrest-core-1.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/httpclient-4.2.5.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/netty-3.6.2.Final.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/hadoop-nfs-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/asm-3.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-io-2.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/opt/modules/hadoop-2.7.3/sha
re/hadoop/yarn/lib/asm-3.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jettison-1.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-codec-1.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/xz-1.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/opt/m
odules/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/asm-3.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/junit-4.11.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/javax.inject-1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/xz-1.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-3.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar::/opt/modules/hbase-0.98.6-hadoop2/lib/hbase-common-0.98.6-hadoop2.jar:/opt/modules/hbase-0.98.6-hadoop2/lib/protobuf-java-2.5.0.jar:/opt/modules/hbase-0.98.6-hadoop2/lib/hbase-client-0.98.6-hadoop2.jar:/opt/modules/hbase-0.98.6-hadoop2/lib/hbase-hadoop-compat-0.98.6-hadoop2.jar:/opt/modules/hbase-0.98.6-hadoop2/lib/hbase-server-0.98.6-hadoop2.jar:/opt/modules/hbase-0.98.6-hadoop2/lib/hbase-protocol-0.98.6-hadoop2.jar:/opt/modules/hbase-0.98.6-hadoop2/lib/high-scale-lib-1.1.1.jar:/opt/modules/hbase-0.98.6-hadoop2/lib/zookeeper-3.4.5.jar:/opt/modules/hbase-0.98.6-hadoop2/lib/guava-12.0.1.jar:/opt/modules/hbase-0.98.6-hadoop2/lib/htrace-core-2.04.jar:/opt/modules/hbase-0.98.6-hadoop2/lib/netty-3.6.6.Final.jar:/contrib/capacity-scheduler/*.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applicatio
ns-distributedshell-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/asm-3.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jettison-1.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-codec-1.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/xz-1.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar
17/12/26 18:43:28 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/opt/modules/hadoop-2.7.3/lib/native
17/12/26 18:43:28 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
17/12/26 18:43:28 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
17/12/26 18:43:28 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
17/12/26 18:43:28 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
17/12/26 18:43:28 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-358.el6.x86_64
17/12/26 18:43:28 INFO zookeeper.ZooKeeper: Client environment:user.name=beifeng
17/12/26 18:43:28 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/beifeng
17/12/26 18:43:28 INFO zookeeper.ZooKeeper: Client environment:user.dir=/opt/modules/hbase-0.98.6-hadoop2
17/12/26 18:43:28 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x772c59a3, quorum=localhost:2181, baseZNode=/hbase
17/12/26 18:43:28 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
17/12/26 18:43:28 INFO zookeeper.ClientCnxn: Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session
17/12/26 18:43:29 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x16091e10a550009, negotiated timeout = 40000
17/12/26 18:43:29 INFO util.RegionSizeCalculator: Calculating region sizes for table "hadoop19:tb1".
17/12/26 18:43:29 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/beifeng/.staging/job_1514275883235_0002
Exception in thread "main" java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hbase.mapreduce.Driver.main(Driver.java:54)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.NullPointerException
        at org.apache.hadoop.net.DNS.reverseDns(DNS.java:92)
        at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.reverseDNS(TableInputFormatBase.java:228)
        at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:191)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
        at org.apache.hadoop.hbase.mapreduce.RowCounter.main(RowCounter.java:191)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
        at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
        at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:152)
        ... 11 more

After disconnecting from the campus network (or connecting to a different network) and rerunning, it succeeded!

[beifeng@hadoop hbase-0.98.6-hadoop2]$ /opt/modules/hadoop-2.7.3/bin/yarn jar lib/hbase-server-0.98.6-hadoop2.jar rowcounter hadoop19:tb1
17/12/26 16:15:32 INFO client.RMProxy: Connecting to ResourceManager at hadoop/192.168.246.100:8032
17/12/26 16:15:35 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x46728703 connecting to ZooKeeper ensemble=localhost:2181
17/12/26 16:15:35 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
17/12/26 16:15:35 INFO zookeeper.ZooKeeper: Client environment:host.name=hadoop
17/12/26 16:15:35 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_67
17/12/26 16:15:35 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
17/12/26 16:15:35 INFO zookeeper.ZooKeeper: Client environment:java.home=/opt/modules/jdk1.7.0_67/jre
17/12/26 16:15:35 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/opt/modules/hadoop-2.7.3/etc/hadoop:/opt/modules/hadoop-2.7.3/etc/hadoop:/opt/modules/hadoop-2.7.3/etc/hadoop:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/activation-1.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jersey-core-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jets3t-0.9.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/asm-3.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-logging-1.1.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jsch-0.1.42.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-httpclient-3.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/avro-1.7.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jetty-6.1.26.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-digester-1.8.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jettison-1.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/stax-api-1.0-2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-compress-1.4.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jsr305-3.0.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/xmlenc-0.52.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/servlet-api-2.5.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-codec-1.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/curator-client-2.7.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/zookeeper-3.4.6.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/httpcore-4.2.5.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/guava-11.0.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/junit-4.11.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-io-2.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/curator-framework-2.7.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jetty-util-6.1.26.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-lang-2.6.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/paranamer-2.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jersey-server-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/mockito-all-1.8.5.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-configuration-1.6.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/xz-1.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/hadoop-annotations-2
.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-net-3.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/gson-2.2.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jersey-json-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/commons-collections-3.2.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/hamcrest-core-1.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/httpclient-4.2.5.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/netty-3.6.2.Final.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/hadoop-nfs-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/asm-3.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-io-2.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/opt/modules/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/opt/modules/hadoop-2.7.3/sha
re/hadoop/yarn/lib/asm-3.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jettison-1.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-codec-1.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/xz-1.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/opt/m
odules/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/asm-3.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/junit-4.11.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/javax.inject-1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/xz-1.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-3.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar::/opt/modules/hbase-0.98.6-hadoop2/lib/hbase-common-0.98.6-hadoop2.jar:/opt/modules/hbase-0.98.6-hadoop2/lib/protobuf-java-2.5.0.jar:/opt/modules/hbase-0.98.6-hadoop2/lib/hbase-client-0.98.6-hadoop2.jar:/opt/modules/hbase-0.98.6-hadoop2/lib/hbase-hadoop-compat-0.98.6-hadoop2.jar:/opt/modules/hbase-0.98.6-hadoop2/lib/hbase-server-0.98.6-hadoop2.jar:/opt/modules/hbase-0.98.6-hadoop2/lib/hbase-protocol-0.98.6-hadoop2.jar:/opt/modules/hbase-0.98.6-hadoop2/lib/high-scale-lib-1.1.1.jar:/opt/modules/hbase-0.98.6-hadoop2/lib/zookeeper-3.4.5.jar:/opt/modules/hbase-0.98.6-hadoop2/lib/guava-12.0.1.jar:/opt/modules/hbase-0.98.6-hadoop2/lib/htrace-core-2.04.jar:/opt/modules/hbase-0.98.6-hadoop2/lib/netty-3.6.6.Final.jar:/contrib/capacity-scheduler/*.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applicatio
ns-distributedshell-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/asm-3.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jettison-1.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-codec-1.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/xz-1.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/opt/modules/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar
17/12/26 16:15:35 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/opt/modules/hadoop-2.7.3/lib/native
17/12/26 16:15:35 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
17/12/26 16:15:35 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
17/12/26 16:15:35 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
17/12/26 16:15:35 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
17/12/26 16:15:35 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-358.el6.x86_64
17/12/26 16:15:35 INFO zookeeper.ZooKeeper: Client environment:user.name=beifeng
17/12/26 16:15:35 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/beifeng
17/12/26 16:15:35 INFO zookeeper.ZooKeeper: Client environment:user.dir=/opt/modules/hbase-0.98.6-hadoop2
17/12/26 16:15:35 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x46728703, quorum=localhost:2181, baseZNode=/hbase
17/12/26 16:15:35 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
17/12/26 16:15:35 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
17/12/26 16:15:35 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x16091e10a550007, negotiated timeout = 40000
17/12/26 16:15:35 INFO util.RegionSizeCalculator: Calculating region sizes for table "hadoop19:tb1".
17/12/26 16:15:51 WARN mapreduce.TableInputFormatBase: Cannot resolve the host name for hadoop/192.168.246.100 because of javax.naming.CommunicationException: DNS error [Root exception is java.net.SocketTimeoutException: Receive timed out]; remaining name '100.246.168.192.in-addr.arpa'
17/12/26 16:15:51 INFO mapreduce.JobSubmitter: number of splits:1
17/12/26 16:15:51 INFO Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
17/12/26 16:15:51 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1514275883235_0001
17/12/26 16:15:52 INFO impl.YarnClientImpl: Submitted application application_1514275883235_0001
17/12/26 16:15:52 INFO mapreduce.Job: The url to track the job: http://hadoop:8088/proxy/application_1514275883235_0001/
17/12/26 16:15:52 INFO mapreduce.Job: Running job: job_1514275883235_0001
17/12/26 16:16:04 INFO mapreduce.Job: Job job_1514275883235_0001 running in uber mode : false
17/12/26 16:16:04 INFO mapreduce.Job:  map 0% reduce 0%
17/12/26 16:16:13 INFO mapreduce.Job:  map 100% reduce 0%
17/12/26 16:16:13 INFO mapreduce.Job: Job job_1514275883235_0001 completed successfully
17/12/26 16:16:14 INFO mapreduce.Job: Counters: 41
        File System Counters
                FILE: Number of bytes read=0
                FILE: Number of bytes written=145962
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=70
                HDFS: Number of bytes written=0
                HDFS: Number of read operations=1
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=0
        Job Counters 
                Launched map tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=6183
                Total time spent by all reduces in occupied slots (ms)=0
                Total time spent by all map tasks (ms)=6183
                Total vcore-milliseconds taken by all map tasks=6183
                Total megabyte-milliseconds taken by all map tasks=6331392
        Map-Reduce Framework
                Map input records=2
                Map output records=0
                Input split bytes=70
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=177
                CPU time spent (ms)=2320
                Physical memory (bytes) snapshot=130764800
                Virtual memory (bytes) snapshot=872050688
                Total committed heap usage (bytes)=23855104
        HBase Counters
                BYTES_IN_REMOTE_RESULTS=0
                BYTES_IN_RESULTS=69
                MILLIS_BETWEEN_NEXTS=345
                NOT_SERVING_REGION_EXCEPTION=0
                NUM_SCANNER_RESTARTS=0
                REGIONS_SCANNED=1
                REMOTE_RPC_CALLS=0
                REMOTE_RPC_RETRIES=0
                RPC_CALLS=3
                RPC_RETRIES=0
        org.apache.hadoop.hbase.mapreduce.RowCounter$RowCounterMapper$Counters
                ROWS=2
        File Input Format Counters 
                Bytes Read=0
        File Output Format Counters 
                Bytes Written=0

The exact cause is still not fully clear. The campus network requires a fixed IP and a specific DNS server, so could that be the connection? The stack trace does point that way: the NullPointerException is thrown from DNS.reverseDns while TableInputFormatBase calculates input splits, which suggests the campus DNS server's answer to the reverse lookup of 192.168.246.100 is what broke the job. Off the campus network the same reverse lookup merely times out and is logged as the "Cannot resolve the host name" warning above, and the job carries on normally.
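
A couple of ways to confirm the DNS theory, or to work around it without leaving the campus network. The dig commands are plain diagnostics; the nameserver address 192.168.246.1 and the hbase.nameserver.address property are assumptions to experiment with, not something verified in this post:

cat /etc/resolv.conf                              # which nameserver(s) the node is using
dig -x 192.168.246.100 +short                     # PTR lookup against the default (campus) nameserver
dig -x 192.168.246.100 @192.168.246.1 +short      # assumed local/gateway DNS; replace with a server you control

# Possible workaround (assumption, untested here): point HBase's split calculation at a
# nameserver that answers the reverse lookup sensibly, via hbase-site.xml:
#   <property>
#     <name>hbase.nameserver.address</name>
#     <value>192.168.246.1</value>
#   </property>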

