About org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.IllegalAccessError

This post walks through resolving the guava version conflict that shows up when integrating HBase 1.0.0 with Elasticsearch 2.3.3. Two solutions are given: one sidesteps the conflict by querying Elasticsearch over HTTP instead of the Java transport client, the other uses the maven-shade-plugin to relocate the conflicting packages into a shaded jar.

  I recently hit a problem while integrating HBase 1.0.0 and Elasticsearch 2.3.3 in a project. This post covers one of the more awkward issues. The error in the log was:

org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.IllegalAccessError: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.hbase.zookeeper.MetaTableLocator

  When I first integrated ES, I pulled in com.google.guava 18.0 and everything ran fine. Later, while integrating HBase, I found that it also uses guava, but version 12.0. That is where the trouble starts: HBase needs a guava old enough that Stopwatch still exposes its public constructor (16.x or lower; see the change to Stopwatch in guava and the illustration below), so with guava 18.0 on the classpath HBase fails with the error above. But if guava is downgraded to a version HBase accepts, the ES Java API breaks because guava is now too old for it. What to do? This had me stuck for quite a while.
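  To make the root cause concrete: Stopwatch's no-arg constructor is public up to guava 16.x but was removed from the public API in guava 17.0, while HBase 1.0.0 was compiled against guava 12 and, as the stack trace shows, MetaTableLocator still calls that constructor directly. A rough illustration of the two call styles (my own sketch, not code from either project; it compiles against guava 15.x/16.x where both forms exist):

import com.google.common.base.Stopwatch;

public class StopwatchCompat {
    public static void main(String[] args) {
        // What HBase effectively does: fine on guava <= 16, but the constructor is no longer
        // public on guava 17+, so it fails at runtime with java.lang.IllegalAccessError.
        Stopwatch legacy = new Stopwatch().start();

        // What guava 15+ expects callers to use instead:
        Stopwatch current = Stopwatch.createStarted();

        System.out.println(legacy + " / " + current);
    }
}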
  Some searching turned up a post on the Elasticsearch blog about exactly this kind of jar conflict. After following it and a day of fiddling, things finally worked. Two solutions are given here:
Solution 1:
1. The error above was triggered because I was using the usual ES transport-client style of connecting and querying, as follows:

import java.net.InetAddress;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.SearchType;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.QueryBuilders;

// Build the ES 2.x transport client; this is the path that needs the ES jar and its guava 18.
Settings settings = Settings.settingsBuilder().put("cluster.name", "rube-es").put("client.transport.sniff", true).build();
TransportClient tClient = TransportClient.builder().settings(settings).build();
String host = "";   // address of one ES node
Client client = tClient.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(host), 9300));

// A simple match-all scroll search, 500 hits per batch.
SearchResponse searchResponse = client.prepareSearch()
                    .setSearchType(SearchType.DEFAULT)
                    .setScroll(new TimeValue(60000))
                    .setIndices("indices")
                    .setTypes("type")
                    .setQuery(QueryBuilders.matchAllQuery())
                    .setFrom(0)
                    .setSize(500)
                    .execute()
                    .actionGet();
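  If the scroll needs to be walked to the end, the follow-up requests look roughly like this (my sketch against the ES 2.x API, continuing the snippet above; it additionally needs org.elasticsearch.search.SearchHit):

// Keep pulling batches until the scroll is exhausted.
String scrollId = searchResponse.getScrollId();
while (searchResponse.getHits().getHits().length > 0) {
    for (SearchHit hit : searchResponse.getHits().getHits()) {
        System.out.println(hit.getSourceAsString());   // handle each document
    }
    searchResponse = client.prepareSearchScroll(scrollId)
            .setScroll(new TimeValue(60000))
            .execute()
            .actionGet();
    scrollId = searchResponse.getScrollId();
}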

  This request style is what trips the conflict, so we can bypass the transport client entirely and query ES over HTTP instead. The snippet below uses an async HTTP client (the API shown matches the com.ning async-http-client 1.7.x line):

import java.util.concurrent.TimeUnit;
import com.ning.http.client.AsyncHttpClient;
import com.ning.http.client.AsyncHttpClientConfig;
import com.ning.http.client.ListenableFuture;
import com.ning.http.client.Response;

AsyncHttpClientConfig.Builder builder = new AsyncHttpClientConfig.Builder();
builder.setCompressionEnabled(true).setAllowPoolingConnection(true);
builder.setRequestTimeoutInMs((int) TimeUnit.MINUTES.toMillis(1));
builder.setIdleConnectionTimeoutInMs((int) TimeUnit.MINUTES.toMillis(1));

// url is the ES HTTP _search endpoint; queryStr is the JSON query body.
AsyncHttpClient client = new AsyncHttpClient(builder.build());
ListenableFuture<Response> future = client.preparePost(url).addHeader("content-type", "application/json").setBody(queryStr.getBytes("UTF-8")).execute();
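  To consume the result, block on the future and read the JSON body back. A minimal sketch continuing the snippet above:

// Wait for the response and read the raw ES search result as JSON.
Response response = future.get();
if (response.getStatusCode() == 200) {
    String json = response.getResponseBody("UTF-8");
    // parse json with whatever JSON library the project already uses
}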

  This sidesteps the guava problem entirely: the project's guava can be dropped to a version HBase accepts (16.x or lower), which solves the issue at least as a stopgap. But if the transport client cannot be avoided, see Solution 2 below.

Solution 2:
1. Following the Elasticsearch blog post https://www.elastic.co/blog/to-shade-or-not-to-shade , create a new Maven project with the configuration below (the POM is adapted from that post; set elasticsearch.version to the release you actually use, e.g. 2.3.3):

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>my.elasticsearch</groupId>
    <artifactId>es-shaded</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <elasticsearch.version>2.2.0</elasticsearch.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.elasticsearch</groupId>
            <artifactId>elasticsearch</artifactId>
            <version>${elasticsearch.version}</version>
        </dependency>
        <dependency>
            <groupId>org.elasticsearch.plugin</groupId>
            <artifactId>shield</artifactId>
            <version>${elasticsearch.version}</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>2.4.1</version>
                <configuration>
                    <createDependencyReducedPom>false</createDependencyReducedPom>
                </configuration>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <relocations>
                                <relocation>
                                    <pattern>com.google.guava</pattern>
                                    <shadedPattern>my.elasticsearch.guava</shadedPattern>
                                </relocation>
                                <relocation>
                                    <pattern>org.joda</pattern>
                                    <shadedPattern>my.elasticsearch.joda</shadedPattern>
                                </relocation>
                                <relocation>
                                    <pattern>com.google.common</pattern>
                                    <shadedPattern>my.elasticsearch.common</shadedPattern>
                                </relocation>
                            </relocations>
                            <transformers>
                                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer" />
                            </transformers>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

    <repositories>
        <repository>
            <id>elasticsearch-releases</id>
            <url>http://maven.elasticsearch.org/releases</url>
            <releases>
                <enabled>true</enabled>
                <updatePolicy>daily</updatePolicy>
            </releases>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </repository>
    </repositories>
</project>

  What this configuration does: the maven-shade-plugin relocates the potentially conflicting packages (Guava under com.google.common and com.google.guava, plus Joda-Time under org.joda) to new package names and repackages everything into a single jar, so a project that depends on this jar uses the copies bundled inside it rather than whatever versions the rest of the classpath provides.
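  To see the effect, here is a small sketch of my own (the relocated class name simply follows the shadedPattern in the POM above) that prints where each copy of Stopwatch is loaded from, one from the plain guava jar HBase uses and one bundled inside es-shaded:

public class ShadeCheck {
    public static void main(String[] args) throws Exception {
        // Plain Guava class, served by the ordinary guava jar that HBase depends on.
        Class<?> plain = Class.forName("com.google.common.base.Stopwatch");
        // Relocated copy inside es-shaded-1.0-SNAPSHOT.jar (com.google.common -> my.elasticsearch.common).
        Class<?> relocated = Class.forName("my.elasticsearch.common.base.Stopwatch");

        System.out.println(plain.getProtectionDomain().getCodeSource().getLocation());
        System.out.println(relocated.getProtectionDomain().getCodeSource().getLocation());
    }
}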

2. Then run mvn install on that project. A BUILD SUCCESS means the shaded jar was built and is now in your local .m2 repository. Next, go back to the pom of the original integration project and add the new dependency as shown below. (Note the exclusion of the plain Elasticsearch jar, 2.3.3 in my case: because createDependencyReducedPom is set to false, es-shaded still declares elasticsearch as a dependency, and without the exclusion Maven would also pull the unshaded jar onto the classpath and recreate the conflict.)

<dependency>
    <groupId>my.elasticsearch</groupId>
    <artifactId>es-shaded</artifactId>
    <version>1.0-SNAPSHOT</version>
    <exclusions>
        <exclusion>
            <groupId>org.elasticsearch</groupId>
            <artifactId>elasticsearch</artifactId>
        </exclusion>
    </exclusions>
</dependency>
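  With the shaded jar in place, the rest of the integration project can stay on the guava version HBase expects. A sketch of what the other relevant dependencies might look like (my illustration, using the versions discussed in this post):

<!-- HBase keeps its own, older guava; es-shaded carries its relocated guava internally. -->
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>1.0.0</version>
</dependency>
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>12.0</version>
</dependency>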