Fixing Hive's "Failed to connect to the MetaStore Server" Error

This post walks through a connection problem I hit while setting up MySQL as Hive's metastore, and the steps that resolved it: first check that the correct MySQL JDBC driver is under Hive's lib directory; then make sure the MySQL server (running on the same host) accepts logins from the hive account, deleting any rows in the user table whose user column is empty and running FLUSH PRIVILEGES; finally, check hive-site.xml, in particular the hive.metastore.uris setting, and clear its value if it is wrong. My working hive-site.xml is shared at the end; beginners should configure only the essential options.

I have recently been teaching myself Hive. Installing Hive and running it against its bundled database worked fine, but I wanted to use MySQL as the backing database, which takes some configuration. After setting things up according to a guide found online, running

show databases

failed with this error:

Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient

The hive.log file showed:

2013-06-01 11:52:16,379 WARN  hive.metastore (HiveMetaStoreClient.java:open(285)) - Failed to connect to the MetaStore Server...
2013-06-01 11:52:17,382 WARN  hive.metastore (HiveMetaStoreClient.java:open(285)) - Failed to connect to the MetaStore Server...
2013-06-01 11:52:18,383 WARN  hive.metastore (HiveMetaStoreClient.java:open(285)) - Failed to connect to the MetaStore Server...
2013-06-01 11:52:19,386 WARN  hive.metastore (HiveMetaStoreClient.java:open(285)) - Failed to connect to the MetaStore Server...
2013-06-01 11:52:20,387 WARN  hive.metastore (HiveMetaStoreClient.java:open(285)) - Failed to connect to the MetaStore Server...
2013-06-01 11:52:21,392 ERROR exec.Task (SessionState.java:printError(401)) - FAILED: Error in metadata: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient

 

The log makes the problem clear: the client cannot connect to the metastore server.

Here is how I resolved it:

1) First, check that the MySQL JDBC driver jar is present under hive/lib.
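This check can be scripted. A minimal sketch; `HIVE_HOME` and the fallback path are assumptions, so adjust them for your installation:

```shell
#!/bin/sh
# Report whether a MySQL connector jar is present in a given Hive lib directory.
check_driver() {
  # $1: the lib directory to inspect
  if ls "$1"/mysql-connector-*.jar >/dev/null 2>&1; then
    echo "driver found"
  else
    echo "driver missing"
  fi
}

# HIVE_HOME and /usr/local/hive are illustrative; point this at your install.
check_driver "${HIVE_HOME:-/usr/local/hive}/lib"
```

If this prints "driver missing", copy the connector jar (e.g. mysql-connector-java-*.jar) into hive/lib before going any further.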

2) On the server where Hive runs, use the mysql command line to confirm you can connect to the MySQL server. In my setup, Hive and MySQL run on the same machine. I logged in to MySQL as root, created a hive account, and granted it privileges, but Hive still could not log in. Searching the error message online, the cause turned out to be rows in the mysql user table whose user column is empty. The fix is to delete those rows, and don't forget to run FLUSH PRIVILEGES afterwards. Confirm that the hive account can log in to the MySQL server from the machine running Hive.
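Concretely, the cleanup can be done from the MySQL prompt as root. A sketch of the statements involved; the `'hive'@'localhost'` account, its password, and the `metastore_db` database match the hive-site.xml shared at the end of this post, so substitute your own values:

```sql
-- Inspect anonymous accounts (rows whose User column is empty)
SELECT host, user FROM mysql.user WHERE user = '';

-- Remove them; anonymous accounts can shadow the hive account at login time
DELETE FROM mysql.user WHERE user = '';

-- Make sure the hive account exists and can reach the metastore database
GRANT ALL PRIVILEGES ON metastore_db.* TO 'hive'@'localhost' IDENTIFIED BY 'a123';

-- Reload the privilege tables so the changes take effect immediately
FLUSH PRIVILEGES;
```

The `GRANT ... IDENTIFIED BY` form shown here is MySQL 5.x syntax; newer MySQL versions require a separate `CREATE USER` statement.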

3) Configure hive-site.xml correctly.

4) In my case, even after configuring hive-site.xml, running show tables still produced the same error. Rechecking hive-site.xml, I found:

<property>
  <name>hive.metastore.uris</name>
  <value>thrift://xxxxxxxx</value>
  <description>Thrift uri for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>

I didn't know what this setting was for, so I simply cleared the value and ran show databases again. Success!

5) A reminder to readers: if you hit an error like the one above, carefully check the value configured for hive.metastore.uris. When this property is empty, Hive uses an embedded (local) metastore and talks to the JDBC database directly; when it is set, Hive tries to reach a standalone thrift metastore service at that address, which fails if no such service is running. Beginners are advised to leave it unconfigured.
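For a local (embedded) metastore, the property can simply be left with an empty value. A sketch of what that looks like in hive-site.xml:

```xml
<property>
  <name>hive.metastore.uris</name>
  <value></value>
  <description>Leave empty to use a local metastore; set a thrift:// URI only
  when a standalone metastore service is actually running at that address.</description>
</property>
```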

Finally, here is my hive-site.xml configuration:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://127.0.0.1:3306/metastore_db?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>username to use against metastore database</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>a123</value>
  <description>password to use against metastore database</description>
</property>

 

Beginners only need to configure these four properties.

**Comment:**

I followed the steps and am now stuck. Here is my session:

```shell
[root@master server]# # Rename to hive (shorter)
[root@master server]# mv apache-hive-3.1.2-bin hive
[root@master server]# vi /etc/profile
[root@master server]# source /etc/profile
[root@master server]# echo $HIVE_HOME
/export/server/hive
[root@master server]# cd /export/server/hive/lib
[root@master lib]# # Remove the old guava bundled with Hive
[root@master lib]# rm -f guava-19.0.jar
[root@master lib]# # Copy the newer one from Hadoop (check the actual version first)
[root@master lib]# ls /export/server/hadoop/share/hadoop/common/lib/guava-*.jar
ls: cannot access /export/server/hadoop/share/hadoop/common/lib/guava-*.jar: No such file or directory
[root@master lib]# cd /export/server/hive
[root@master hive]# # Create the conf directory (if it doesn't exist)
[root@master hive]# mkdir -p conf
[root@master hive]# cd conf
[root@master conf]# vi hive-site.xml
[root@master conf]# cp /export/soft/mysql-connector-java-5.1.49.jar /export/server/hive/lib/
[root@master conf]# ls /export/server/hive/lib/mysql-connector-java-5.1.49.jar
/export/server/hive/lib/mysql-connector-java-5.1.49.jar
[root@master conf]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.7.43 MySQL Community Server (GPL)
```

Then in MySQL:

```sql
-- Create the hive database
CREATE DATABASE IF NOT EXISTS hive CHARACTER SET utf8 COLLATE utf8_general_ci;
-- Grant the root user access to the hive database (we connect as root;
-- production setups should create a dedicated user, this is simplified)
GRANT ALL PRIVILEGES ON hive.* TO 'root'@'%';
-- Reload privileges
FLUSH PRIVILEGES;
EXIT;
```

Schema initialization then fails:

```shell
[root@master conf]# cd /export/server/hive
[root@master hive]# # Run schematool initialization
[root@master hive]# bin/schematool -dbType mysql -initSchema -verbose
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/export/server/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/export/server/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
        at org.apache.hadoop.mapred.JobConf.setJar(JobConf.java:518)
        at org.apache.hadoop.mapred.JobConf.setJarByClass(JobConf.java:536)
        at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:430)
        at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:5141)
        at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:5104)
        at org.apache.hive.beeline.HiveSchemaTool.<init>(HiveSchemaTool.java:96)
        at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:1473)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
```

Starting the Metastore fails with the same root cause:

```shell
[root@master hive]# # Create the log directory
[root@master hive]# mkdir -p /export/server/hive/logs
[root@master hive]# # Start the Metastore (in the background)
[root@master hive]# nohup /export/server/hive/bin/hive --service metastore \
> > /export/server/hive/logs/metastore.log 2>&1 &
[1] 124844
[root@master hive]# # Check whether it started
[root@master hive]# tail -f /export/server/hive/logs/metastore.log
nohup: ignoring input
2025-11-26 22:50:08: Starting Hive Metastore Server
MetaException(message:com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:84)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:93)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:8661)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:8926)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:8843)
Caused by: java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
        at org.apache.hadoop.hive.metastore.ObjectStore.correctAutoStartMechanism(ObjectStore.java:641)
        at org.apache.hadoop.hive.metastore.ObjectStore.getDataSourceProps(ObjectStore.java:572)
        at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:349)
        at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStoreForConf(HiveMetaStore.java:718)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:767)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:538)
        ... 11 more
^C
[1]+  Exit 1  nohup /export/server/hive/bin/hive --service metastore > /export/server/hive/logs/metastore.log 2>&1
```

(The log then repeats the same NoSuchMethodError stack trace a second time as the main-thread exception.)

HiveServer2 fails identically, and jps shows only the Hadoop daemons running:

```shell
[root@master hive]# # Start HiveServer2
[root@master hive]# nohup /export/server/hive/bin/hive --service hiveserver2 \
> > /export/server/hive/logs/hiveserver2.log 2>&1 &
[1] 126010
[root@master hive]# # Check the log
[root@master hive]# tail -f /export/server/hive/logs/hiveserver2.log
nohup: ignoring input
which: no hbase in (/usr/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/export/server/jdk/bin:/export/server/hadoop/bin:/export/server/hadoop/sbin:/export/server/mysql/bin:/export/server/hive/bin)
2025-11-26 22:50:53: Starting HiveServer2
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
        at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:5141)
        at org.apache.hadoop.hive.common.LogUtils.initHiveLog4j(LogUtils.java:81)
        at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:1141)
^C
[1]+  Exit 1  nohup /export/server/hive/bin/hive --service hiveserver2 > /export/server/hive/logs/hiveserver2.log 2>&1
[root@master hive]# jps
96417 DataNode
96162 NameNode
126676 Jps
103240 ResourceManager
103483 NodeManager
96894 SecondaryNameNode
```

Beeline cannot connect either (and the firewall-cmd commands I typed next went to the beeline prompt by mistake):

```shell
[root@master hive]# /export/server/hive/bin/beeline
Beeline version 3.1.2 by Apache Hive
beeline> !connect jdbc:hive2://192.168.161.130:10000
Connecting to jdbc:hive2://192.168.161.130:10000
Enter username for jdbc:hive2://192.168.161.130:10000: root
Enter password for jdbc:hive2://192.168.161.130:10000:
25/11/26 22:51:47 [main]: WARN jdbc.HiveConnection: Failed to connect to 192.168.161.130:10000
Could not open connection to the HS2 server. Please check the server URI and
if the URI is correct, then ask the administrator to check the server status.
Error: Could not open client transport with JDBC Uri: jdbc:hive2://192.168.161.130:10000:
java.net.ConnectException: Connection refused (state=08S01,code=0)
beeline> SHOW DATABASES;
No current connection
beeline> !quit
[root@master hive]#
```
**Reply (posted 11-27):**
# Hive Installation: Diagnosis and Fix

You have completed most of the configuration, but you are hitting one core error:

```
java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument
```

It appears repeatedly during `schematool` initialization and during both Metastore and HiveServer2 startup, which means the underlying problem is still unresolved. Let's fix it step by step.

## Root cause

You ran:

```bash
rm -f guava-19.0.jar
```

but the subsequent attempt to copy a newer version from Hadoop failed:

```bash
ls /export/server/hadoop/share/hadoop/common/lib/guava-*.jar
# output: No such file or directory
```

In other words, you deleted Hive's bundled Guava jar without putting a correct replacement in place, so Hive is missing a required dependency.

The correct approach: find the Guava version Hadoop actually uses and copy that jar into Hive's lib directory.

## Step 1: Find the Guava version Hadoop uses

Search for the real Guava jar inside the Hadoop installation:

```bash
find /export/server/hadoop -name "guava*.jar"
```

Typical output:

```
/export/server/hadoop/share/hadoop/common/lib/guava-27.0-jre.jar
```

or

```
/export/server/hadoop/share/hadoop/common/lib/guava-19.0.jar
```

Note the full path.

## Step 2: Copy the correct Guava jar into Hive

Assuming the search returned:

```
/export/server/hadoop/share/hadoop/common/lib/guava-27.0-jre.jar
```

run:

```bash
cd /export/server/hive/lib
cp /export/server/hadoop/share/hadoop/common/lib/guava-27.0-jre.jar ./
```

If the file is not found, go back to step 1 and confirm the path. Verify the copy succeeded:

```bash
ls -l guava-*.jar
```

You should see something like:

```
-rw-r--r-- 1 root root 2829776 Nov 26 22:40 guava-27.0-jre.jar
```

## Step 3: Clean up leftover metadata (important)

Since the earlier initialization failed and may have partially written data, clear the Hive metadata in MySQL.

### 3.1 Stop all Hive processes

```bash
pkill -f hive
jps   # confirm no HiveMetaStore or HiveServer2 remain
```

### 3.2 Recreate the hive database

```bash
mysql -u root -p
```

After entering the password, run:

```sql
DROP DATABASE IF EXISTS hive;
CREATE DATABASE hive CHARACTER SET utf8 COLLATE utf8_general_ci;
GRANT ALL PRIVILEGES ON hive.* TO 'root'@'%' IDENTIFIED BY '123456';
FLUSH PRIVILEGES;
EXIT;
```

## Step 4: Re-initialize the metastore schema

```bash
cd /export/server/hive
bin/schematool -dbType mysql -initSchema -verbose
```

Success looks like:

```
Starting metastore schema initialization to 3.1.2
Initialization script completed
schemaTool completed
```

If it still fails, post the full log.

## Step 5: Start the Metastore service

Create the log directory (if missing):

```bash
mkdir -p /export/server/hive/logs
```

Start the Metastore in the background:

```bash
nohup /export/server/hive/bin/hive --service metastore \
  > /export/server/hive/logs/metastore.log 2>&1 &
```

Watch the log to confirm startup:

```bash
tail -f /export/server/hive/logs/metastore.log
```

Success looks like:

```
Hive Metastore starting stats collection
Starting DB notification cleaner thread
Started ABSTRACT_FMT_DELTA event processor!
Started metastore
```

Press `Ctrl+C` to stop watching the log.

## Step 6: Start HiveServer2

```bash
nohup /export/server/hive/bin/hive --service hiveserver2 \
  > /export/server/hive/logs/hiveserver2.log 2>&1 &
```

Check the log:

```bash
tail -f /export/server/hive/logs/hiveserver2.log
```

Success looks like:

```
Starting HiveServer2
HiveServer2 started successfully
```

It may take 10-30 seconds to start fully.

## Step 7: Open the firewall ports

```bash
firewall-cmd --permanent --add-port=9083/tcp    # Metastore
firewall-cmd --permanent --add-port=10000/tcp   # HiveServer2
firewall-cmd --reload
```

(Run these at a root shell, not at the beeline prompt.)

## Step 8: Test the connection with Beeline

```bash
/export/server/hive/bin/beeline
```

At the Beeline prompt, enter:

```bash
!connect jdbc:hive2://192.168.161.130:10000
```

- Username: `root`
- Password: `123456`

After connecting, run:

```sql
SHOW DATABASES;
```

Expected output:

```
+----------------+--+
| database_name  |
+----------------+--+
| default        |
+----------------+--+
```

## Final check: create a test table

```sql
CREATE TABLE test(id INT, name STRING);
INSERT INTO test VALUES (1, 'hello');
SELECT * FROM test;
```

If inserts and queries work, Hive is fully operational.

## Key takeaways

1. **Guava compatibility.** Hive 3.1.2 requires Guava >= 27.0; an older version triggers the `NoSuchMethodError`.
2. **Service dependency.** HiveServer2 depends on the Metastore service; the Metastore must be running before clients can connect.
3. **Schema initialization.** `schematool -initSchema` creates roughly 74 tables in MySQL to store table definitions and other metadata.