Hive configuration in practice: installing Hive and MySQL on CentOS, and configuring MySQL as the metastore database

1 Installing Hive

Official site: http://hive.apache.org/index.html

1.1 Download and extract to the target path

[root@localhost 下载]# tar -zxvf ./apache-hive-3.1.1-bin.tar.gz -C /usr/local
[root@localhost 下载]# cd /usr/local
[root@localhost local]# mv apache-hive-3.1.1-bin hive

1.2 Configure environment variables

Unless strictly necessary, avoid performing these steps as root.

[root@localhost local]# su hadoop		#switch user
[hadoop@localhost local]$ vim ~/.bashrc

Add the following to ~/.bashrc:

#Hive Environment Variables
export HIVE_HOME=/usr/local/hive
export PATH=$PATH:$HIVE_HOME/bin

Save and exit, then run source ~/.bashrc to apply the change.
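To confirm the two exports took effect, a quick check like the following can be used (a sketch; it just re-creates the exports and verifies that Hive's bin directory landed on PATH):

```shell
# Re-create the two exports from ~/.bashrc and verify the result.
export HIVE_HOME=/usr/local/hive
export PATH=$PATH:$HIVE_HOME/bin

# PATH should now contain hive's bin directory as its own entry.
case ":$PATH:" in
  *":/usr/local/hive/bin:"*) echo "PATH ok" ;;
  *) echo "PATH missing hive/bin" ;;
esac
```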

1.3 Edit Hive's configuration files

Rename hive-default.xml.template under /usr/local/hive/conf to hive-default.xml:

[hadoop@localhost conf]$ sudo mv hive-default.xml.template hive-default.xml

Create a new file named hive-site.xml:

[hadoop@localhost conf]$ sudo touch hive-site.xml

Add the following content:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.cj.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore; com.mysql.cj.jdbc.Driver is the class for Connector/J 8.x (the legacy com.mysql.jdbc.Driver is deprecated there)</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
    <description>password to use against metastore database</description>
  </property>
</configuration>
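A small helper (a sketch, not part of any official tooling) can confirm that all four javax.jdo.option.* connection properties actually made it into the file:

```shell
# check_metastore_props FILE
# Prints found/MISSING for each of the four metastore connection keys.
check_metastore_props() {
  conf="$1"
  for p in ConnectionURL ConnectionDriverName ConnectionUserName ConnectionPassword; do
    if grep -q "javax.jdo.option.$p" "$conf" 2>/dev/null; then
      echo "$p: found"
    else
      echo "$p: MISSING"
    fi
  done
}

# Usage: check_metastore_props /usr/local/hive/conf/hive-site.xml
```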

2 Installing MySQL

If wget is not available, install it first with yum install wget.

1 Download the MySQL Yum repository package

wget http://dev.mysql.com/get/mysql57-community-release-el7-11.noarch.rpm

2 Install the MySQL Yum repository package

yum -y install mysql57-community-release-el7-11.noarch.rpm

3 Check the result

yum repolist enabled | grep "mysql.*"


2.1 Install MySQL Community Server

yum install mysql-community-server


2.2 Start MySQL

systemctl start mysqld.service

systemctl status mysqld.service


2.3 Initialize the database password

grep "password" /var/log/mysqld.log
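The temporary password is the last field of the matching log line, so it can be pulled out directly. Below is a sketch against a fabricated log line (the real file is /var/log/mysqld.log; the line format matches MySQL 5.7's output, and the password shown is made up):

```shell
# Fabricated sample of the mysqld.log line (password is made up).
LOG=$(mktemp)
echo '2019-06-15T12:00:00.000000Z 1 [Note] A temporary password is generated for root@localhost: Xy?z123abc' > "$LOG"

# The temporary password is the last whitespace-separated field.
grep 'temporary password' "$LOG" | awk '{print $NF}'   # → Xy?z123abc
rm -f "$LOG"
```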

Log in:

mysql -u root -p


ALTER USER 'root'@'localhost' IDENTIFIED BY '****************';

Note: MySQL 5.7 enables the validate_password plugin by default, so the new password must be at least 8 characters and include upper case, lower case, a digit, and a special character.

2.4 Enable start on boot

systemctl enable mysqld

systemctl daemon-reload

2.5 Download and configure the MySQL JDBC connector

Extract it to the target path:

[hadoop@localhost 下载]$ sudo tar -zxvf ./mysql-connector-java-8.0.16.tar.gz -C /usr/local

Copy mysql-connector-java-8.0.16/mysql-connector-java-8.0.16.jar into /usr/local/hive/lib, since Hive loads JDBC drivers from its lib directory:

[hadoop@localhost local]$ sudo cp mysql-connector-java-8.0.16/mysql-connector-java-8.0.16.jar /usr/local/hive/lib

Enter the MySQL shell:

[hadoop@localhost local]$ mysql -u root -p
Enter password:

Create a database named hive:

mysql> create database hive;     #this database corresponds to the "hive" in localhost:3306/hive in hive-site.xml and stores Hive's metadata
Query OK, 1 row affected (0.00 sec)

mysql> grant all on *.* to hive@localhost identified by '********************';     #grant all privileges on every table of every database to the hive user; the password must match ConnectionPassword in hive-site.xml

mysql> flush privileges;     #reload MySQL's privilege tables
Query OK, 0 rows affected (0.01 sec)
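The three statements above can also be collected into a script and fed to the client in one go (a sketch; replace the masked password with the real one from hive-site.xml):

```shell
# Write the metastore bootstrap statements to a file.
# (GRANT ... IDENTIFIED BY is MySQL 5.7 syntax; 8.0 requires CREATE USER first.)
cat > init_hive_metastore.sql <<'SQL'
CREATE DATABASE IF NOT EXISTS hive;
GRANT ALL ON *.* TO hive@localhost IDENTIFIED BY '********************';
FLUSH PRIVILEGES;
SQL

# Then run it non-interactively (prompts for the root password):
# mysql -u root -p < init_hive_metastore.sql
cat init_hive_metastore.sql
```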

3 Starting Hive

Before starting Hive, start the Hadoop cluster first:

[hadoop@localhost hadoop]$ start-dfs.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [account.jetbrains.com]

[hadoop@localhost hadoop]$ start-yarn.sh
Starting resourcemanager
Starting nodemanagers

If HBase is also deployed on the node, it can be started now too (Hive itself does not require it):
[hadoop@localhost conf]$ start-hbase.sh
localhost: running zookeeper, logging to /usr/local/hbase/logs/hbase-hadoop-zookeeper-localhost.localdomain.out
running master, logging to /usr/local/hbase/logs/hbase-hadoop-master-localhost.localdomain.out
: running regionserver, logging to /usr/local/hbase/logs/hbase-hadoop-regionserver-localhost.localdomain.out

[hadoop@localhost conf]$ hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive Session ID = 6c4e6952-118a-4199-8f55-07f4d317e864

Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-3.1.1.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> 

Done (~ ̄▽ ̄)~

Problems and fixes

If yum update has been run at some point, OpenJDK may have been upgraded; in that case, starting the Hadoop cluster may report:

ERROR: JAVA_HOME /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.201.b09-2.el7_6.x86_64 does not exist.

Use the following commands to locate the actual Java installation path, then update the environment variables accordingly:

[hadoop@localhost ~]$ which java
/usr/bin/java
[hadoop@localhost ~]$ ls -lrt /usr/bin/java
lrwxrwxrwx. 1 root root 22 6月  15 20:55 /usr/bin/java -> /etc/alternatives/java
[hadoop@localhost ~]$ ls -lrt /etc/alternatives/java
lrwxrwxrwx. 1 root root 73 6月  15 20:55 /etc/alternatives/java -> /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64/jre/bin/java

Then point the Java environment variables in ~/.bashrc at the new path:

# java Environment variables
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64
export JAVA_INSTALL=$JAVA_HOME
export JAVA_COMMON_HOME=$JAVA_HOME/jre/bin/java
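The path juggling above can be reduced to a single line: resolve the real java binary, then strip the trailing jre/bin/java (or bin/java) suffix to get JAVA_HOME. A sketch, shown here against the path from the listing above rather than a live system:

```shell
# Resolved java binary from the listing above (on a live system use:
#   JAVA_BIN=$(readlink -f "$(which java)")   -- GNU readlink assumed).
JAVA_BIN=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64/jre/bin/java

# Strip /jre/bin/java (or /bin/java for JDKs without a jre/ dir).
JAVA_HOME=$(echo "$JAVA_BIN" | sed 's:/jre/bin/java$::; s:/bin/java$::')
echo "$JAVA_HOME"
```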

