Hive Installation
I. Prerequisites
1. Hadoop is installed
See: Installing Hadoop
2. MySQL is installed
See: Installing MySQL
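A quick way to verify both prerequisites before continuing (a minimal check, assuming hadoop and mysql are already on the PATH):
hadoop version
mysql --version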
II. Installation Steps
1. Download apache-hive-3.1.2-bin.tar.gz to the /software directory and extract it:
tar -xzvf apache-hive-3.1.2-bin.tar.gz
2. Configure environment variables:
vi /etc/profile
export HIVE_HOME=/software/apache-hive-3.1.2-bin
export PATH=$PATH:$HIVE_HOME/bin
Apply the changes:
source /etc/profile
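To confirm the variables took effect (a minimal check):
echo $HIVE_HOME
which hive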
3. Copy the configuration file templates:
cd /software/apache-hive-3.1.2-bin/conf
cp hive-env.sh.template hive-env.sh
cp hive-log4j2.properties.template hive-log4j2.properties
cp hive-exec-log4j2.properties.template hive-exec-log4j2.properties
4. Edit hive-env.sh under /software/apache-hive-3.1.2-bin/conf and add:
export HIVE_CONF_DIR=/software/apache-hive-3.1.2-bin/conf
export HIVE_AUX_JARS_PATH=/software/apache-hive-3.1.2-bin/lib/
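If HADOOP_HOME is not already exported system-wide, hive-env.sh typically needs it as well; a sketch assuming the Hadoop installation path used in step 7:
export HADOOP_HOME=/software/hadoop-3.2.2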
5. Create hive-site.xml under /software/apache-hive-3.1.2-bin/conf with the contents below.
Here Zh_123456 is the password used to log in to the local MySQL server from other hosts.
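Hive connects to MySQL as root with this password over the network (host namenode in the JDBC URL below), so that account must accept remote connections. A minimal sketch, assuming MySQL 5.x syntax and that allowing root connections from any host is acceptable in your environment (tighten the host pattern as needed):
mysql -u root -p
GRANT ALL PRIVILEGES ON hive.* TO 'root'@'%' IDENTIFIED BY 'Zh_123456';
FLUSH PRIVILEGES;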
<configuration>
<property>
<name>system:java.io.tmpdir</name>
<value>/software/data/hive/tmp</value>
</property>
<property>
<name>system:user.name</name>
<value>root</value>
</property>
<property>
<name>hive.exec.scratchdir</name>
<value>/software/data/hive/tmp2</value>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/software/data/hive/warehouse</value>
</property>
<property>
<name>hive.querylog.location</name>
<value>/software/data/hive/log</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://namenode:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
<description>username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>Zh_123456</value>
<description>password to use against metastore database</description>
</property>
<property>
<name>datanucleus.schema.autoCreateAll</name>
<value>true</value>
</property>
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
</property>
<!-- <property>
<name>hive.support.concurrency</name>
<value>true</value>
</property>
<property>
<name>hive.enforce.bucketing</name>
<value>true</value>
</property>
<property>
<name>hive.exec.dynamic.partition.mode</name>
<value>nonstrict</value>
</property>
<property>
<name>hive.txn.manager</name>
<value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>
<property>
<name>hive.compactor.initiator.on</name>
<value>true</value>
</property>
<property>
<name>hive.compactor.worker.threads</name>
<value>1</value>
</property>
<property>
<name>hive.in.test</name>
<value>true</value>
</property>
-->
</configuration>
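The local temp directory referenced by system:java.io.tmpdir should exist before Hive starts. With datanucleus.schema.autoCreateAll set to true the metastore tables are created automatically on first use; as an optional alternative, the schema can be initialized explicitly with the bundled schematool (run it only after the jar fix in step 7, otherwise it may fail on the guava version mismatch):
mkdir -p /software/data/hive/tmp
schematool -dbType mysql -initSchema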
6. Create HDFS directories and set permissions
Start the Hadoop services, then run:
hadoop fs -mkdir -p /user/hive/warehouse
hadoop fs -mkdir -p /user/hive/tmp
hadoop fs -mkdir -p /user/hive/log
hadoop fs -chmod -R 777 /user/hive/warehouse
hadoop fs -chmod -R 777 /user/hive/tmp
hadoop fs -chmod -R 777 /user/hive/log
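A quick check that the directories and permissions are in place:
hadoop fs -ls /user/hive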
7. Resolve jar version conflicts
Make the guava jar versions consistent: delete the lower-version jar shipped with Hive and copy the higher-version one from Hadoop into Hive's lib directory:
rm -f /software/apache-hive-3.1.2-bin/lib/guava-19.0.jar
cp /software/hadoop-3.2.2/share/hadoop/common/lib/guava-27.0-jre.jar /software/apache-hive-3.1.2-bin/lib/
Also copy the MySQL JDBC driver jar (mysql.jar) into Hive's lib directory, as sketched below.
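The exact file name depends on the Connector/J version you downloaded; this sketch assumes a 5.1.x connector (matching the com.mysql.jdbc.Driver class configured above) placed in /software:
cp /software/mysql-connector-java-5.1.49.jar /software/apache-hive-3.1.2-bin/lib/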
8. Access and testing
Start the Hadoop services (HDFS and YARN).
Run the hive command to open the Hive CLI.
At the Hive prompt, enter: create database test;
Then enter: show databases;
If the test database appears in the output, the installation succeeded.
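An optional, slightly longer smoke test that also exercises table creation and a query (the table and column names are arbitrary examples):
use test;
create table t1 (id int, name string);
insert into t1 values (1, 'hive');
select * from t1;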
9. Remote access (HiveServer2)
Start the remote access service:
nohup hive --service hiveserver2 &
Connect locally with Beeline:
beeline -u jdbc:hive2://localhost:10000 -n root
Web UI address (the IP is the master node's IP):
http://192.168.1.3:10002/hiveserver2.jsp
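If the Beeline connection is rejected with an impersonation error (e.g. "User: root is not allowed to impersonate root"), Hadoop's core-site.xml usually needs proxy-user settings for the connecting user. A sketch assuming that user is root; restart HDFS and YARN after the change:
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>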