Prerequisites
- hadoop-3.2.0
- MySQL
Deployment Plan
| No. | Server IP | Hostname | Installed Software | Running Processes |
| --- | --- | --- | --- | --- |
| 1 | 192.168.21.131 | master | hadoop, zookeeper, hive-metastore, spark, hue | QuorumPeerMain, ResourceManager, JournalNode, DFSZKFailoverController, NameNode, Master |
| 2 | 192.168.21.132 | slave1 | hadoop, zookeeper, spark | QuorumPeerMain, JournalNode, DataNode, NodeManager, Worker |
| 3 | 192.168.21.133 | slave2 | hadoop, zookeeper, spark | QuorumPeerMain, JournalNode, DataNode, NodeManager, Worker |
| 4 | 192.168.21.134 | slave3 | hadoop, spark | DataNode, NodeManager, Worker |
| 5 | 192.168.21.135 | master2 | hadoop, hive-client, spark | ResourceManager, DFSZKFailoverController, NameNode, Worker |
Extract and Install
tar -zxvf apache-hive-3.1.2-bin.tar.gz
mv apache-hive-3.1.2-bin ../hive-3.1.2
Distribute
scp -r /opt/hive-3.1.2/ hadoop@master2:/opt/
Environment Variables
Configure on both master and master2:
vim ~/.bashrc # ideally make the same change in /etc/profile as well
# append at the end of the file
#hive
export HIVE_HOME=/opt/hive-3.1.2
export PATH=$PATH:$HIVE_HOME/bin
# after saving, apply the changes
source ~/.bashrc
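After sourcing, a quick sanity check confirms the variables took effect. This is a minimal sketch that only inspects the shell environment; it does not touch the Hive install itself:

```shell
# Re-create the two exports from ~/.bashrc and confirm they resolved.
export HIVE_HOME=/opt/hive-3.1.2
export PATH=$PATH:$HIVE_HOME/bin
echo "$HIVE_HOME"            # should print /opt/hive-3.1.2
case ":$PATH:" in
  *":$HIVE_HOME/bin:"*) echo "PATH ok" ;;
  *)                    echo "PATH is missing $HIVE_HOME/bin" ;;
esac
```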
Configuration Files
Metastore Server
Configure on master.
hive-site.xml
cd hive-3.1.2/conf/
cp hive-default.xml.template hive-site.xml
vim hive-site.xml
<configuration>
<property><!-- JDBC connection URL; metadata is stored in MySQL -->
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://192.168.21.1:3306/hive?useSSL=false</value>
</property>
<property><!-- JDBC driver class -->
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.cj.jdbc.Driver</value>
</property>
<property><!-- database user -->
<name>javax.jdo.option.ConnectionUserName</name>
<value>hadoop</value>
<description>Username to use against metastore database</description>
</property>
<property><!-- database password -->
<name>javax.jdo.option.ConnectionPassword</name>
<value>hadoop</value>
<description>password to use against metastore database</description>
</property>
<property><!-- default warehouse location for managed tables -->
<name>hive.metastore.warehouse.dir</name>
<value>/opt/hive-3.1.2/warehouse</value>
</property>
<property>
<name>hive.exec.local.scratchdir</name>
<value>/opt/hive-3.1.2/tmp</value>
<description>Local scratch space for Hive jobs</description>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/opt/hive-3.1.2/tmp</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
<name>hive.querylog.location</name>
<value>/opt/hive-3.1.2/tmp</value>
<description>Location of Hive run time structured log file</description>
</property>
</configuration>
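The connection settings above assume that the `hive` database and the `hadoop` user already exist on the MySQL server (192.168.21.1 in this setup). A hypothetical setup sketch, to be adapted to your environment; the script is only written out here and still has to be run against MySQL as an administrative user:

```shell
# Write the DDL to a file; database name, user, and password match hive-site.xml above.
cat > /tmp/create_hive_metastore.sql <<'EOF'
CREATE DATABASE IF NOT EXISTS hive DEFAULT CHARACTER SET utf8mb4;
CREATE USER IF NOT EXISTS 'hadoop'@'%' IDENTIFIED BY 'hadoop';
GRANT ALL PRIVILEGES ON hive.* TO 'hadoop'@'%';
FLUSH PRIVILEGES;
EOF
# Then apply it, e.g.: mysql -h 192.168.21.1 -u root -p < /tmp/create_hive_metastore.sql
```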
hive-env.sh
cd hive-3.1.2/conf/
cp hive-env.sh.template hive-env.sh
vim hive-env.sh
# Set HADOOP_HOME to point to a specific hadoop install directory
export HADOOP_HOME=/opt/hadoop-3.2.0/
# Hive Configuration Directory can be controlled by:
export HIVE_CONF_DIR=/opt/hive-3.1.2/conf
# Folder containing extra libraries required for hive compilation/execution can be controlled by:
export HIVE_AUX_JARS_PATH=/opt/hive-3.1.2/lib
Client
Configure on master2.
hive-site.xml
<configuration>
<property><!-- this property is required -->
<name>hive.metastore.uris</name><!-- the client connects to this URI to request metadata from the remote metastore -->
<value>thrift://master:9083</value>
</property>
<property>
<name>hive.exec.local.scratchdir</name>
<value>/opt/hive-3.1.2/tmp</value>
<description>Local scratch space for Hive jobs</description>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/opt/hive-3.1.2/tmp</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
<name>hive.querylog.location</name>
<value>/opt/hive-3.1.2/tmp</value>
<description>Location of Hive run time structured log file</description>
</property>
</configuration>
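Hive aborts at startup with a parse error if hive-site.xml is not well-formed XML, so it is worth validating the file after editing. A minimal sketch using Python's standard library (assumes python3 is installed; the path is illustrative):

```shell
# Recreate the minimal client config and verify it parses as XML.
cat > /tmp/hive-site-client.xml <<'EOF'
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://master:9083</value>
  </property>
</configuration>
EOF
python3 -c "import xml.etree.ElementTree as ET; ET.parse('/tmp/hive-site-client.xml')" \
  && echo "hive-site.xml is well-formed"
```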
The hive-env.sh file can be the same as on the server (it mainly sets the paths).
HDFS
Switch to the hadoop user and create the following directories (if custom locations are needed):
hadoop fs -mkdir -p /usr/hive/tmp
hadoop fs -mkdir -p /usr/hive/logs
hadoop fs -mkdir -p /usr/hive/warehouse
hdfs dfs -chmod -R 777 /usr/hive/tmp
hdfs dfs -chmod -R 777 /usr/hive/logs
hdfs dfs -chmod -R 777 /usr/hive/warehouse
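The six commands above follow the same pattern per directory, so they can be collapsed into one loop. Shown here as a dry run that only prints each command; drop the leading `echo` to execute them for real against a running cluster:

```shell
# Dry run: print the HDFS commands instead of executing them.
for d in tmp logs warehouse; do
  echo hdfs dfs -mkdir -p "/usr/hive/$d"
  echo hdfs dfs -chmod -R 777 "/usr/hive/$d"
done
```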
MySQL Driver
Download the connector from the official site: https://dev.mysql.com/downloads/connector/j/
Choose the package matching your platform, extract it, and copy the JAR to /opt/hive-3.1.2/lib.
Verification
Start the service on the server side
schematool -initSchema -dbType mysql # initialize the metastore schema
hive --service metastore -p 9083 & # start the metastore service in the background
Start the client
hive
hive>show databases;
References
This article is based on two original posts by the 优快云 blogger 三石君1991:
- hive on spark (yarn) 安装部署 (Hive on Spark-on-YARN installation and deployment): https://blog.youkuaiyun.com/weixin_43860247/article/details/89184081
- Hive安装部署 (Hive installation and deployment): https://blog.youkuaiyun.com/weixin_43860247/article/details/89087941