MySQL installation: https://blog.youkuaiyun.com/qq_41028958/article/details/80820397
Hive installation steps:
Extract the archive:
[hadoop@master app]$ tar -zxvf apache-hive-2.1.1-bin.tar.gz -C /usr/local/soft
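The Apache Hive binary tarball normally extracts to a directory named apache-hive-2.1.1-bin, while the rest of this guide uses hive-2.1.1, so rename it if needed (a small assumed step):
[hadoop@master soft]$ mv /usr/local/soft/apache-hive-2.1.1-bin /usr/local/soft/hive-2.1.1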
[hadoop@master soft]$ pwd
/usr/local/soft
Configure environment variables:
vi ~/.bashrc
export HIVE_HOME=/usr/local/soft/hive-2.1.1
export PATH=$PATH:$HIVE_HOME/bin
[hadoop@master soft]$ source ~/.bashrc
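A quick check that the variable took effect (assuming the paths above):
[hadoop@master soft]$ echo $HIVE_HOME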
Go to the hive-2.1.1/conf directory:
[hadoop@master conf]$ pwd
/usr/local/soft/hive-2.1.1/conf
Run the following commands:
cp hive-env.sh.template hive-env.sh
cp hive-default.xml.template hive-site.xml
cp hive-log4j2.properties.template hive-log4j2.properties
cp hive-exec-log4j2.properties.template hive-exec-log4j2.properties
[hadoop@master conf]$ vi hive-env.sh
Add the following lines:
export JAVA_HOME=/usr/local/soft/jdk1.8.0
export HADOOP_HOME=/usr/local/soft/hadoop-2.7.3
export HIVE_HOME=/usr/local/soft/hive-2.1.1
export HIVE_CONF_DIR=/usr/local/soft/hive-2.1.1/conf
Create the HDFS directories that will be referenced in hive-site.xml.
In the root directory, run the following commands:
hdfs dfs -mkdir -p /user/hive/warehouse
hdfs dfs -mkdir -p /user/hive/tmp
hdfs dfs -mkdir -p /user/hive/log
hdfs dfs -chmod -R 777 /user/hive/warehouse
hdfs dfs -chmod -R 777 /user/hive/tmp
hdfs dfs -chmod -R 777 /user/hive/log
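You can confirm the directories and permissions with an optional check:
hdfs dfs -ls /user/hive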
[hadoop@master conf]$ vi hive-site.xml
<configuration>
<property>
<name>hive.exec.scratchdir</name>
<value>/user/hive/tmp</value>
</property>
<property>
<name>hive.exec.local.scratchdir</name>
<value>/usr/local/soft/hive-2.1.1/tmp</value>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/usr/local/soft/hive-2.1.1/tmp/${hive.session.id}_resources</value>
</property>
<property>
<name>hive.querylog.location</name>
<value>/user/hive/log</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://master:3306/hive?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8&amp;useSSL=false</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123456</value>
</property>
<property>
<name>hive.hwi.listen.port</name>
<value>9999</value>
</property>
<property>
<name>hive.cli.print.header</name>
<value>true</value>
<description>Whether to print the names of the columns in query output.</description>
</property>
<property>
<name>hive.cli.print.current.db</name>
<value>true</value>
<description>Whether to include the current database in the Hive prompt.</description>
</property>
</configuration>
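hive.exec.local.scratchdir and hive.downloaded.resources.dir above point to a local directory that may not exist yet; if it is not created automatically, create it on the local filesystem (an assumed extra step):
mkdir -p /usr/local/soft/hive-2.1.1/tmp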
Modify the core-site.xml file (in $HADOOP_HOME/etc/hadoop) and add the following properties:
<property>
<name>hadoop.proxyuser.hive.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hive.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.hosts</name>
<value>*</value>
</property>
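For the proxyuser settings to take effect, restart HDFS after editing core-site.xml (and copy the file to every node in a multi-node cluster); assuming the standard Hadoop 2.7.3 scripts:
stop-dfs.sh
start-dfs.sh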
Create a MySQL account for Hive and grant it privileges.
Users and privileges:
create user 'root'@'192.168.178.100' identified by '123456'; -- allow remote access (e.g. for a Navicat for MySQL connection)
GRANT ALL PRIVILEGES on *.* to 'root'@'%' identified by '123456'; -- allow remote access from any host
flush privileges; -- reload the grant tables
netstat -nat   # check whether port 3306 is listening, then test with Navicat
Run the following as the hadoop user:
sudo systemctl start mysqld.service   # start the MySQL service
mysql -uroot -p'123456'   # the string in single quotes is the initial password
CREATE USER 'hive' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' WITH GRANT OPTION;
create database hive;
flush privileges;
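To confirm that the hive account can reach the metastore database from the Hive node (assuming MySQL runs on master), you can try:
mysql -uhive -p'123456' -h master -e 'show databases;'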
Download the MySQL JDBC driver from https://dev.mysql.com/downloads/connector/j/
Copy mysql-connector-java-***-bin.jar into the lib folder under the Hive directory.
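For example (keep the version placeholder as-is and substitute the actual jar file name you downloaded):
cp mysql-connector-java-***-bin.jar /usr/local/soft/hive-2.1.1/lib/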
Go to the bin directory under the Hive directory and run:
./schematool -initSchema -dbType mysql
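If schema initialization succeeds, the metastore tables should now exist in the hive database; a quick check (assuming the hive account created above):
mysql -uhive -p'123456' -e 'use hive; show tables;'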
Type hive to launch the Hive CLI.
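A short smoke test once the CLI is up (test_tbl is just a hypothetical table name):
hive> show databases;
hive> create table test_tbl(id int);
hive> show tables;
hive> drop table test_tbl;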

This completes the Hive installation.