Hive 3.1.2 Installation

Download the installation package

https://pan.baidu.com/s/17qYstZwDRV5tjkysCfeEZw

Extraction code: ue1l

Extract it to a directory of your choice, e.g. /opt/bigdata/hive-3.1.2.

Configure the environment variables (e.g. append the following to /etc/profile, then run source /etc/profile):

export HIVE_HOME=/opt/bigdata/hive-3.1.2
export PATH=$PATH:$HIVE_HOME/bin

Edit the configuration file hive-site.xml

Hive's default configuration file is hive-default.xml.template, but it contains some errors, so create a new hive-site.xml file instead, with the following contents:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
         <value>jdbc:mysql://xxx.xxx.xxx.xxx:3306/hive?createDatabaseIfNotExist=true&amp;useUnicode=true&amp;characterEncoding=UTF-8&amp;useSSL=false&amp;allowPublicKeyRetrieval=true</value>
         <description>Automatically create the metastore database if it does not exist</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hive</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>hive</value>
    </property>
    <property>
        <name>datanucleus.readOnlyDatastore</name>
        <value>false</value>
    </property>
    <property>
        <name>datanucleus.fixedDatastore</name>
        <value>false</value>
    </property>
    <property>
        <name>datanucleus.autoCreateSchema</name>
        <value>true</value>
    </property>
    <property>
        <name>datanucleus.schema.autoCreateAll</name>
        <value>true</value>
    </property>
    <property>
        <name>datanucleus.autoCreateTables</name>
        <value>true</value>
    </property>
    <property>
        <name>datanucleus.autoCreateColumns</name>
        <value>true</value>
    </property>
    <!-- hive.metastore.local was removed in Hive 3.x; it is ignored here (note the
         warnings in the startup log below) -->
    <property>
        <name>hive.metastore.local</name>
        <value>false</value>
    </property>
    <!-- Show column headers in query results -->
    <property>
        <name>hive.cli.print.header</name>
        <value>true</value>
    </property>
    <!-- Show the current database name in the CLI prompt -->
    <property>
        <name>hive.cli.print.current.db</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.exec.local.scratchdir</name>
        <value>/opt/data/hive/tmp</value>
        <description>Local scratch space for Hive jobs</description>
    </property>
    <property>
        <name>hive.downloaded.resources.dir</name>
        <value>/opt/data/hive/tmp/${hive.session.id}_resources</value>
        <description>Temporary local directory for added resources in the remote file system.</description>
    </property>
    <property>
        <name>hive.querylog.location</name>
        <value>/opt/data/hive/tmp</value>
        <description>Location of Hive run time structured log file</description>
    </property>
    <property>
        <name>hive.server2.logging.operation.log.location</name>
        <value>/opt/data/hive/tmp/operation_logs</value>
        <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
    </property>
    <!-- Hive metastore URI. Once this is set, the metastore service must be started
         (see the startup command below) before launching the Hive client; otherwise
         the client reports that the metastore cannot be found. -->
    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://flink1:9083</value>
    </property>
</configuration>

Create directories on HDFS

Create the two directories /user/hive/warehouse and /tmp/hive on HDFS (the default locations in Hive's configuration) and grant read/write permissions:

hdfs dfs -mkdir -p /tmp/hive
hdfs dfs -mkdir -p /user/hive/warehouse
hdfs dfs -chmod 777 /tmp/hive
hdfs dfs -chmod 777 /user/hive/warehouse

Create the local directory for Hive temporary files

Create the local directory /opt/data/hive/tmp and grant read/write permissions.
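A small helper can create that directory tree in one step; the path matches the hive-site.xml shown earlier, and the operation_logs subdirectory matches hive.server2.logging.operation.log.location (a sketch, not the only possible layout):

```shell
# Create a local scratch directory tree for Hive and open up permissions.
make_hive_tmp() {
  base=$1
  mkdir -p "$base/operation_logs"  # also covers hive.server2.logging.operation.log.location
  chmod -R 777 "$base"
}
# Usage (path from this guide):
# make_hive_tmp /opt/data/hive/tmp
```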

Replace the lower-version guava

Compare the guava versions under Hadoop (share/hadoop/common/lib) and Hive (lib), and replace the lower version with the higher one; otherwise Hive fails on startup with an error.
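A minimal sketch of that swap, assuming the typical pairing for these versions (Hive 3.1.2 bundles an older guava, Hadoop 3.1.3 a newer one; check your own lib directories first):

```shell
# Replace Hive's older guava jar with Hadoop's newer one, keeping a backup.
replace_guava() {
  hadoop_lib=$1
  hive_lib=$2
  old_jar=$(ls "$hive_lib"/guava-*.jar | head -n 1)    # Hive's bundled guava
  new_jar=$(ls "$hadoop_lib"/guava-*.jar | head -n 1)  # Hadoop's bundled guava
  mv "$old_jar" "$old_jar.bak"  # back it up rather than delete it
  cp "$new_jar" "$hive_lib"/
}
# Usage (paths from this guide):
# replace_guava /opt/bigdata/hadoop-3.1.3/share/hadoop/common/lib /opt/bigdata/hive-3.1.2/lib
```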

Put the MySQL connector into hive/lib

Download the MySQL connector, extract it, and copy mysql-connector-java-5.1.48-bin.jar into hive/lib:

wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.48.tar.gz
tar -zxf mysql-connector-java-5.1.48.tar.gz
cp mysql-connector-java-5.1.48/mysql-connector-java-5.1.48-bin.jar /opt/bigdata/hive-3.1.2/lib/

Create a hive user in MySQL

Log in to MySQL as root (which has GRANT privileges) and create the hive user:


create user 'hive'@'%' identified by 'hive';  -- create the hive user with password 'hive'
grant all privileges on *.* to 'hive'@'%' with grant option;  -- grant the hive user all privileges on every database and table, allow connections from any host ('%'), and let it grant privileges to others

Edit the configuration file hive-env.sh (created by copying conf/hive-env.sh.template):

export HADOOP_HOME=/opt/bigdata/hadoop-3.1.3
export HIVE_CONF_DIR=/opt/bigdata/hive-3.1.2/conf
export HIVE_AUX_JARS_PATH=/opt/bigdata/hive-3.1.2/lib

Initialize the metastore database

In hive/bin, run:

schematool -initSchema -dbType mysql

Start the Hive metastore service

nohup ./bin/hive --service metastore >> /var/log.log 2>&1 &
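To confirm the service came up, you can probe the Thrift port from the shell; this is a bash-only sketch using the /dev/tcp pseudo-device, with host and port taken from hive.metastore.uris above:

```shell
# Return success if a TCP connection to host:port can be opened (bash's /dev/tcp).
port_open() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}
# Usage (host/port from hive.metastore.uris):
# port_open flink1 9083 && echo "metastore is listening"
```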

Start the Hive client

[root@flink1 conf]# hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/bigdata/hive-3.1.2/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/bigdata/hadoop-3.1.3/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
which: no hbase in (/opt/bigdata/hadoop-3.1.3/sbin:/opt/bigdata/hadoop-3.1.3/bin:/opt/bigdata/hadoop-3.1.3/sbin:/opt/bigdata/hadoop-3.1.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/bigdata/jdk1.8.0_181/bin:/opt/bigdata/zookeeper-3.4.8/bin:/opt/bigdata/spark-3.0.0-bin-hadoop3.2/bin:/root/bin:/opt/bigdata/jdk1.8.0_181/bin:/opt/bigdata/zookeeper-3.4.8/bin:/opt/bigdata/spark-3.0.0-bin-hadoop3.2/bin:/opt/bigdata/hive-3.1.2/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/bigdata/hadoop-3.1.3/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/bigdata/hive-3.1.2/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2020-11-26 23:41:53,496 INFO  [main] conf.HiveConf (HiveConf.java:findConfigFile(187)) - Found configuration file file:/opt/bigdata/hive-3.1.2/conf/hive-site.xml
2020-11-26 23:41:55,244 WARN  [main] conf.HiveConf (HiveConf.java:initialize(5220)) - HiveConf of name hive.metastore.local does not exist
2020-11-26 23:41:58,708 WARN  [main] conf.HiveConf (HiveConf.java:initialize(5220)) - HiveConf of name hive.metastore.local does not exist
Hive Session ID = 10f43ff5-0c8b-4215-bd8c-9d889d218831
2020-11-26 23:41:58,741 INFO  [main] SessionState (SessionState.java:printInfo(1227)) - Hive Session ID = 10f43ff5-0c8b-4215-bd8c-9d889d218831

Logging initialized using configuration in file:/opt/bigdata/hive-3.1.2/conf/hive-log4j2.properties Async: true
2020-11-26 23:41:58,866 INFO  [main] SessionState (SessionState.java:printInfo(1227)) -
Logging initialized using configuration in file:/opt/bigdata/hive-3.1.2/conf/hive-log4j2.properties Async: true
2020-11-26 23:42:00,654 INFO  [main] session.SessionState (SessionState.java:createPath(790)) - Created HDFS directory: /tmp/hive/root/10f43ff5-0c8b-4215-bd8c-9d889d218831
2020-11-26 23:42:00,715 INFO  [main] session.SessionState (SessionState.java:createPath(790)) - Created local directory: /opt/data/hive/tmp/10f43ff5-0c8b-4215-bd8c-9d889d218831
2020-11-26 23:42:00,736 INFO  [main] session.SessionState (SessionState.java:createPath(790)) - Created HDFS directory: /tmp/hive/root/10f43ff5-0c8b-4215-bd8c-9d889d218831/_tmp_space.db
2020-11-26 23:42:00,781 INFO  [main] conf.HiveConf (HiveConf.java:getLogIdVar(5040)) - Using the default value passed in for log id: 10f43ff5-0c8b-4215-bd8c-9d889d218831
2020-11-26 23:42:00,781 INFO  [main] session.SessionState (SessionState.java:updateThreadName(441)) - Updating thread name to 10f43ff5-0c8b-4215-bd8c-9d889d218831 main
2020-11-26 23:42:00,968 WARN  [10f43ff5-0c8b-4215-bd8c-9d889d218831 main] conf.HiveConf (HiveConf.java:initialize(5220)) - HiveConf of name hive.metastore.local does not exist
2020-11-26 23:42:02,829 INFO  [10f43ff5-0c8b-4215-bd8c-9d889d218831 main] metastore.HiveMetaStoreClient (HiveMetaStoreClient.java:open(441)) - Trying to connect to metastore with URI thrift://flink1:9083
2020-11-26 23:42:02,973 INFO  [10f43ff5-0c8b-4215-bd8c-9d889d218831 main] metastore.HiveMetaStoreClient (HiveMetaStoreClient.java:open(517)) - Opened a connection to metastore, current connections: 1
2020-11-26 23:42:03,030 INFO  [10f43ff5-0c8b-4215-bd8c-9d889d218831 main] metastore.HiveMetaStoreClient (HiveMetaStoreClient.java:open(570)) - Connected to metastore.
2020-11-26 23:42:03,031 INFO  [10f43ff5-0c8b-4215-bd8c-9d889d218831 main] metastore.RetryingMetaStoreClient (RetryingMetaStoreClient.java:<init>(97)) - RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=root (auth:SIMPLE) retries=1 delay=1 lifetime=0
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
2020-11-26 23:42:03,413 INFO  [10f43ff5-0c8b-4215-bd8c-9d889d218831 main] CliDriver (SessionState.java:printInfo(1227)) - Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Hive Session ID = 3acd9a32-ae46-45b2-844f-47949a99aa77
2020-11-26 23:42:03,424 INFO  [pool-7-thread-1] SessionState (SessionState.java:printInfo(1227)) - Hive Session ID = 3acd9a32-ae46-45b2-844f-47949a99aa77
2020-11-26 23:42:03,489 INFO  [pool-7-thread-1] session.SessionState (SessionState.java:createPath(790)) - Created HDFS directory: /tmp/hive/root/3acd9a32-ae46-45b2-844f-47949a99aa77
2020-11-26 23:42:03,494 INFO  [pool-7-thread-1] session.SessionState (SessionState.java:createPath(790)) - Created local directory: /opt/data/hive/tmp/3acd9a32-ae46-45b2-844f-47949a99aa77
2020-11-26 23:42:03,525 INFO  [pool-7-thread-1] session.SessionState (SessionState.java:createPath(790)) - Created HDFS directory: /tmp/hive/root/3acd9a32-ae46-45b2-844f-47949a99aa77/_tmp_space.db
2020-11-26 23:42:03,754 INFO  [pool-7-thread-1] metadata.HiveMaterializedViewsRegistry (HiveMaterializedViewsRegistry.java:run(171)) - Materialized views registry has been initialized
hive (default)>

In the log you can see the line "Trying to connect to metastore with URI thrift://flink1:9083", and the connection succeeded.

If the metastore service is not running, the Hive client fails with an error such as:

2020-11-26 23:37:56,443 INFO  [13cd42e7-50e1-4218-88e3-fdd153057089 main] metastore.HiveMetaStoreClient (HiveMetaStoreClient.java:open(441)) - Trying to connect to metastore with URI thrift://flink1:9083
2020-11-26 23:37:56,445 WARN  [13cd42e7-50e1-4218-88e3-fdd153057089 main] metastore.HiveMetaStoreClient (HiveMetaStoreClient.java:open(526)) - Failed to connect to the MetaStore Server...

Test creating a database

hive (default)> create database test;
