Installing and Configuring Hive 2.1.1 on Hadoop 2.7.3

This article walks through installing Hive 2.1.1 on Ubuntu 16.04, covering the key steps of extracting the archive, editing the configuration files, adding the MySQL JDBC driver, and setting environment variables, with concrete examples of hive-site.xml and hive-env.sh.


Environment: Ubuntu 16.04; JDK 1.8.0_111; Apache Hadoop 2.7.3; Apache Hive 2.1.1.

This post only covers the Hive installation. Hive needs to be installed on a single node; here it is installed on the NameNode.

First, download the release you need from the official site. I downloaded apache-hive-2.1.1-bin.tar.gz and placed it in my home directory.

(1) Extract the archive:

         $ tar -zxvf apache-hive-2.1.1-bin.tar.gz

(2) Enter the conf directory:

         $ cd apache-hive-2.1.1-bin/conf

         $ ls

         You should see files like the following:

beeline-log4j2.properties.template  hive-exec-log4j2.properties.template  llap-cli-log4j2.properties.template
hive-default.xml.template           hive-log4j2.properties.template       llap-daemon-log4j2.properties.template
hive-env.sh.template                ivysettings.xml                       parquet-logging.properties

              Then, still in the conf directory, run these commands:

             $ cp hive-default.xml.template hive-default.xml

             $ cp hive-env.sh.template hive-env.sh

             $ cp hive-default.xml hive-site.xml

(3) Add the MySQL driver:

         Download the mysql-connector-java-x.y.z-bin.jar file and place it in the apache-hive-2.1.1-bin/lib directory.

(4) Move the installation and set environment variables:

          $ sudo mv apache-hive-2.1.1-bin /usr/local/

          $ sudo vim /etc/profile

          Add HIVE_HOME there, then reload the file:

          $ source /etc/profile
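The profile additions might look like the following (a sketch assuming Hive was moved to /usr/local as above; adjust the path to match your layout):

```shell
# Hypothetical /etc/profile additions for Hive
export HIVE_HOME=/usr/local/apache-hive-2.1.1-bin
export PATH=$PATH:$HIVE_HOME/bin
```

With $HIVE_HOME/bin on the PATH, later commands such as hive and schematool can be run without their full path.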

(5) Edit hive-site.xml and hive-env.sh

         Replace the contents of hive-site.xml with the following:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>hive.querylog.location</name>
        <value>/user/hivetmp</value>
        <description>Location of Hive run time structured log file</description>
    </property>
    <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive</value>
    </property>
    <property>  
        <name>javax.jdo.option.ConnectionURL</name>  
        <value>jdbc:mysql://192.168.244.3:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false&amp;useUnicode=true&amp;characterEncoding=UTF-8</value>  
        <description>JDBC connect string for a JDBC metastore</description>      
    </property>     
    <property>   
        <name>javax.jdo.option.ConnectionDriverName</name>   
        <value>com.mysql.jdbc.Driver</value>   
        <description>Driver class name for a JDBC metastore</description>       
    </property>                 
  
    <property>   
        <name>javax.jdo.option.ConnectionUserName</name>  
        <value>hive</value>  
        <description>username to use against metastore database</description>  
    </property>  
    <property>    
        <name>javax.jdo.option.ConnectionPassword</name>  
        <value>123456</value>  
        <description>password to use against metastore database</description>    
    </property> 
    
    <property>  
        <name>hive.metastore.uris</name>  
        <value>thrift://192.168.244.3:9083</value>      
    </property>
    <property>
        <name>hive.metastore.local</name>
        <value>false</value>
    </property>
    <property>
        <name>hive.cli.print.header</name>
        <value>true</value>
    </property>


</configuration>

          Then edit hive-env.sh so that it looks like this:

# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Set Hive and Hadoop environment variables here. These variables can be used
# to control the execution of Hive. It should be used by admins to configure
# the Hive installation (so that users do not have to set environment variables
# or set command line parameters to get correct behavior).
#
# The hive service being invoked (CLI/HWI etc.) is available via the environment
# variable SERVICE


# Hive Client memory usage can be an issue if a large number of clients
# are running at the same time. The flags below have been useful in 
# reducing memory usage:
#
# if [ "$SERVICE" = "cli" ]; then
#   if [ -z "$DEBUG" ]; then
#     export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=12 -Xms10m -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:+UseParNewGC -XX:-UseGCOverheadLimit"
#   else
#     export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=12 -Xms10m -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:-UseGCOverheadLimit"
#   fi
# fi

# The heap size of the jvm stared by hive shell script can be controlled via:
#
# export HADOOP_HEAPSIZE=1024
 export HADOOP_HEAPSIZE=1024
#
# Larger heap size may be required when running queries over large number of files or partitions. 
# By default hive shell scripts use a heap size of 256 (MB).  Larger heap size would also be 
# appropriate for hive server (hwi etc).


# Set HADOOP_HOME to point to a specific hadoop install directory
# HADOOP_HOME=${bin}/../../hadoop
 HADOOP_HOME=/usr/local/hadoop  # set this to your own Hadoop path
# Hive Configuration Directory can be controlled by:
# export HIVE_CONF_DIR=
 export HIVE_CONF_DIR=/usr/local/apache-hive-2.1.1-bin/conf
# Folder containing extra ibraries required for hive compilation/execution can be controlled by:
# export HIVE_AUX_JARS_PATH=
export HIVE_AUX_JARS_PATH=/usr/local/apache-hive-2.1.1-bin/lib


 (6) Create a hive user in MySQL and grant it sufficient privileges

       $ mysql -u root -p

       mysql> create user 'hive' identified by '123456';
       Query OK, 0 rows affected (0.00 sec)

       mysql> grant all privileges on *.* to 'hive' with grant option;
       Query OK, 0 rows affected (0.00 sec)

       mysql> flush privileges;
       Query OK, 0 rows affected (0.01 sec)

    (7) Initialize the metastore database

        $ schematool -initSchema -dbType mysql

       A "completed" message means the metastore schema was initialized successfully.

     (8) Change the log location

       Hive logs are split into system logs and job logs. The job-log location was already set via the hive.querylog.location property in hive-site.xml, so only the system log still needs configuring.

        In the conf directory there is a hive-log4j2.properties.template file; make a copy of it:

        $ cp hive-log4j2.properties.template hive-log4j2.properties

        Then, in hive-log4j2.properties, set

        property.hive.log.dir = <the path you want>
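For example (a sketch; the log directory shown is purely illustrative, pick any directory writable by the user running Hive):

```properties
# hive-log4j2.properties (excerpt) -- illustrative values
property.hive.log.dir = /usr/local/apache-hive-2.1.1-bin/logs
property.hive.log.file = hive.log
```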

        

Once the configuration above is complete, start the metastore service on this node:

$ hive --service metastore &

If the terminal appears to hang, press Enter; the jobs command will show whether the service started successfully.

Once it is running, you can execute the hive command.

