Installing apache-hive-2.3.0
1: Install Hive
Use Xftp to upload apache-hive-2.3.0-bin.tar.gz to /usr/tmp on the master node.
Extract it:
tar -zxvf apache-hive-2.3.0-bin.tar.gz
Add Hive to the environment variables:
vi /etc/profile
Append the following:
#hive
export HIVE_HOME=/usr/tmp/apache-hive-2.3.0-bin
export PATH=$PATH:$HIVE_HOME/bin
Save the file, then make it take effect:
source /etc/profile
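To confirm the variables were picked up (an optional check; both commands simply read back what was configured above):
echo $HIVE_HOME
hive --version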
2: Configure Hive
Create the warehouse directory /user/hive/warehouse in HDFS:
hdfs dfs -mkdir /tmp
hdfs dfs -mkdir /user
hdfs dfs -mkdir /user/hive
hdfs dfs -mkdir /user/hive/warehouse
(These folders can also be created in Eclipse by right-clicking in the HDFS view.)
Then run on the command line:
# hadoop fs -chmod g+w /tmp
# hadoop fs -chmod g+w /user/hive/warehouse
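To verify the directories and their group-write permissions (a quick sanity check, not strictly required):
hadoop fs -ls /user/hive
hadoop fs -ls -d /tmp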
Copy the MySQL JDBC driver jar mysql-connector-java-5.1.39-bin.jar into Hive's lib directory.
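For example, assuming the jar was also uploaded to /usr/tmp (adjust the source path to wherever the jar actually sits):
cp /usr/tmp/mysql-connector-java-5.1.39-bin.jar /usr/tmp/apache-hive-2.3.0-bin/lib/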
Go into Hive's conf directory and make a copy of hive-default.xml.template named hive-site.xml:
# cp hive-default.xml.template hive-site.xml
Then edit the following properties in hive-site.xml:
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://127.0.0.1:3306/hive?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>Username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>sa</value>
  <description>Password to use against metastore database</description>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/usr/tmp/apache-hive-2.3.0-bin/tmp</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/usr/tmp/apache-hive-2.3.0-bin/tmp/resources</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/usr/tmp/apache-hive-2.3.0-bin/tmp</value>
  <description>Location of Hive run time structured log file</description>
</property>
<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>/usr/tmp/apache-hive-2.3.0-bin/tmp/operation_logs</value>
  <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
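The local scratch directories referenced above do not exist yet. Assuming the paths used in the properties, they can be created with:
mkdir -p /usr/tmp/apache-hive-2.3.0-bin/tmp/resources
mkdir -p /usr/tmp/apache-hive-2.3.0-bin/tmp/operation_logs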
Then initialize the metastore schema (MySQL must already be running and reachable at 127.0.0.1:3306 with the username and password configured above). In the apache-hive-2.3.0-bin directory run:
# schematool -initSchema -dbType mysql
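If initialization succeeds, the schema recorded in MySQL can be inspected with schematool's -info option (an optional check; it reads the connection settings from hive-site.xml):
# schematool -info -dbType mysql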
Finally, still in the apache-hive-2.3.0-bin directory, run: hive
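Once the Hive CLI comes up, a small smoke test confirms the metastore and the HDFS warehouse directory are wired up correctly (the table name below is just an example):
hive> show databases;
hive> create table test_t (id int, name string);
hive> show tables;
hive> drop table test_t;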