Prerequisites: the Hadoop cluster has been set up and started, and MySQL is already installed.
1. Extract Hive
tar -zvxf apache-hive-0.14.0-bin.tar.gz -C /usr/local/
cd /usr/local
mv apache-hive-0.14.0-bin/ hive
2. Create the configuration files from the templates (run in $HIVE_HOME/conf)
cp hive-env.sh.template hive-env.sh
cp hive-default.xml.template hive-site.xml
3. Edit hive-env.sh
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export HIVE_HOME=/usr/local/hive
4. Edit hive-site.xml and set the following properties
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://hadoop0:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>root</value>
</property>
<property>
<name>hive.querylog.location</name>
<value>/usr/local/hive/tmp</value>
</property>
<property>
<name>hive.exec.local.scratchdir</name>
<value>/usr/local/hive/tmp</value>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/usr/local/hive/tmp</value>
</property>
若元数据库不存在 自动创建
<!--auto create-->
<property>
<name>datanucleus.readOnlyDatastore</name>
<value>false</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>false</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoCreateTables</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoCreateColumns</name>
<value>true</value>
</property>
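The local log/scratch directory referenced by the properties above is not created by Hive in all cases; a minimal sketch, assuming the /usr/local/hive/tmp path used in this configuration:
mkdir -p /usr/local/hive/tmp    # directory for hive.querylog.location / hive.exec.local.scratchdir / hive.downloaded.resources.dir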
5. Copy the MySQL JDBC driver into $HIVE_HOME/lib
cp mysql-connector-java-5.1.17.jar $HIVE_HOME/lib/
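Hive will connect to MySQL on hadoop0 as root/root (per the JDBC settings above), so that account must be allowed to reach the metastore database remotely. A hedged sketch for MySQL 5.x; the hive.* grant target and the '%' host pattern are assumptions, adjust them to your environment:
mysql -uroot -proot -e "GRANT ALL PRIVILEGES ON hive.* TO 'root'@'%' IDENTIFIED BY 'root'; FLUSH PRIVILEGES;"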
6. Start Hive and enter the Hive CLI
[root@hadoop0 bin]# cd $HIVE_HOME/bin
[root@hadoop0 bin]# ./hive
or
[root@hadoop0 bin]# ./hive --service cli
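To confirm that the CLI can actually reach the MySQL metastore, a quick smoke test (the table name hive_smoke_test is just an example):
[root@hadoop0 bin]# ./hive -e "show databases; create table hive_smoke_test(id int); show tables; drop table hive_smoke_test;"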
7. Expose the Hive metastore service to external clients
Add the following property to hive-site.xml:
<property>
<name>hive.metastore.uris</name>
<value>thrift://hadoop0:9083</value>
<description>Thrift URI for the remote metastore. ...</description>
</property>
Start the metastore service
hive --service metastore &
Start the HiveServer2 service
hive --service hiveserver2 &
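Once both services are up, you can check that they are listening on their default ports (9083 for the metastore, 10000 for HiveServer2) and connect through Beeline; the -n root login below is an assumption matching this root-based setup:
netstat -nltp | grep -E '9083|10000'
$HIVE_HOME/bin/beeline -u jdbc:hive2://hadoop0:10000 -n root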