Configuring Hive and HBase integration
A: Set the hive.aux.jars.path property, adding the jar files required for the Hive/HBase integration. Note that each path must use the file:// scheme to point at a local file; otherwise Hive will look for the path on HDFS.
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///home/hadoop/hive-2.3.3/lib/hbase-server-1.1.1.jar,file:///home/hadoop/hive-2.3.3/lib/hbase-client-1.1.1.jar,file:///home/hadoop/hive-2.3.3/lib/hive-hbase-handler-2.2.0.jar,file:///home/hadoop/hive-2.3.3/lib/zookeeper-3.4.6.jar</value>
</property>
B: Set the hive.zookeeper.quorum property; when Hive and HBase are integrated, ZooKeeper coordinates the services.
<property>
<name>hive.zookeeper.quorum</name>
<value>ping1,ping2,ping4</value>
</property>
C: Add the hbase.zookeeper.quorum property. The HBase deployment being integrated here is a cluster, so Hive must reach it through its ZooKeeper quorum.
<property>
<name>hbase.zookeeper.quorum</name>
<value>ping1,ping2,ping4</value>
</property>
D: Edit the Hive environment file, hive-env.sh, adding:
export HADOOP_HOME=/home/hadoop/hadoop-2.9.0
export HBASE_HOME=/home/hadoop/hbase-1.2.6
export HIVE_CONF_DIR=/home/hadoop/hive-2.3.3/conf
# export HIVE_AUX_JARS_PATH=?
Experiment 1: Create a Hive table mapped to an HBase table, then insert data through the Hive table so that the rows land in the corresponding HBase table.
Step 1: Start the Hive client
$> hive
Once in the Hive shell, create the table mapped to HBase:
hive> CREATE TABLE table1(cid int, cname string)
      STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
      WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:cname")
      TBLPROPERTIES ("hbase.table.name" = "mytable1", "hbase.mapred.output.outputtable" = "mytable1");
STORED BY selects the Hive-HBase integration handler that stores the table's data in HBase.
WITH SERDEPROPERTIES declares the column mapping between the Hive table and the HBase table (row key and column-family:qualifier pairs).
TBLPROPERTIES names the corresponding HBase table; hbase.mapred.output.outputtable is the table that MapReduce jobs write their output to.
Select two columns from the existing Hive table emp and insert them into mytable1 in HBase:
INSERT OVERWRITE TABLE table1 SELECT eno, ename FROM emp;
Step 2: Start the HBase shell and verify that the rows were inserted into the corresponding table.
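As a sketch, the check from the HBase shell could look like the following (the rows returned depend on the contents of emp, so no output is shown here):

```shell
$> hbase shell
hbase> scan 'mytable1'
```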
- Use an HQL statement to create a Hive table that points to HBase
CREATE TABLE hbase_table_1(key int, value string)                -- Hive table name: hbase_table_1
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'     -- the storage handler
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")  -- row key and column-family:qualifier mapping
TBLPROPERTIES ("hbase.table.name" = "xyz",
               "hbase.mapred.output.outputtable" = "xyz");
-- hbase.table.name declares the HBase table name; it is optional and defaults to the
-- Hive table name. hbase.mapred.output.outputtable names the table that inserts write
-- to, and must be set if you intend to insert data into the table later.
- The properties of the newly created HBase table can be inspected from the HBase shell
$ hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Version: 0.20.3, r902334, Mon Jan 25 13:13:08 PST 2010
hbase(main):001:0> list
xyz
1 row(s) in 0.0530 seconds
hbase(main):002:0> describe "xyz"
DESCRIPTION                                                            ENABLED
 {NAME => 'xyz', FAMILIES => [{NAME => 'cf1', COMPRESSION => 'NONE', VE true
 RSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY =>
 'false', BLOCKCACHE => 'true'}]}
1 row(s) in 0.0220 seconds
hbase(main):003:0> scan "xyz"
ROW   COLUMN+CELL
0 row(s) in 0.0060 seconds
- Insert data into the HBase table with HQL
INSERT OVERWRITE TABLE hbase_table_1 SELECT * FROM pokes WHERE foo=98;
- View the inserted data on the HBase side
hbase(main):009:0> scan "xyz"
ROW   COLUMN+CELL
 98   column=cf1:val, timestamp=1267737987733, value=val_98
1 row(s) in 0.0110 seconds
- Create a Hive table that points to an already-existing HBase table
CREATE EXTERNAL TABLE hbase_table_2(key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = "cf1:val")
TBLPROPERTIES ("hbase.table.name" = "some_existing_table",
               "hbase.mapred.output.outputtable" = "some_existing_table");
- If :key is not specified in the mapping when creating or mapping a table, the first column defaults to the row key
- A Hive table mapped to HBase has no notion of timestamps; by default queries return the latest version of each value
- Since HBase stores no data type information, values are converted to strings when stored
CREATE TABLE hbase_table_1(key int, value1 string, value2 int, value3 int)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,a:b,a:c,d:e");
INSERT OVERWRITE TABLE hbase_table_1 SELECT foo, bar, foo+1, foo+2 FROM pokes WHERE foo=98 OR foo=100;
- An entire column family can be mapped to a single Hive MAP column (cf: with no qualifier); each map entry becomes one column in the family
CREATE TABLE hbase_table_1(value map<string,int>, row_key int)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = "cf:,:key");
INSERT OVERWRITE TABLE hbase_table_1 SELECT map(bar, foo), foo FROM pokes WHERE foo=98 OR foo=100;
- View the data in HBase
hbase(main):012:0> scan "hbase_table_1"
ROW   COLUMN+CELL
 100  column=cf:val_100, timestamp=1267739509194, value=100
 98   column=cf:val_98, timestamp=1267739509194, value=98
2 row(s) in 0.0080 seconds
- View the data in Hive
hive> select * from hbase_table_1;
Total MapReduce jobs = 1
Launching Job 1 out of 1
...
OK
{"val_100":100}	100
{"val_98":98}	98
Time taken: 3.808 seconds
- The key of a MAP mapped to a column family must be of type string, because it is used as the HBase column qualifier, so the following definition fails:
CREATE TABLE hbase_table_1(key int, value map<int,int>)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:");
- Likewise, a whole-column-family mapping (cf:) must go to a Hive MAP type, so this definition also fails:
CREATE TABLE hbase_table_1(key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:");
- A composite row key can be mapped to a Hive struct; the separator between its fields is declared with COLLECTION ITEMS TERMINATED BY:
CREATE EXTERNAL TABLE delimited_example(key struct<f1:string, f2:string>, value string)
ROW FORMAT DELIMITED COLLECTION ITEMS TERMINATED BY '~'
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,f:c1');
- Pre-split the HBase table into regions to increase the parallelism of MapReduce jobs that read it
- Design the row key so that writes spread as evenly as possible across the pre-split regions
- Set a reasonable scanner cache via set hbase.client.scanner.caching
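As a sketch of the first and last tips (the split points, table name, and cache value are illustrative assumptions, not tuned recommendations):

```shell
# HBase shell: pre-split the table into four regions at creation time
echo "create 'mytable1', 'cf', SPLITS => ['10', '20', '30']" | hbase shell

# Hive: raise the scanner cache so each scan RPC fetches more rows
hive -e "set hbase.client.scanner.caching=1000; select count(*) from table1;"
```

Both commands require a running HBase/Hive cluster; the split points should reflect the actual row-key distribution.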