HBase installation and basic usage

Installing HBase

1. Standalone
2. Pseudo-Distributed Local Install
3. Advanced - Fully Distributed
gp181602 192.168.137.112 zk hregionserver hmaster
gp181603 192.168.137.113 zk hregionserver hmaster_backup
gp181604 192.168.137.114 zk hregionserver hmaster_backup

1. Upload and extract the archive, then configure the environment variables

2. Edit the HBase configuration files
vi hbase-env.sh

Set JAVA_HOME (see the example below), and since an external ZooKeeper ensemble is used, keep HBase from managing its own:

# Tell HBase whether it should manage it's own instance of Zookeeper or not.
export HBASE_MANAGES_ZK=false
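For example, JAVA_HOME might be set like this (the path is an assumption; point it at the JDK actually installed on every node):

export JAVA_HOME=/usr/local/jdk1.8.0_151   # assumed path, adjust to the local JDK install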

Note: if HBase fails to start, disable the settings below (as the comment says, they are only needed on JDK 7):

# Configure PermSize. Only needed in JDK7. You can safely remove it for JDK8+
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"

vi regionservers

vi backup-masters
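With the host plan at the top, these two files might contain the following (an assumption based on that plan; regionservers lists every regionserver host, backup-masters lists the standby masters):

regionservers:
gp181602
gp181603
gp181604

backup-masters:
gp181603
gp181604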

vi hbase-site.xml    (the <property> blocks below go inside its <configuration> element)

<!-- HBase root directory in HDFS -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://node1/hbase</value>
</property>

<!-- Whether HBase runs as a distributed cluster -->
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>

<!-- Address list of the ZooKeeper quorum -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>node2:2181,node3:2181,node4:2181</value>
</property>

<!-- Port of the HBase master web UI -->
<property>
  <name>hbase.master.info.port</name>
  <value>60010</value>
</property>

Note:
If HDFS is an HA (high-availability) cluster, copy the two Hadoop configuration files hdfs-site.xml and core-site.xml into $HBASE_HOME/conf so HBase can resolve the HDFS nameservice; hbase.rootdir should then point at the nameservice name rather than a single NameNode host.

3. Distribute the installation package to the other nodes

scp -r hbase-1.2.1 root@gp181603:$PWD
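The package also needs to reach the remaining node from the host plan above, e.g.:

scp -r hbase-1.2.1 root@gp181604:$PWD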

4. Start the services in this order (a command sketch follows):
1. Start ZooKeeper first
2. Start Hadoop (HDFS)
3. Start HBase
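A sketch of the corresponding commands, assuming a standalone ZooKeeper managed with zkServer.sh on each zk node and the standard Hadoop/HBase scripts on the PATH:

# 1. on every ZooKeeper node (gp181602 - gp181604)
zkServer.sh start
# 2. on the HDFS NameNode
start-dfs.sh
# 3. on the node that should become the active HMaster
start-hbase.sh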

Ports:
HBase web UI: 60010 (hbase.master.info.port, as configured above)

HMaster info port (default): 16010
HRegionServer info port (default): 16030

Time synchronization: keep the clocks of all nodes in sync (e.g. with NTP); HBase rejects region servers whose clock skew is too large.

The HBase shell

hbase shell

Help:

help
help "COMMAND"
help "COMMAND_GROUP"

HBase has no notion of a database, but it does have namespaces (groups); a namespace is roughly the equivalent of a database.
HBase ships with two namespaces by default:
default
hbase

Basic HBase operations and usage

namespace:

list
list_namespace
list_namespace_tables 'ns1'
create_namespace 'ns1'
describe_namespace 'ns1'
alter_namespace 'ns1', {METHOD => 'set', 'NAME' => 'gaoyuanyuan'}
alter_namespace 'ns1', {METHOD => 'unset', NAME=>'NAME'}
drop_namespace 'ns1'

ddl:
Group name: ddl
Commands: alter, alter_async, alter_status, create,
describe, disable, disable_all, drop, drop_all, enable,
enable_all, exists, get_table, is_disabled, is_enabled,
list, locate_region, show_filters

create 'ns1:t1','f1'
describe 'ns1:t1'
create 'ns1:t2', {NAME => 'f1', VERSIONS => 3, TTL => 2592000, BLOCKCACHE => true,BLOOMFILTER => 'ROWCOL'}
alter 'ns1:t1', NAME => 'f1', VERSIONS => 5

Altering a table: an existing column family is updated, a missing one is added.

alter 'ns1:t1', {NAME => 'f1',IN_MEMORY => true}, {NAME => 'f2', IN_MEMORY => true}, {NAME => 'f3', VERSIONS => 5}

Deleting a column family:

alter 'ns1:t1', NAME => 'f1', METHOD => 'delete'
alter 'ns1:t1', 'delete' => 'f1'

Disable the table first, then drop it:

disable 'ns1:t2'
drop 'ns1:t2' 

Group name: dml

  Commands: append, count, delete, deleteall, get, get_counter, 
  get_splits, incr, put, scan, truncate, truncate_preserve

Inserting data: the shell executes one put at a time (there is no batch operation in the shell).

put 'ns1:t1','rk00001','f2:name','gaoyuanyuan'
put 'ns1:t1','rk00001','f2:age','18'
put 'ns1:t1','rk00001','f2:sex','2'
put 'ns1:t1','rk00002','f2:name','laochen'
put 'ns1:t1','rk00002','f2:age','15'
put 'ns1:t1','rk00002','f2:sex','1'

1. Rows are sorted lexicographically by rowkey.
2. Column families are sorted lexicographically.
3. Columns (qualifiers) within a family are sorted lexicographically.

get 'ns1:t1','rk00001'
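The scan command from the dml group above is not demonstrated; a few common shell variants against the sample data (a sketch, syntax as shown by help 'scan'):

scan 'ns1:t1'
scan 'ns1:t1', {COLUMNS => ['f2:name']}
scan 'ns1:t1', {STARTROW => 'rk00001', LIMIT => 1}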

Group name: tools
Commands: assign, balance_switch, balancer, balancer_enabled,
catalogjanitor_enabled, catalogjanitor_run, catalogjanitor_switch,
close_region, compact, compact_rs, flush, major_compact,
merge_region, move, normalize, normalizer_enabled, normalizer_switch,
split, trace, unassign, wal_roll, zk_dump
The HBase API

Basic API:
Apache HBase APIs
http://hbase.apache.org/book.html#_examples
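A minimal sketch of the basic client API against the table created above (assumptions: an HBase 1.x client on the classpath, the ZooKeeper quorum from hbase-site.xml, and a made-up class name, rowkey, and value):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class BasicHBaseExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "node2:2181,node3:2181,node4:2181");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("ns1:t1"))) {
            // write one cell into column family f2
            Put put = new Put(Bytes.toBytes("rk00003"));
            put.addColumn(Bytes.toBytes("f2"), Bytes.toBytes("name"), Bytes.toBytes("laowang"));
            table.put(put);
            // read the same row back
            Result result = table.get(new Get(Bytes.toBytes("rk00003")));
            System.out.println(Bytes.toString(
                    result.getValue(Bytes.toBytes("f2"), Bytes.toBytes("name"))));
        }
    }
}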
Advanced API: filters
68. Client Request Filters
http://hbase.apache.org/book.html#client.filter
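A sketch of one request filter, continuing from the connection and table set up in the previous example (SingleColumnValueFilter is one of the filters covered at that link; the value matched is just the sample data inserted earlier):

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

// keep only the rows whose f2:name cell equals "gaoyuanyuan"
Scan scan = new Scan();
scan.setFilter(new SingleColumnValueFilter(
        Bytes.toBytes("f2"), Bytes.toBytes("name"),
        CompareFilter.CompareOp.EQUAL, Bytes.toBytes("gaoyuanyuan")));
try (ResultScanner scanner = table.getScanner(scan)) {
    for (Result r : scanner) {
        System.out.println(Bytes.toString(r.getRow()));
    }
}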

Advanced HBase usage: paging through a table

The idea: scan three rows at a time; each pass starts just after the last rowkey already seen (the notes append "\001" to it), and the loop stops once a page comes back with fewer than three rows. A Java sketch of that loop (assumption: table is an open org.apache.hadoop.hbase.client.Table; PageFilter and Bytes come from org.apache.hadoop.hbase.filter and .util):

byte[] lastRow = null;
while (true) {
    Scan scan = new Scan();
    scan.setFilter(new PageFilter(3));                        // page size, the "limit 3"
    if (lastRow != null) {
        // start just after the previous page's last rowkey (oldRow + "\001")
        scan.setStartRow(Bytes.add(lastRow, Bytes.toBytes("\001")));
    }
    int count = 0;
    try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result r : scanner) {
            System.out.println(r);                            // the println
            lastRow = r.getRow();
            count++;
        }
    }
    if (count < 3) {
        break;                                                // last, partial page
    }
}