HBase Installation, Configuration, and Data Testing

1. Extract and Install

First, switch to the directory containing the software packages:

[hadoop@hadoop-master ~]$ cd /opt/software/
[hadoop@hadoop-master software]$ ll
总用量 3196904
-rw-rw-r--  1 hadoop hadoop  88977860 8月  21 10:56 apache-flume-1.10.1-bin.tar.gz
-rw-rw-r--  1 hadoop hadoop  12649765 8月  21 10:56 apache-zookeeper-3.7.1-bin.tar.gz
-rwxrwxr-x  1 hadoop hadoop      3924 8月  20 23:41 datax.sh
-rw-rw-r--  1 hadoop hadoop 853734462 10月 24 2022 datax.tar.gz
-rw-rw-r--. 1 hadoop hadoop 695457782 8月  19 18:09 hadoop-3.3.4.tar.gz
-rw-rw-r--  1 hadoop hadoop 322913840 8月  21 14:52 hbase-2.5.11-bin.tar.gz
-rw-rw-r--. 1 hadoop hadoop 354481098 8月  19 18:09 hive-3.1.3.tar.gz
-rw-rw-r--  1 hadoop hadoop  18522029 8月  20 16:35 hive-jdbc-uber-2.6.5.0-292.jar
-rwxrwxr-x. 1 hadoop hadoop      1345 8月  19 18:09 install_mysql.sh
-rw-rw-r--. 1 hadoop hadoop 195013152 8月  19 18:09 jdk-8u212-linux-x64.tar.gz
-rw-rw-r--  1 hadoop hadoop 105092106 8月  21 10:56 kafka_2.12-3.3.1.tgz
-rw-rw-r--. 1 hadoop hadoop  16916168 8月  19 18:09 mysql-community-client-8.0.31-1.el7.x86_64.rpm
-rw-rw-r--. 1 hadoop hadoop   2633904 8月  19 18:09 mysql-community-client-plugins-8.0.31-1.el7.x86_64.rpm
-rw-rw-r--. 1 hadoop hadoop    662344 8月  19 18:09 mysql-community-common-8.0.31-1.el7.x86_64.rpm
-rw-rw-r--. 1 hadoop hadoop   2218812 8月  19 18:09 mysql-community-icu-data-files-8.0.31-1.el7.x86_64.rpm
-rw-rw-r--. 1 hadoop hadoop   1582440 8月  19 18:09 mysql-community-libs-8.0.31-1.el7.x86_64.rpm
-rw-rw-r--. 1 hadoop hadoop    685968 8月  19 18:09 mysql-community-libs-compat-8.0.31-1.el7.x86_64.rpm
-rw-rw-r--. 1 hadoop hadoop  67166828 8月  19 18:09 mysql-community-server-8.0.31-1.el7.x86_64.rpm
-rw-rw-r--. 1 hadoop hadoop   2515447 8月  19 18:09 mysql-connector-j-8.0.31.jar
-rw-------  1 hadoop hadoop      3989 8月  19 20:51 nohup.out
-rw-rw-r--  1 hadoop hadoop 299350810 8月  19 21:33 spark-3.3.1-bin-hadoop3.tgz
-rw-rw-r--  1 hadoop hadoop  17953604 8月  19 22:14 sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz
-rw-rw-r--  1 hadoop hadoop   1152112 8月  20 19:40 sqoop-1.4.7.tar.gz
drwxrwxr-x  2 hadoop hadoop         6 8月  20 14:34 sqoop_bak
[hadoop@hadoop-master software]$ 

Run the extraction command; here I extract into the /opt/module directory:

[hadoop@hadoop-master software]$ tar -zxvf hbase-2.5.11-bin.tar.gz -C /opt/module/
hbase-2.5.11/lib/client-facing-thirdparty/log4j-core-2.17.2.jar
hbase-2.5.11/lib/client-facing-thirdparty/log4j-slf4j-impl-2.17.2.jar
hbase-2.5.11/lib/client-facing-thirdparty/log4j-1.2-api-2.17.2.jar
hbase-2.5.11/lib/client-facing-thirdparty/audience-annotations-0.13.0.jar
hbase-2.5.11/lib/zkcli/jline-2.11.jar
hbase-2.5.11/lib/trace/opentelemetry-javaagent-1.15.0.jar
hbase-2.5.11/lib/jdk11/javax.activation-1.2.0.jar
hbase-2.5.11/lib/jdk11/jakarta.activation-api-1.2.2.jar
hbase-2.5.11/lib/jdk11/jaxws-rt-2.3.7.jar
hbase-2.5.11/lib/jdk11/policy-2.7.10.jar
hbase-2.5.11/lib/jdk11/ha-api-3.1.13.jar
hbase-2.5.11/lib/jdk11/management-api-3.2.3.jar
hbase-2.5.11/lib/jdk11/stax-ex-1.8.3.jar
hbase-2.5.11/lib/jdk11/streambuffer-1.5.10.jar
hbase-2.5.11/lib/jdk11/mimepull-1.9.15.jar
hbase-2.5.11/lib/jdk11/FastInfoset-1.2.18.jar
hbase-2.5.11/lib/jdk11/saaj-impl-1.5.3.jar
hbase-2.5.11/lib/jdk11/jakarta.xml.ws-api-2.3.3.jar
hbase-2.5.11/lib/jdk11/jakarta.xml.bind-api-2.3.3.jar
hbase-2.5.11/lib/jdk11/jakarta.xml.soap-api-1.4.2.jar
hbase-2.5.11/lib/jdk11/jakarta.jws-api-2.1.0.jar
hbase-2.5.11/lib/jdk11/jakarta.annotation-api-1.3.5.jar
[hadoop@hadoop-master software]$ 
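Before extracting, it is good practice to verify the tarball against its published SHA-512 checksum. A minimal sketch of the pattern, demonstrated with a stand-in file since it must be runnable anywhere; for the real install you would run the check in /opt/software against the checksum file published on the Apache HBase download page:

```shell
# Pattern: verify a tarball against its SHA-512 checksum file before extracting.
# For HBase this would be:
#   cd /opt/software && sha512sum -c hbase-2.5.11-bin.tar.gz.sha512
# Demonstrated below with a generated stand-in file.
tmpdir=$(mktemp -d)
cd "$tmpdir"
echo "sample tarball contents" > hbase-sample.tar.gz
# Normally the .sha512 file is downloaded alongside the tarball.
sha512sum hbase-sample.tar.gz > hbase-sample.tar.gz.sha512
sha512sum -c hbase-sample.tar.gz.sha512 && echo "checksum OK"
```

If the checksum does not match, re-download the archive before continuing.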

Switch to the extraction directory and check it with ll:

[hadoop@hadoop-master software]$ cd /opt/module/
[hadoop@hadoop-master module]$ ll
总用量 0
drwxr-xr-x  11 hadoop hadoop 215 12月  8 2021 datax
drwxrwxr-x   7 hadoop hadoop 204 8月  21 12:53 flume
drwxr-xr-x  13 hadoop hadoop 249 8月  19 20:12 hadoop
drwxrwxr-x   7 hadoop hadoop 164 8月  21 15:44 hbase-2.5.11
drwxrwxr-x   9 hadoop hadoop 153 8月  19 20:23 hive
drwxr-xr-x.  7 hadoop hadoop 245 4月   2 2019 jdk1.8
drwxr-xr-x   9 hadoop hadoop 182 8月  21 15:04 kafka
drwxr-xr-x  15 hadoop hadoop 235 8月  19 21:41 spark
drwxr-xr-x   8 hadoop hadoop 278 8月  20 19:45 sqoop
drwxrwxr-x   8 hadoop hadoop 157 8月  21 11:03 zookeeper
[hadoop@hadoop-master module]$

Rename the folder to hbase for easier management, then verify the directory contents with ll:

[hadoop@hadoop-master module]$ mv hbase-2.5.11/ hbase
[hadoop@hadoop-master module]$ ll
总用量 0
drwxr-xr-x  11 hadoop hadoop 215 12月  8 2021 datax
drwxrwxr-x   7 hadoop hadoop 204 8月  21 12:53 flume
drwxr-xr-x  13 hadoop hadoop 249 8月  19 20:12 hadoop
drwxrwxr-x   7 hadoop hadoop 164 8月  21 15:44 hbase
drwxrwxr-x   9 hadoop hadoop 153 8月  19 20:23 hive
drwxr-xr-x.  7 hadoop hadoop 245 4月   2 2019 jdk1.8
drwxr-xr-x   9 hadoop hadoop 182 8月  21 15:04 kafka
drwxr-xr-x  15 hadoop hadoop 235 8月  19 21:41 spark
drwxr-xr-x   8 hadoop hadoop 278 8月  20 19:45 sqoop
drwxrwxr-x   8 hadoop hadoop 157 8月  21 11:03 zookeeper
[hadoop@hadoop-master module]$

Inspect the file structure under the hbase directory:

[hadoop@hadoop-master module]$ cd hbase/
[hadoop@hadoop-master hbase]$ ll
总用量 2516
drwxr-xr-x  4 hadoop hadoop    4096 1月  22 2020 bin
-rw-r--r--  1 hadoop hadoop 1134363 1月  22 2020 CHANGES.md
drwxr-xr-x  2 hadoop hadoop     210 1月  22 2020 conf
drwxr-xr-x 10 hadoop hadoop    4096 1月  22 2020 docs
drwxr-xr-x  8 hadoop hadoop      94 1月  22 2020 hbase-webapps
-rw-r--r--  1 hadoop hadoop     262 1月  22 2020 LEGAL
drwxrwxr-x  8 hadoop hadoop   16384 8月  21 15:44 lib
-rw-r--r--  1 hadoop hadoop  146244 1月  22 2020 LICENSE.txt
-rw-r--r--  1 hadoop hadoop  636773 1月  22 2020 NOTICE.txt
-rw-r--r--  1 hadoop hadoop  622322 1月  22 2020 RELEASENOTES.md
[hadoop@hadoop-master hbase]$ 

2. Add Environment Variables

Edit the following file and add the environment variables:

[hadoop@hadoop-master hbase]$ vim ~/.bashrc 
#hbase
export HBASE_HOME=/opt/module/hbase
export PATH=$HBASE_HOME/bin:$PATH
export JAVA_HOME=/opt/module/jdk1.8
export PATH=$JAVA_HOME/bin:$PATH

Save and exit, then run the following command to make the environment variables take effect:

[hadoop@hadoop-master hbase]$ source ~/.bashrc
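The same export lines can also be appended non-interactively, for example when setting up several nodes. A sketch that guards against appending duplicate entries (the paths match this install; `BASHRC` is parameterized here only so the pattern is easy to test):

```shell
# Append the HBase environment variables to ~/.bashrc only if not already present.
BASHRC="${BASHRC:-$HOME/.bashrc}"
if ! grep -q 'HBASE_HOME=/opt/module/hbase' "$BASHRC" 2>/dev/null; then
  cat >> "$BASHRC" <<'EOF'
#hbase
export HBASE_HOME=/opt/module/hbase
export PATH=$HBASE_HOME/bin:$PATH
EOF
fi
```

Re-running the snippet is safe: the grep guard keeps the block from being appended twice.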

3. Modify the HBase Configuration Files

First, switch to HBase's conf directory:

[hadoop@hadoop-master hbase]$ cd /opt/module/hbase/conf/
[hadoop@hadoop-master conf]$ ll
总用量 52
-rw-r--r-- 1 hadoop hadoop  1811 1月  22 2020 hadoop-metrics2-hbase.properties
-rw-r--r-- 1 hadoop hadoop  4773 1月  22 2020 hbase-env.cmd
-rw-r--r-- 1 hadoop hadoop 12588 1月  22 2020 hbase-env.sh
-rw-r--r-- 1 hadoop hadoop  2249 1月  22 2020 hbase-policy.xml
-rw-r--r-- 1 hadoop hadoop  2301 1月  22 2020 hbase-site.xml
-rw-r--r-- 1 hadoop hadoop  1245 1月  22 2020 log4j2-hbtop.properties
-rw-r--r-- 1 hadoop hadoop  5746 1月  22 2020 log4j2.properties
-rw-r--r-- 1 hadoop hadoop    10 1月  22 2020 regionservers

Modify the hbase-env.sh file:

[hadoop@hadoop-master conf]$ vim hbase-env.sh 
# The java implementation to use.  Java 1.8+ required.
export JAVA_HOME=/opt/module/jdk1.8
# The maximum amount of heap to use. Default is left to JVM default.
export HBASE_HEAPSIZE=800  # heap size (in MB) for the HBase Master/RegionServer; tune as needed
# Tell HBase whether it should manage it's own instance of ZooKeeper or not.
export HBASE_MANAGES_ZK=false   # use the existing ZooKeeper (recommended: share the one the Hadoop cluster uses)

Modify the hbase-site.xml file; the properties below go inside its <configuration> element:

[hadoop@hadoop-master conf]$ vim hbase-site.xml 
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://hadoop-master:9000/hbase</value>
    </property>

    <!-- use an external ZooKeeper -->
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>hadoop-master</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>

    <!-- run in distributed mode (works for a single-node pseudo-distributed setup too) -->
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>

    <!-- RegionServer RPC handler count; optional tuning -->
    <property>
        <name>hbase.regionserver.handler.count</name>
        <value>10</value>
    </property>
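For reference, the complete file must keep the `<configuration>` root element around the properties. A sketch that writes the whole file in one step (hostname and port taken from this cluster; `SITE` is parameterized here, while on the real cluster it would be /opt/module/hbase/conf/hbase-site.xml):

```shell
# Write a complete hbase-site.xml; all properties must live inside <configuration>.
SITE="${SITE:-$(mktemp)}"
cat > "$SITE" <<'EOF'
<?xml version="1.0"?>
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://hadoop-master:9000/hbase</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>hadoop-master</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.regionserver.handler.count</name>
        <value>10</value>
    </property>
</configuration>
EOF
echo "wrote $SITE"
```

A stray property outside `<configuration>` is silently ignored by HBase, so keeping the wrapper intact matters.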

4. Initialize the HDFS Directory

Run the following in the Linux terminal to create an hbase directory on HDFS:

[hadoop@hadoop-master conf]$ hdfs dfs -mkdir -p /hbase

Grant ownership of the hbase directory on HDFS to the hadoop user:

[hadoop@hadoop-master conf]$ hdfs dfs -chown -R hadoop:hadoop /hbase

5. Start HBase

If the environment variables were set correctly earlier, you can return to your home directory and start HBase directly with start-hbase.sh.

[hadoop@hadoop-master ~]$ echo $HBASE_HOME                         # check that the HBase environment variable is set
/opt/module/hbase
[hadoop@hadoop-master ~]$

Run the following command to start HBase:

[hadoop@hadoop-master ~]$ start-hbase.sh                 # start command
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/module/hbase/lib/client-facing-thirdparty/log4j-slf4j-impl-2.17.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/module/hadoop/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
running master, logging to /opt/module/hbase/logs/hbase-hadoop-master-hadoop-master.out
: running regionserver, logging to /opt/module/hbase/logs/hbase-hadoop-regionserver-hadoop-master.out

Use the jps command to check whether HBase is running; both the HMaster and HRegionServer processes must be present:

[hadoop@hadoop-master ~]$ jps
50016 ResourceManager
56704 QuorumPeerMain
50566 JobHistoryServer
49383 NameNode
49513 DataNode
65197 HMaster
49748 SecondaryNameNode
65749 Jps
50137 NodeManager
50618 Master
50718 Worker
50782 RunJar
65406 HRegionServer
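This daemon check can also be scripted, which is handy for monitoring. A sketch of the pattern: the hypothetical helper `check_hbase_procs` reads a jps-style listing on stdin, so it is fed a captured sample here and can run anywhere; on the cluster you would pipe `jps` itself into it.

```shell
# Check that both HBase daemons appear in a jps-style process listing.
# Reads the listing on stdin; fails (non-zero) if either daemon is missing.
check_hbase_procs() {
  local listing
  listing=$(cat)
  for proc in HMaster HRegionServer; do
    if ! printf '%s\n' "$listing" | grep -qw "$proc"; then
      echo "missing: $proc"
      return 1
    fi
  done
  echo "HBase daemons running"
}

# Sample listing taken from the session above.
# On the cluster: jps | check_hbase_procs
printf '65197 HMaster\n65406 HRegionServer\n49383 NameNode\n' | check_hbase_procs
```

The function only checks process names; it does not verify that the daemons are healthy, which the Web UI below is better suited for.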

To view the HBase Web UI, open <your-host-ip>:16010 (the HMaster's default info port) in a browser.

6. Enter the HBase Shell and Create Test Data

[hadoop@hadoop-master ~]$ hbase shell

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/module/hbase/lib/client-facing-thirdparty/log4j-slf4j-impl-2.17.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/module/hadoop/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
For Reference, please visit: http://hbase.apache.org/2.0/book.html#shell
Version 2.5.11, r505b485e462c9cbd318116155b3e37204469085a, Mon Feb 17 16:48:36 PST 2025
Took 0.0022 seconds                                                                                                      
hbase:001:0> 

Create a test namespace:

hbase:001:0> create_namespace 'test_ns'

Create a test table:

hbase:006:0> create 'test_ns:mytable', 'cf1'
Created table test_ns:mytable
Took 1.2281 seconds                                                                                                      
=> Hbase::Table - test_ns:mytable

Insert test data:

hbase:007:0> put 'test_ns:mytable', 'row1', 'cf1:col1', 'value1'
Took 0.1320 seconds                                                                                                      
hbase:008:0> put 'test_ns:mytable', 'row2', 'cf1:col1', 'value2'
Took 0.0623 seconds                                                                                                      
hbase:009:0> 

Query the data:

hbase:009:0> get 'test_ns:mytable', 'row1'
COLUMN                          CELL                                                                                     
 cf1:col1                       timestamp=2025-08-21T17:49:25.373, value=value1                                          
1 row(s)
Took 0.0838 seconds                                                                                                      
hbase:010:0> scan 'test_ns:mytable'
ROW                             COLUMN+CELL                                                                              
 row1                           column=cf1:col1, timestamp=2025-08-21T17:49:25.373, value=value1                         
 row2                           column=cf1:col1, timestamp=2025-08-21T17:49:32.932, value=value2                         
2 row(s)
Took 0.0464 seconds 

Delete the test table (a table must be disabled before it can be dropped):

hbase:011:0> disable 'test_ns:mytable'
Took 1.2147 seconds                                                                                                      
hbase:012:0> drop 'test_ns:mytable'
Took 0.4252 seconds 

Drop the namespace. A namespace must contain no tables before it can be dropped:

hbase:013:0> drop_namespace 'test_ns'
Took 0.1709 seconds   
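The interactive session above can also be run non-interactively: `hbase shell` accepts a script file of commands as an argument. A sketch that writes the same smoke-test sequence to a file (the run line is left as a comment because it requires the live cluster):

```shell
# Write the smoke-test sequence to a script file that `hbase shell` can execute.
cat > /tmp/hbase_smoke_test.hbase <<'EOF'
create_namespace 'test_ns'
create 'test_ns:mytable', 'cf1'
put 'test_ns:mytable', 'row1', 'cf1:col1', 'value1'
put 'test_ns:mytable', 'row2', 'cf1:col1', 'value2'
scan 'test_ns:mytable'
disable 'test_ns:mytable'
drop 'test_ns:mytable'
drop_namespace 'test_ns'
EOF
# On the cluster, run it with:
#   hbase shell /tmp/hbase_smoke_test.hbase
echo "wrote /tmp/hbase_smoke_test.hbase"
```

Because the script ends by dropping everything it created, it can be re-run after each configuration change as a quick end-to-end check.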
