Scripts to start and stop Hadoop + ZooKeeper + HBase

This post walks through starting an HBase cluster and doing some basic operations in the HBase shell, including listing and scanning a table. It also shows a pair of scripts that start and stop Hadoop, ZooKeeper, and HBase together in the right order.

[hadoop@hdp01 ~]$ ll
total 44
drwxr-xr-x. 2 hadoop hadoop 4096 11月 16 15:11 Desktop
drwxr-xr-x. 2 hadoop hadoop 4096 11月 16 15:11 Documents
drwxr-xr-x. 2 hadoop hadoop 4096 11月 16 15:11 Downloads
-rwxr-xr-x  1 hadoop hadoop  378 11月 18 12:45 McDbDownAll.sh
-rwxr-xr-x  1 hadoop hadoop  387 11月 18 12:45 McDbUpAll.sh
drwxr-xr-x. 2 hadoop hadoop 4096 11月 16 15:11 Music
drwxr-xr-x. 2 hadoop hadoop 4096 11月 16 15:11 Pictures
drwxr-xr-x. 2 hadoop hadoop 4096 11月 16 15:11 Public
drwxr-xr-x. 2 hadoop hadoop 4096 11月 16 15:11 Templates
drwxrwxr-x  2 hadoop hadoop 4096 11月 17 10:01 test
drwxr-xr-x. 2 hadoop hadoop 4096 11月 16 15:11 Videos

Adjust the paths in the scripts to match your actual installation directories:

[hadoop@hdp01 ~]$ cat McDbUpAll.sh
#!/bin/bash
echo 'start hadoop...'
/usr/local/bg/hadoop-2.7.1/sbin/start-all.sh
echo 'start zookeeper1...'
/opt/zookeeper-3.4.6/bin/zkServer.sh start
echo 'start zookeeper2...'
ssh hadoop@hdp02 "/opt/zookeeper-3.4.6/bin/zkServer.sh start"
echo 'start zookeeper3...'
ssh hadoop@hdp03 "/opt/zookeeper-3.4.6/bin/zkServer.sh start"
echo 'start hbase...'
/opt/hbase-1.2.4/bin/start-hbase.sh
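One weakness of McDbUpAll.sh is that it presses on even when an earlier step fails (e.g. HBase gets started on top of a Hadoop that never came up). A more defensive sketch uses a small helper that aborts the whole sequence on the first failure; the real cluster calls are shown commented out so the helper stands on its own, with the same (assumed) installation paths as above:

```shell
#!/bin/bash
# Sketch: abort the start sequence as soon as one component fails to start.
set -u

run_step() {              # run_step <label> <command...>
    echo "start $1..."
    local label="$1"; shift
    if ! "$@"; then
        echo "ERROR: $label failed to start, aborting." >&2
        exit 1
    fi
}

# On the real cluster (paths as in McDbUpAll.sh above):
# run_step hadoop     /usr/local/bg/hadoop-2.7.1/sbin/start-all.sh
# run_step zookeeper1 /opt/zookeeper-3.4.6/bin/zkServer.sh start
# run_step zookeeper2 ssh hadoop@hdp02 "/opt/zookeeper-3.4.6/bin/zkServer.sh start"
# run_step zookeeper3 ssh hadoop@hdp03 "/opt/zookeeper-3.4.6/bin/zkServer.sh start"
# run_step hbase      /opt/hbase-1.2.4/bin/start-hbase.sh
```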

[hadoop@hdp01 ~]$ cat McDbDownAll.sh
#!/bin/bash
echo 'stop hbase...'
/opt/hbase-1.2.4/bin/stop-hbase.sh
echo 'stop zookeeper3...'
ssh hadoop@hdp03 "/opt/zookeeper-3.4.6/bin/zkServer.sh stop"
echo 'stop zookeeper2...'
ssh hadoop@hdp02 "/opt/zookeeper-3.4.6/bin/zkServer.sh stop"
echo 'stop zookeeper1...'
/opt/zookeeper-3.4.6/bin/zkServer.sh stop
echo 'stop hadoop...'
/usr/local/bg/hadoop-2.7.1/sbin/stop-all.sh
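Note that McDbDownAll.sh stops components in exactly the reverse of the start order (HBase first, Hadoop last), so nothing is torn down while something that depends on it is still running. Driving both directions from a single node list keeps the two scripts from drifting apart; a sketch with the hostnames from this cluster, the actual `ssh` call left as a comment so the ordering logic stands on its own:

```shell
#!/bin/bash
# Sketch: one ZooKeeper node list drives both start and stop;
# stop walks the list backwards.
ZK_BIN=/opt/zookeeper-3.4.6/bin/zkServer.sh
ZK_NODES="hdp01 hdp02 hdp03"      # start order; stop reverses it

zk_all() {                        # zk_all start|stop
    action="$1"
    nodes="$ZK_NODES"
    if [ "$action" = stop ]; then
        rev=""
        for n in $nodes; do rev="$n $rev"; done
        nodes="$rev"
    fi
    for n in $nodes; do
        echo "$action zookeeper on $n..."
        # ssh "hadoop@$n" "$ZK_BIN $action"   # enable on the real cluster
    done
}
```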

Test run:

[hadoop@hdp01 ~]$ ./McDbUpAll.sh
start hadoop...
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hdp01]
hdp01: starting namenode, logging to /usr/local/bg/hadoop-2.7.1/logs/hadoop-hadoop-namenode-hdp01.out
hdp03: starting datanode, logging to /usr/local/bg/hadoop-2.7.1/logs/hadoop-hadoop-datanode-hdp03.out
hdp02: starting datanode, logging to /usr/local/bg/hadoop-2.7.1/logs/hadoop-hadoop-datanode-hdp02.out
hdp04: ssh: connect to host hdp04 port 22: No route to host
Starting secondary namenodes [hdp01]
hdp01: starting secondarynamenode, logging to /usr/local/bg/hadoop-2.7.1/logs/hadoop-hadoop-secondarynamenode-hdp01.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/bg/hadoop-2.7.1/logs/yarn-hadoop-resourcemanager-hdp01.out
hdp03: starting nodemanager, logging to /usr/local/bg/hadoop-2.7.1/logs/yarn-hadoop-nodemanager-hdp03.out
hdp02: starting nodemanager, logging to /usr/local/bg/hadoop-2.7.1/logs/yarn-hadoop-nodemanager-hdp02.out
hdp04: ssh: connect to host hdp04 port 22: No route to host
start zookeeper1...
JMX enabled by default
Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
start zookeeper2...
JMX enabled by default
Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
start zookeeper3...
JMX enabled by default
Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
start hbase...
starting master, logging to /opt/hbase-1.2.4/bin/../logs/hbase-hadoop-master-hdp01.out
hdp03: starting regionserver, logging to /opt/hbase-1.2.4/bin/../logs/hbase-hadoop-regionserver-hdp03.out
hdp02: starting regionserver, logging to /opt/hbase-1.2.4/bin/../logs/hbase-hadoop-regionserver-hdp02.out
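The two `hdp04: ssh: connect to host hdp04 port 22: No route to host` lines are not fatal: Hadoop's slaves file still lists hdp04, but that host is down (or was removed), so only its daemons get skipped. To catch this before a start rather than in the middle of one, reachability of every listed slave can be probed up front; a sketch, assuming the standard slaves-file location for this Hadoop 2.7.1 install:

```shell
#!/bin/bash
# Sketch: warn about unreachable hosts in the Hadoop slaves file
# before running start-all.sh. Path is an assumption for this install.
SLAVES_FILE=/usr/local/bg/hadoop-2.7.1/etc/hadoop/slaves

check_slaves() {    # check_slaves <file>; returns 1 if any host is down
    rc=0
    while IFS= read -r host; do
        [ -z "$host" ] && continue
        if ssh -o BatchMode=yes -o ConnectTimeout=3 \
               "hadoop@$host" true 2>/dev/null; then
            echo "$host: reachable"
        else
            echo "$host: UNREACHABLE"
            rc=1
        fi
    done < "$1"
    return $rc
}

# check_slaves "$SLAVES_FILE" || echo "fix the slaves file before starting"
```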


[hadoop@hdp01 ~]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hbase-1.2.4/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/bg/hadoop-2.7.1/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2016-11-22 14:03:00,125 WARN  [main] conf.Configuration: hbase-site.xml:an attempt to override final parameter: dfs.replication;  Ignoring.
2016-11-22 14:03:01,100 WARN  [main] conf.Configuration: hbase-site.xml:an attempt to override final parameter: dfs.replication;  Ignoring.
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.4, r67592f3d062743907f8c5ae00dbbe1ae4f69e5af, Tue Oct 25 18:10:20 CDT 2016

hbase(main):001:0> list
TABLE                                                                                                                                                 

ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2293)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getTableNames(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55650)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2180)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    at java.lang.Thread.run(Thread.java:745)

Here is some help for this command:
List all tables in hbase. Optional regular expression parameter could
be used to filter the output. Examples:

  hbase> list
  hbase> list 'abc.*'
  hbase> list 'ns:abc.*'
  hbase> list 'ns:.*'

hbase(main):006:0> list
TABLE                                                                                                                                                 
t1                                                                                                                                                    
1 row(s) in 0.0760 seconds

=> ["t1"]
hbase(main):007:0> scan 't1'
ROW                                    COLUMN+CELL                                                                                                    
 k1                                    column=c1:a, timestamp=1479371230254, value=value1                                                             
 k2                                    column=c1:b, timestamp=1479371248124, value=value2                                                             
2 row(s) in 0.4220 seconds
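The `PleaseHoldException: Master is initializing` from the first `list` is transient: immediately after start-hbase.sh the master is still initializing (replaying logs, assigning regions), and the same command succeeds a few prompts later. Instead of retyping, a start script can poll until the master answers. A sketch of such a retry helper follows; the `hbase shell -n` probe in the comment is an assumption, so check your version's flags:

```shell
#!/bin/bash
# Sketch: retry a probe command until it succeeds or a deadline passes.
wait_until() {      # wait_until <timeout_s> <interval_s> <command...>
    deadline=$(( $(date +%s) + $1 ))
    interval="$2"
    shift 2
    until "$@"; do
        [ "$(date +%s)" -ge "$deadline" ] && return 1
        sleep "$interval"
    done
}

# After start-hbase.sh, something like:
# wait_until 120 5 sh -c 'echo list | hbase shell -n >/dev/null 2>&1' \
#     && echo "HBase master is up"
```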

[hadoop@hdp01 ~]$ ./McDbDownAll.sh
stop hbase...
stopping hbase........................
stop zookeeper3...
JMX enabled by default
Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
stop zookeeper2...
JMX enabled by default
Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
stop zookeeper1...
JMX enabled by default
Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
stop hadoop...
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [hdp01]
hdp01: stopping namenode
hdp02: stopping datanode
hdp03: stopping datanode
hdp04: ssh: connect to host hdp04 port 22: No route to host
Stopping secondary namenodes [hdp01]
hdp01: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
hdp03: stopping nodemanager
hdp02: stopping nodemanager
hdp04: ssh: connect to host hdp04 port 22: No route to host
no proxyserver to stop
[hadoop@hdp01 ~]$


