07: Common Hadoop Components and Kafka Cluster

Case 1: Install Zookeeper
Case 2: Kafka cluster experiment
Case 3: Hadoop high availability
Case 4: Verify high availability

1 Case 1: Install Zookeeper

1.1 Problem

This case requires:
Build a Zookeeper cluster and check the role of each server
Stop the Leader and check the server roles again

1.2 Steps

Follow the steps below to implement this case.

Step 1: Install Zookeeper

1) Edit /etc/hosts so that all cluster hosts can ping each other (configure on nn01, then sync to node1, node2 and node3)

[root@nn01 hadoop]# vim /etc/hosts
192.168.1.60 nn01
192.168.1.61 node1
192.168.1.62 node2
192.168.1.63 node3
192.168.1.66 node4
[root@nn01 hadoop]# for i in {61..63}
do
scp /etc/hosts 192.168.1.$i:/etc/
done //sync the configuration to node1, node2 and node3
hosts 100% 253 639.2KB/s 00:00
hosts 100% 253 497.7KB/s 00:00
hosts 100% 253 662.2KB/s 00:00

2) Install java-1.8.0-openjdk-devel. It was already installed during the earlier Hadoop setup, so it is skipped here; on a fresh machine it must be installed.
3) Unpack Zookeeper and move it to /usr/local/zookeeper

[root@nn01 ~]# tar -xf zookeeper-3.4.13.tar.gz
[root@nn01 ~]# mv zookeeper-3.4.13 /usr/local/zookeeper

4) Rename the sample configuration file and append the server entries at the end

[root@nn01 ~]# cd /usr/local/zookeeper/conf/
[root@nn01 conf]# ls
configuration.xsl log4j.properties zoo_sample.cfg
[root@nn01 conf]# mv zoo_sample.cfg zoo.cfg
[root@nn01 conf]# chown root.root zoo.cfg
[root@nn01 conf]# vim zoo.cfg
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
server.4=nn01:2888:3888:observer
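
For reference, the effective zoo.cfg after the rename looks roughly like the sketch below; the first five keys are simply the zoo_sample.cfg defaults (assuming the sample was not otherwise changed), and only the four server.* lines are added by hand. Note that dataDir defaults to /tmp/zookeeper, which is why that directory is created in step 6:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
server.4=nn01:2888:3888:observer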

5) Copy /usr/local/zookeeper to the other cluster hosts

[root@nn01 conf]# for i in {61..63}; do rsync -aSH --delete /usr/local/zookeeper/ 192.168.1.$i:/usr/local/zookeeper -e 'ssh' & done
[4] 4956
[5] 4957
[6] 4958

6) Create /tmp/zookeeper on every host

[root@nn01 conf]# mkdir /tmp/zookeeper
[root@nn01 conf]# ssh node1 mkdir /tmp/zookeeper
[root@nn01 conf]# ssh node2 mkdir /tmp/zookeeper
[root@nn01 conf]# ssh node3 mkdir /tmp/zookeeper

7) Create the myid file; the id must match the server.(id) entry for that host in the configuration file

[root@nn01 conf]# echo 4 >/tmp/zookeeper/myid
[root@nn01 conf]# ssh node1 'echo 1 >/tmp/zookeeper/myid'
[root@nn01 conf]# ssh node2 'echo 2 >/tmp/zookeeper/myid'
[root@nn01 conf]# ssh node3 'echo 3 >/tmp/zookeeper/myid'
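
A quick optional check (a sketch, not part of the original run) that every host carries the expected id:

[root@nn01 conf]# for h in nn01 node1 node2 node3; do echo -ne "$h\t"; ssh $h cat /tmp/zookeeper/myid; done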

8) Start the service. The status cannot be queried with only one node running; it only becomes available once enough of the cluster is up. Start the service manually on every host (nn01 shown as an example)

[root@nn01 conf]# /usr/local/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
Note: querying the status right after starting the first node reports an error; more than half of the servers must be running, after which the status query succeeds.
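
To save typing, the remaining three nodes can also be started from nn01 over ssh (a convenience sketch; the effect is identical to logging in to each host and running the start command):

[root@nn01 conf]# for h in node1 node2 node3; do ssh $h /usr/local/zookeeper/bin/zkServer.sh start; done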

9) Check the status

[root@nn01 conf]# /usr/local/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: observer
[root@nn01 conf]# /usr/local/zookeeper/bin/zkServer.sh stop
//after stopping, check the roles of the other servers
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
[root@nn01 conf]# yum -y install telnet
[root@nn01 conf]# telnet node3 2181
Trying 192.168.1.63...
Connected to node3.
Escape character is '^]'.
ruok //send the probe
imokConnection closed by foreign host. //"imok" is the reply, printed just before the connection closes

10) Check the status with the api.sh script (run on nn01)

[root@nn01 conf]# /usr/local/zookeeper/bin/zkServer.sh start
[root@nn01 conf]# vim api.sh
#!/bin/bash
function getstatus(){
    exec 9<>/dev/tcp/$1/2181 2>/dev/null
    echo stat >&9
    MODE=$(cat <&9 |grep -Po "(?<=Mode:).*")
    exec 9<&-
    echo ${MODE:-NULL}
}
for i in node{1..3} nn01; do
    echo -ne "${i}\t"
    getstatus ${i}
done
[root@nn01 conf]# chmod 755 api.sh
[root@nn01 conf]# ./api.sh
node1 follower
node2 leader
node3 follower
nn01 observer

2 Case 2: Kafka Cluster Experiment

2.1 Problem

This case requires:
Build a Kafka cluster on top of Zookeeper
Create a topic
Simulate a producer publishing messages
Simulate a consumer receiving messages

2.2 Steps

Follow the steps below to implement this case.

Step 1: Build the Kafka cluster

1) Unpack the Kafka tarball

Kafka only needs to be set up on node1, node2 and node3
[root@node1 hadoop]# tar -xf kafka_2.12-2.1.0.tgz

2) Move Kafka to /usr/local/kafka

[root@node1 ~]# mv kafka_2.12-2.1.0 /usr/local/kafka

3) Edit the configuration file /usr/local/kafka/config/server.properties

[root@node1 ~]# cd /usr/local/kafka/config
[root@node1 config]# vim server.properties
broker.id=22
zookeeper.connect=node1:2181,node2:2181,node3:2181
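
Only broker.id and zookeeper.connect actually need editing; for context, the relevant part of the stock server.properties roughly looks like the sketch below (the listeners and log.dirs lines are the shipped defaults and may differ in your copy):

broker.id=22 //unique per broker: 22, 23 and 24 in this cluster
#listeners=PLAINTEXT://:9092 //default listener, port 9092
log.dirs=/tmp/kafka-logs //where the message log segments are stored
zookeeper.connect=node1:2181,node2:2181,node3:2181 //the Zookeeper ensemble built in case 1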

4) Copy Kafka to the other hosts and change broker.id; the ids must not repeat

[root@node1 config]# for i in 62 63; do rsync -aSH --delete /usr/local/kafka 192.168.1.$i:/usr/local/ & done
[1] 27072
[2] 27073
[root@node2 ~]# vim /usr/local/kafka/config/server.properties
//edit on node2
broker.id=23
[root@node3 ~]# vim /usr/local/kafka/config/server.properties
//edit on node3
broker.id=24

5) Start the Kafka cluster (start on node1, node2 and node3)

[root@node1 local]# /usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
[root@node1 local]# jps //Kafka should appear in the list
26483 DataNode
27859 Jps
27833 Kafka
26895 QuorumPeerMain

6) Verify the configuration by creating a topic

[root@node1 local]# /usr/local/kafka/bin/kafka-topics.sh --create --partitions 1 --replication-factor 1 --zookeeper node3:2181 --topic aa
Created topic "aa".
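
Two related commands (a sketch, not part of the original run) that help confirm the topic really exists and where its partition lives:

[root@node1 local]# /usr/local/kafka/bin/kafka-topics.sh --list --zookeeper node3:2181 //should print aa
[root@node1 local]# /usr/local/kafka/bin/kafka-topics.sh --describe --zookeeper node3:2181 --topic aa //shows leader, replicas and ISR for the single partition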

7) Simulate a producer and publish messages

[root@node2 ~]# /usr/local/kafka/bin/kafka-console-producer.sh \
--broker-list node2:9092 --topic aa //type some data
ccc
ddd

8) Simulate a consumer and receive the messages

[root@node3 ~]# /usr/local/kafka/bin/kafka-console-consumer.sh \
--bootstrap-server node1:9092 --topic aa //the messages are delivered here immediately
ccc
ddd
Note: Kafka is fairly memory-hungry; once this experiment is done it can be stopped.
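
Two follow-up commands worth knowing (a sketch, not part of the original run): by default the console consumer only shows messages sent after it starts, and --from-beginning replays the whole topic; kafka-server-stop.sh shuts the broker down when the experiment is over.

[root@node3 ~]# /usr/local/kafka/bin/kafka-console-consumer.sh \
--bootstrap-server node1:9092 --topic aa --from-beginning //replay ccc and ddd even if the consumer starts late
[root@node1 ~]# /usr/local/kafka/bin/kafka-server-stop.sh //run on node1, node2 and node3 to free the memory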

3 Case 3: Hadoop High Availability

3.1 Problem

This case requires:
Configure high availability for Hadoop
Modify the configuration files

3.2 Plan

Configure Hadoop high availability to eliminate the NameNode single point of failure. Use the Hadoop cluster built earlier and add one new host, nn02, with IP 192.168.1.66; the detailed requirements are shown in Figure 1:
Figure 1 (cluster topology diagram; image not included)

3.3 Steps

Follow the steps below to implement this case.

Step 1: Hadoop high availability

1) Stop all services (Kafka was already stopped after its experiment, so that is not repeated here)

[root@nn01 ~]# cd /usr/local/hadoop/
[root@nn01 hadoop]# ./sbin/stop-all.sh //stop all services

2) Start Zookeeper (it must be started host by host); nn01 is shown as an example

[root@nn01 hadoop]# /usr/local/zookeeper/bin/zkServer.sh start
[root@nn01 hadoop]# sh /usr/local/zookeeper/conf/api.sh //check with the script written earlier
node1 follower
node2 leader
node3 follower
nn01 observer

3) Add a new machine, nn02; the node4 prepared earlier can be reused as nn02

[root@node4 ~]# echo nn02 > /etc/hostname
[root@node4 ~]# hostname nn02

4) Edit /etc/hosts

[root@nn01 hadoop]# vim /etc/hosts
192.168.1.60 nn01
192.168.1.66 nn02
192.168.1.61 node1
192.168.1.62 node2
192.168.1.63 node3

5) Sync to nn02, node1, node2 and node3

[root@nn01 hadoop]# for i in {61..63} 66; do rsync -aSH --delete /etc/hosts 192.168.1.$i:/etc/hosts -e 'ssh' & done
[1] 14355
[2] 14356
[3] 14357
[4] 14358

6) Configure the SSH trust relationship (nn02 simply reuses nn01's key pair)

Note: nn01 and nn02 must reach each other over ssh without a password, and nn02 must likewise reach itself and node1, node2, node3 without a password
[root@nn02 ~]# vim /etc/ssh/ssh_config
Host *
GSSAPIAuthentication yes
StrictHostKeyChecking no
[root@nn01 hadoop]# cd /root/.ssh/
[root@nn01 .ssh]# scp id_rsa id_rsa.pub nn02:/root/.ssh/
or: rsync -aXSH /root/.ssh nn02:/root/
//copy nn01's public and private keys to nn02

7) On every host, delete /var/hadoop/* (reboot the 5 VMs to free memory)

on nn01, nn02, node1, node2 and node3
[root@nn01 .ssh]# rm -rf /var/hadoop/*

Modify the configuration files

vim /usr/local/hadoop/etc/hadoop/hadoop-env.sh
JAVA_HOME="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-2.b14.el7.x86_64/jre" //Java installation path
HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop" //path to the hadoop configuration files
vim /usr/local/hadoop/etc/hadoop/slaves //names of all DataNode hosts in the cluster (node1 through node3)
node1
node2
node3

8) Configure core-site

[root@nn01 .ssh]# vim /usr/local/hadoop/etc/hadoop/core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name> //the default filesystem
<value>hdfs://nsdcluster</value> //replaces the earlier hdfs://nn01:9000 setting
//nsdcluster is an arbitrarily chosen nameservice (group) name; the cluster is accessed through this name
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/var/hadoop</value>
</property>
<property>
<name>ha.zookeeper.quorum</name> //Zookeeper is needed to keep cluster transactions consistent, so a Zookeeper ensemble is required; its addresses are given below
<value>node1:2181,node2:2181,node3:2181</value> //list all three Zookeeper servers, not just one, so any single one is allowed to fail
</property>
<property>
<name>hadoop.proxyuser.nfs.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.nfs.hosts</name>
<value>*</value>
</property>
</configuration>
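
A quick way (an optional sketch) to confirm the file parses and the new default filesystem took effect is hdfs getconf, which only reads the local configuration:

[root@nn01 .ssh]# /usr/local/hadoop/bin/hdfs getconf -confKey fs.defaultFS
hdfs://nsdcluster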

9) Configure hdfs-site

[root@nn01 ~]# vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.nameservices</name> //nameservice (group) name; usually defined here first and then referenced in core-site. It replaces the earlier single-host namenode and secondarynamenode settings, which are no longer needed
<value>nsdcluster</value> //must match the name used in core-site
</property>
<property>
<name>dfs.ha.namenodes.nsdcluster</name>
//the group defines two roles; the names nn1 and nn2 are fixed, built-in identifiers, so nsdcluster contains nn1 and nn2
<value>nn1,nn2</value> //role names, not hostnames: nn1 stands for nn01, nn2 for nn02
</property>
<property>
<name>dfs.namenode.rpc-address.nsdcluster.nn1</name>
//nn1 and nn2 communicate over the rpc port
//declares 8020 as nn1's rpc port, i.e. nn01's rpc port, used for the heartbeat connection
<value>nn01:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.nsdcluster.nn2</name>
//declares 8020 as nn2's rpc port, i.e. nn02's rpc port
<value>nn02:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.nsdcluster.nn1</name> //group.role, defines namenode 1. Every datanode must know the addresses of both namenodes: each datanode opens an rpc (8020) connection to each of them and reports the latest block locations to both
//nn01's http port
<value>nn01:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.nsdcluster.nn2</name> //group.role, defines namenode 2
//nn02's http port
<value>nn02:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name> //defines which machines run the journalnode; it is installed on all three nodes and is therefore highly available in itself
//path under which the namenode metadata is stored in the journalnodes
<value>qjournal://node1:8485;node2:8485;node3:8485/nsdcluster</value>
//the fsedits change log is stored on node1, node2 and node3, but no directory is given here;
//the directory is set by dfs.journalnode.edits.dir below
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
//directory where the journalnode stores its change-log (fsedits) files
<value>/var/hadoop/journal</value>
</property>
//the fsimage and fsedits settings are now complete; next comes the zookeeper failover component, which is really just a Java class
<property>
<name>dfs.client.failover.proxy.provider.nsdcluster</name>
//the Java class HDFS clients use to connect to the active namenode
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value> //the HA class, taken from the manual
</property>
//when a failure occurs the active node hands over to the standby; the standby machine is managed over ssh
<property>
<name>dfs.ha.fencing.methods</name> //fencing method is ssh, i.e. the way the standby machine is managed
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name> //location of the ssh private key
<value>/root/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name> //enable automatic failover so no manual intervention is needed
<value>true</value>
</property>
</configuration>
Configure mapred-site (the file is /usr/local/hadoop/etc/hadoop/mapred-site.xml): the MapReduce framework manager is yarn (a standalone installation uses local, a cluster uses yarn)
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value> //this is the only setting needed here
</property>
</configuration>

10) Configure yarn-site

[root@nn01 ~]# vim /usr/local/hadoop/etc/hadoop/yarn-site.xml

<configuration>
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.resourcemanager.ha.enabled</name> //enable the HA configuration
    <value>true</value>
</property>
<property>
    <name>yarn.resourcemanager.ha.rm-ids</name> //define two roles, rm1 and rm2, standing for nn01 and nn02
    <value>rm1,rm2</value>
</property>
<property>
    <name>yarn.resourcemanager.recovery.enabled</name> //rm = resourcemanager; enable automatic recovery and switchover
    <value>true</value>
</property>
<property>
    <name>yarn.resourcemanager.store.class</name> //the state-store class (a Java class) used by the rm
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<property>
    <name>yarn.resourcemanager.zk-address</name>
    //Zookeeper server addresses; rm high availability also relies on Zookeeper
    <value>node1:2181,node2:2181,node3:2181</value>
</property>
<property>
    <name>yarn.resourcemanager.cluster-id</name> //the rm HA cluster is named yarn-ha
    <value>yarn-ha</value>
</property>
<property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    //there are two rm hosts; cluster-id.role1 maps each role to its host
    <value>nn01</value>
</property>
<property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    //cluster-id.role2
    <value>nn02</value>
</property>
</configuration>
Save and exit. At this point all six Hadoop configuration files are complete.
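
Before syncing, an optional sanity check (a sketch; xmllint comes from the libxml2 package and may need installing) that the four XML files are still well-formed:

[root@nn01 ~]# for f in core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do xmllint --noout /usr/local/hadoop/etc/hadoop/$f && echo $f OK; done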

11) Sync to nn02, node1, node2 and node3

[root@nn01 ~]# for i in {61..63} 66; do rsync -aSH --delete /usr/local/hadoop/ 192.168.1.$i:/usr/local/hadoop -e 'ssh' & done
[1] 25411
[2] 25412
[3] 25413
[4] 25414

12) Delete /usr/local/hadoop/logs on all machines to make troubleshooting easier

[root@nn01 ~]# for i in {60..63} 66; do ssh 192.168.1.$i rm -rf /usr/local/hadoop/logs ; done

13) Sync the configuration

[root@nn01 ~]# for i in {61..63} 66; do rsync -aSH --delete /usr/local/hadoop 192.168.1.$i:/usr/local/hadoop -e 'ssh' & done
[1] 28235
[2] 28236
[3] 28237
[4] 28238

4 Case 4: Verify High Availability

4.1 Problem

This case requires:
Initialize the cluster
Verify the cluster

4.2 Steps

Follow the steps below to implement this case.

Step 1: Verify Hadoop high availability

1) Initialize the ZK cluster

[root@nn01 ~]# /usr/local/hadoop/bin/hdfs zkfc -formatZK

18/09/11 15:43:35 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/nsdcluster in ZK //the word Successfully indicates success

2) Start the journalnode service on node1, node2 and node3 (node1 shown as an example)

[root@node1 ~]# /usr/local/hadoop/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /usr/local/hadoop/logs/hadoop-root-journalnode-node1.out
[root@node1 ~]# jps
29262 JournalNode
26895 QuorumPeerMain
29311 Jps

3) Format the namenode; the journalnodes on node1, node2 and node3 must be started before formatting is possible

[root@nn01 ~]# /usr/local/hadoop/bin/hdfs namenode -format
//the word Successfully indicates success
[root@nn01 hadoop]# ls /var/hadoop/
dfs

4) Sync the data to nn02's local /var/hadoop/dfs

[root@nn02 ~]# cd /var/hadoop/
[root@nn02 hadoop]# ls
[root@nn02 hadoop]# rsync -aSH nn01:/var/hadoop/ /var/hadoop/
[root@nn02 hadoop]# ls
dfs

5) Initialize the JNS (journalnodes)

[root@nn01 hadoop]# /usr/local/hadoop/bin/hdfs namenode -initializeSharedEdits
Re-format filesystem in QJM to [192.168.1.61:8485, 192.168.1.62:8485, 192.168.1.63:8485] ? (Y or N) y
19/10/21 17:06:10 INFO namenode.FileJournalManager: Recovering unfinalized segments in /var/hadoop/dfs/name/current
19/10/21 17:06:10 INFO client.QuorumJournalManager: Starting recovery process for unclosed journal segments...
18/09/11 16:26:15 INFO client.QuorumJournalManager: Successfully started new epoch 1 //Successfully here means the node was brought up successfully

6) Stop the journalnode service (node1, node2, node3)

[root@node1 hadoop]# /usr/local/hadoop/sbin/hadoop-daemon.sh stop journalnode
stopping journalnode
[root@node1 hadoop]# jps
29346 Jps
26895 QuorumPeerMain

Step 2: Start the cluster

1) On nn01

[root@nn01 hadoop]# /usr/local/hadoop/sbin/start-all.sh //start the whole cluster
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [nn01 nn02]
nn01: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-nn01.out
nn02: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-nn02.out
node2: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-node2.out
node3: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-node3.out
node1: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-node1.out
Starting journal nodes [node1 node2 node3]
node1: starting journalnode, logging to /usr/local/hadoop/logs/hadoop-root-journalnode-node1.out
node3: starting journalnode, logging to /usr/local/hadoop/logs/hadoop-root-journalnode-node3.out
node2: starting journalnode, logging to /usr/local/hadoop/logs/hadoop-root-journalnode-node2.out
Starting ZK Failover Controllers on NN hosts [nn01 nn02]
nn01: starting zkfc, logging to /usr/local/hadoop/logs/hadoop-root-zkfc-nn01.out
nn02: starting zkfc, logging to /usr/local/hadoop/logs/hadoop-root-zkfc-nn02.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-root-resourcemanager-nn01.out
node2: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-node2.out
node1: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-node1.out
node3: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-node3.out

2) On nn02

[root@nn02 hadoop]# /usr/local/hadoop/sbin/yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-root-resourcemanager-nn02.out

3) Check the cluster state

[root@nn01 hadoop]# /usr/local/hadoop/bin/hdfs haadmin -getServiceState nn1
active
[root@nn01 hadoop]# /usr/local/hadoop/bin/hdfs haadmin -getServiceState nn2
standby
[root@nn01 hadoop]# /usr/local/hadoop/bin/yarn rmadmin -getServiceState rm1
active
[root@nn01 hadoop]# /usr/local/hadoop/bin/yarn rmadmin -getServiceState rm2
standby
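
The same four checks can be wrapped in small loops (a convenience sketch) so they are easy to repeat during the failover test later in this case:

[root@nn01 hadoop]# for i in nn1 nn2; do echo -ne "$i\t"; /usr/local/hadoop/bin/hdfs haadmin -getServiceState $i; done
[root@nn01 hadoop]# for i in rm1 rm2; do echo -ne "$i\t"; /usr/local/hadoop/bin/yarn rmadmin -getServiceState $i; done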

4) Check that the nodes have joined

[root@nn01 hadoop]# /usr/local/hadoop/bin/hdfs dfsadmin -report

Live datanodes (3): //there should be three nodes

[root@nn01 hadoop]# /usr/local/hadoop/bin/yarn node -list
Total Nodes:3
Node-Id Node-State Node-Http-Address Number-of-Running-Containers
node2:43307 RUNNING node2:8042 0
node1:34606 RUNNING node1:8042 0
node3:36749 RUNNING node3:8042 0

Step 3: Access the cluster

1) List and create

[root@nn01 hadoop]# /usr/local/hadoop/bin/hadoop fs -ls /
[root@nn01 hadoop]# /usr/local/hadoop/bin/hadoop fs -mkdir /aa //create /aa
[root@nn01 hadoop]# /usr/local/hadoop/bin/hadoop fs -ls / //list again
Found 1 items
drwxr-xr-x - root supergroup 0 2018-09-11 16:54 /aa
[root@nn01 hadoop]# /usr/local/hadoop/bin/hadoop fs -put *.txt /aa
[root@nn01 hadoop]# /usr/local/hadoop/bin/hadoop fs -ls hdfs://nsdcluster/aa
//the listing can also be done this way
Found 3 items
-rw-r--r-- 2 root supergroup 86424 2018-09-11 17:00 hdfs://nsdcluster/aa/LICENSE.txt
-rw-r--r-- 2 root supergroup 14978 2018-09-11 17:00 hdfs://nsdcluster/aa/NOTICE.txt
-rw-r--r-- 2 root supergroup 1366 2018-09-11 17:00 hdfs://nsdcluster/aa/README.txt

2) Verify high availability by stopping the active namenode

[root@nn01 hadoop]# /usr/local/hadoop/bin/hdfs haadmin -getServiceState nn1
active
[root@nn01 hadoop]# /usr/local/hadoop/sbin/hadoop-daemon.sh stop namenode
stopping namenode
[root@nn01 hadoop]# /usr/local/hadoop/bin/hdfs haadmin -getServiceState nn1
//querying nn1 again now reports an error
[root@nn01 hadoop]# /usr/local/hadoop/bin/hdfs haadmin -getServiceState nn2
//nn02 has switched from the earlier standby to active
active
[root@nn01 hadoop]# /usr/local/hadoop/bin/yarn rmadmin -getServiceState rm1
active
[root@nn01 hadoop]# /usr/local/hadoop/sbin/yarn-daemon.sh stop resourcemanager
//stop the resourcemanager
[root@nn01 hadoop]# /usr/local/hadoop/bin/yarn rmadmin -getServiceState rm2
active

3) Recover the nodes

[root@nn01 hadoop]# /usr/local/hadoop/sbin/hadoop-daemon.sh start namenode
//start the namenode
[root@nn01 hadoop]# /usr/local/hadoop/sbin/yarn-daemon.sh start resourcemanager
//start the resourcemanager
[root@nn01 hadoop]# /usr/local/hadoop/bin/hdfs haadmin -getServiceState nn1
//check
[root@nn01 hadoop]# /usr/local/hadoop/bin/yarn rmadmin -getServiceState rm1
//check
