Building on the Hadoop cluster set up in the previous post, this post sets up a ZooKeeper cluster.
[Hadoop cluster setup](https://editor.youkuaiyun.com/md/?articleId=106998862)
Configure zoo.cfg
Switch to the /opt/zookeeper/conf directory.
vi zoo.cfg (if the file name does not match, rename it first)
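A fresh ZooKeeper distribution ships only a zoo_sample.cfg template, so zoo.cfg may not exist yet; a minimal way to create it, assuming ZooKeeper was unpacked to /opt/zookeeper as in this guide:
cd /opt/zookeeper/conf
cp zoo_sample.cfg zoo.cfg   # keep the sample as a backup and edit the copy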
# The number of milliseconds of each tick
tickTime=2000
maxClientCnxns=0 // add this line (0 = no limit on connections from a single client)
# The number of ticks that the initial
# synchronization phase can take
initLimit=50 // change this value
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/hadoop/zookeeper // change the data/snapshot directory
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888 // add one server.N line per node in the cluster, with the matching hostname (2888 is the peer-communication port, 3888 the leader-election port)
After the configuration is done, copy the file to the corresponding directory on every other VM in the cluster, overwriting its existing zoo.cfg:
scp zoo.cfg root@192.168.109.111:/opt/zookeeper/conf
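The IP above is specific to the original environment. If the hostnames used in the server.N entries resolve (for example via /etc/hosts), the same copy can be done by hostname, one command per remaining node; a sketch assuming the file was edited on hadoop01:
scp /opt/zookeeper/conf/zoo.cfg root@hadoop02:/opt/zookeeper/conf/
scp /opt/zookeeper/conf/zoo.cfg root@hadoop03:/opt/zookeeper/conf/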
On each VM in the cluster, create a zookeeper folder under /opt/hadoop (it must match the dataDir set in zoo.cfg):
mkdir /opt/hadoop/zookeeper
In the newly created zookeeper folder, add a myid file; its content is the num of the server.num entry in zoo.cfg that corresponds to that host: vi myid
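For example, with the server.1/2/3 entries above, each node gets a single number in its myid; run the matching line on the corresponding host:
echo 1 > /opt/hadoop/zookeeper/myid   # on hadoop01 (matches server.1)
echo 2 > /opt/hadoop/zookeeper/myid   # on hadoop02 (matches server.2)
echo 3 > /opt/hadoop/zookeeper/myid   # on hadoop03 (matches server.3)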
Start ZooKeeper on each node in the cluster.
Switch to the /opt/zookeeper/bin/ directory.
./zkServer.sh start (starts ZooKeeper)
Run jps to check whether ZooKeeper started successfully.
Every node should show a QuorumPeerMain process.
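Besides jps, zkServer.sh status reports each node's role once a quorum has formed; roughly what to expect (exact wording varies by ZooKeeper version):
./zkServer.sh status
# Using config: /opt/zookeeper/bin/../conf/zoo.cfg
# Mode: follower    (exactly one node in the cluster reports Mode: leader)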