ClickHouse Cluster Setup
Cluster topology diagram
Host environment:
IP address       ZooKeeper cluster   ClickHouse cluster
192.168.3.59     zookeeper-1         clickhouse-1
192.168.3.151    zookeeper-2         clickhouse-2
192.168.3.242    zookeeper-3         clickhouse-3
192.168.3.228    -                   clickhouse-4
1. Building the ZooKeeper cluster
Environment
Deployment method: docker-compose
Hosts (3):
192.168.3.59
192.168.3.151
192.168.3.242
Image used: wurstmeister/zookeeper
Directory structure
zookeeper-01/
├── docker-compose.yml
└── zoo
    ├── conf
    │   ├── configuration.xsl
    │   ├── log4j.properties
    │   └── zoo.cfg
    └── data
        └── myid
# After one node is configured, copy the files to the other hosts and change the hostname and container_name in docker-compose.yml, plus the value in myid.
Configuration changes:
docker-compose.yml
version: '2.3'   # healthcheck/start_period require compose file format 2.3 or later
services:
  zookeeper:
    image: wurstmeister/zookeeper:latest
    hostname: zookeeper-01
    container_name: zookeeper-01
    restart: always
    network_mode: host
    ports:
      - "2182:2182"
      - "2888:2888"
      - "3888:3888"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./zoo/conf:/opt/zookeeper-3.4.13/conf
      - ./zoo/data/myid:/opt/zookeeper-3.4.13/data/myid
    healthcheck:
      test: ["CMD-SHELL", "netstat -tnlp|grep :2182 || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 15s
./zoo/conf/zoo.cfg
# Add the following settings
globalOutstandingLimit=200
minSessionTimeout=16000
maxSessionTimeout=30000
server.1=192.168.3.59:2888:3888
server.2=192.168.3.151:2888:3888
server.3=192.168.3.242:2888:3888
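For reference, a complete zoo.cfg could look like the sketch below. Only the lines added above come from this guide; the remaining values are the usual ZooKeeper 3.4 defaults, the dataDir matches the volume mount in docker-compose.yml, and the client port is assumed to have been changed to 2182 so that it matches the healthcheck and the ClickHouse configuration later on.
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper-3.4.13/data
clientPort=2182                      # assumed; the stock default is 2181
globalOutstandingLimit=200
minSessionTimeout=16000
maxSessionTimeout=30000
server.1=192.168.3.59:2888:3888
server.2=192.168.3.151:2888:3888
server.3=192.168.3.242:2888:3888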
./zoo/data/myid
1
Configuration on the other hosts:
# After one node is configured, copy the files to the other hosts and change the hostname and container_name in docker-compose.yml, plus the value in myid.
# The myid value follows the x in the corresponding server.x=... line.
# For example, on host 192.168.3.242 the myid value is 3.
server.1=192.168.3.59:2888:3888 >> myid >> 1
server.2=192.168.3.151:2888:3888 >> myid >> 2
server.3=192.168.3.242:2888:3888 >> myid >> 3
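For example, the myid file can be written directly on each host:
# on 192.168.3.59
echo 1 > ./zoo/data/myid
# on 192.168.3.151
echo 2 > ./zoo/data/myid
# on 192.168.3.242
echo 3 > ./zoo/data/myid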
Start:
docker-compose up -d
Check
[root@Node1 zookeeper-01]# docker exec -it zookeeper-01 /bin/bash
root@zookeeper-01:/opt/zookeeper-3.4.13# bash bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg
Mode: leader
[root@Node2 zookeeper-02]# docker exec -it zookeeper-02 /bin/bash
root@zookeeper-02:/opt/zookeeper-3.4.13# bash bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg
Mode: follower
[root@Node3 docker-compose]# docker exec -it zookeeper-03 /bin/bash
root@zookeeper-03:/opt/zookeeper-3.4.13# bash bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg
Mode: follower
# One node is the leader and the other two are followers, which shows the ZooKeeper cluster was deployed successfully.
# Here node 1 was elected leader and the remaining nodes are followers.
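As an extra check that does not require a shell inside the container, ZooKeeper 3.4 answers the four-letter-word commands on its client port (2182 in this setup), assuming nc is available on the host:
echo stat | nc 192.168.3.59 2182   # shows Mode: leader/follower plus connection stats
echo ruok | nc 192.168.3.59 2182   # replies "imok" when the server is healthy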
2. Building the ClickHouse cluster
Environment
Deployment method: docker-compose
Hosts (4):
192.168.3.59     shard 01, replica 01
192.168.3.151    shard 01, replica 02
192.168.3.242    shard 02, replica 01
192.168.3.228    shard 02, replica 02
Image used: yandex/clickhouse-server
Directory structure
clickhouse-01/
├── config.xml          # server configuration
├── data
├── docker-compose.yml
├── log
├── metrika.xml         # cluster definition
└── users.xml           # user configuration
Configuration changes:
docker-compose.yml
version: '2'
services:
  clickhouse-server:
    image: yandex/clickhouse-server
    hostname: clickhouse-01
    container_name: clickhouse-01
    restart: always
    network_mode: host
    ports:
      - 8123:8123
      - 9000:9000
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./config.xml:/etc/clickhouse-server/config.xml
      - ./users.xml:/etc/clickhouse-server/users.xml
      - ./metrika.xml:/etc/clickhouse-server/metrika.xml
      - ./data:/var/lib/clickhouse
      - ./log:/var/log/clickhouse-server
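Because config.xml and users.xml are bind-mounted as single files, they must exist on the host before the first start. One way to obtain them (a sketch, using the same paths as above) is to copy the defaults out of the image and then apply the changes described below:
docker run --rm --entrypoint cat yandex/clickhouse-server /etc/clickhouse-server/config.xml > config.xml
docker run --rm --entrypoint cat yandex/clickhouse-server /etc/clickhouse-server/users.xml > users.xml
mkdir -p data log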
config.xml
# Changes to make
<listen_host>0.0.0.0</listen_host>  # listen on all interfaces so other hosts can connect
...
<timezone>Asia/Shanghai</timezone>  # time zone
...
<include_from>/etc/clickhouse-server/metrika.xml</include_from>  # file with the cluster substitutions
<macros incl="macros" optional="true"/>
# optional="true" means ClickHouse does not raise an error if the macros substitution is missing.
...
<interserver_http_host>192.168.3.59</interserver_http_host>  # address used for replica-to-replica communication; set to this host's own IP
...
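For the sections of metrika.xml to take effect, the remote_servers and zookeeper elements in config.xml must carry incl attributes that match the section names used in metrika.xml. In the stock config.xml of this image they usually already read as below; verify this in your copy:
<remote_servers incl="clickhouse_remote_servers" />
<zookeeper incl="zookeeper-servers" optional="true" />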
metrika.xml
<yandex>
    <clickhouse_remote_servers>
        <ck_cluster>
            <!-- shard 1 -->
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>192.168.3.59</host>
                    <port>9000</port>
                    <!-- Note: if a user and password are configured in users.xml, add them here,
                         e.g. <user>test</user><password>123</password> -->
                    <user>user</user>
                    <password>password</password>
                </replica>
                <replica>
                    <host>192.168.3.151</host>
                    <port>9000</port>
                    <user>user</user>
                    <password>password</password>
                </replica>
            </shard>
            <!-- shard 2 -->
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>192.168.3.242</host>
                    <port>9000</port>
                    <user>user</user>
                    <password>password</password>
                </replica>
                <replica>
                    <host>192.168.3.228</host>
                    <port>9000</port>
                    <user>user</user>
                    <password>password</password>
                </replica>
            </shard>
        </ck_cluster>
    </clickhouse_remote_servers>
    <!-- ZooKeeper cluster -->
    <zookeeper-servers>
        <node index="1">
            <host>192.168.3.59</host>
            <port>2182</port>
        </node>
        <node index="2">
            <host>192.168.3.151</host>
            <port>2182</port>
        </node>
        <node index="3">
            <host>192.168.3.242</host>
            <port>2182</port>
        </node>
    </zookeeper-servers>
    <macros>
        <shard>01</shard>
        <!-- Note: set per server in its own configuration file; use that server's own IP address -->
        <replica>192.168.3.59</replica>
    </macros>
    <networks>
        <ip>::</ip>
    </networks>
    <clickhouse_compression>
        <case>
            <min_part_size>10000000000</min_part_size>
            <min_part_size_ratio>0.01</min_part_size_ratio>
            <method>lz4</method>
        </case>
    </clickhouse_compression>
</yandex>
Shard/replica notes:
Per-host changes
The macros section
Set it according to each machine's shard and replica assignment. For example, 192.168.3.242 is the first replica of shard 2, so it is configured as:
<macros>
    <shard>02</shard>                    <!-- shard -->
    <replica>192.168.3.242</replica>     <!-- replica (this host's IP, or e.g. 1 for replica 1) -->
</macros>
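Combining this with the shard/replica assignment listed at the top of this section, the macros for the four hosts are:
<!-- 192.168.3.59 -->
<macros><shard>01</shard><replica>192.168.3.59</replica></macros>
<!-- 192.168.3.151 -->
<macros><shard>01</shard><replica>192.168.3.151</replica></macros>
<!-- 192.168.3.242 -->
<macros><shard>02</shard><replica>192.168.3.242</replica></macros>
<!-- 192.168.3.228 -->
<macros><shard>02</shard><replica>192.168.3.228</replica></macros>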
Start
docker-compose up -d
Check
After everything is configured, run the following query on each machine to verify.
# On the machine holding shard 2, replica 1 it shows:
select * from system.macros
┌─macro───┬─substitution──┐
│ replica │ 192.168.3.242 │
│ shard   │ 02            │
└─────────┴───────────────┘
View cluster information:
select * from system.clusters;
# A cluster with two shards and two replicas
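To exercise replication and sharding end to end, a quick test can be run from any node. The database and table names below (test_db, events, events_all) are illustrative only; adjust the user and password to whatever is defined in users.xml.
docker exec -it clickhouse-01 clickhouse-client -u user --password password

-- replicated table; {shard} and {replica} are filled in from the macros section
CREATE DATABASE IF NOT EXISTS test_db ON CLUSTER ck_cluster;
CREATE TABLE test_db.events ON CLUSTER ck_cluster
(
    id  UInt64,
    msg String
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/events', '{replica}')
ORDER BY id;

-- distributed table spreading writes across both shards
CREATE TABLE test_db.events_all ON CLUSTER ck_cluster AS test_db.events
ENGINE = Distributed(ck_cluster, test_db, events, rand());

INSERT INTO test_db.events_all VALUES (1, 'a'), (2, 'b'), (3, 'c'), (4, 'd');

-- every node should return all four rows through the distributed table
SELECT * FROM test_db.events_all;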