ClickHouse Multi-Instance Deployment

This article draws on http://fuxkdb.com/2020/05/02/2020-05-02-ClickHouse多实例部署/
Many thanks to the original author!
Multi-instance deployment is not recommended in production: a single ClickHouse query can already use multiple CPU cores, so instances on the same host would compete for resources. This setup is only suitable for test environments.

The cluster deployment layout is as follows:

(figure: cluster deployment diagram, omitted)

The logical structure is as follows:

(figure: logical structure diagram, omitted)

Edit /etc/hosts on all three hosts and add the following entries:

172.16.120.10 centos-1
172.16.120.11 centos-2
172.16.120.12 centos-3

Installing Dependencies

JDK

Download OpenJDK:
wget https://github.com/AdoptOpenJDK/openjdk8-binaries/releases/download/jdk8u242-b08/OpenJDK8U-jdk_x64_linux_hotspot_8u242b08.tar.gz
tar -zxvf OpenJDK8U-jdk_x64_linux_hotspot_8u242b08.tar.gz -C /usr/local/
Create a symlink:
ln -s /usr/local/jdk8u242-b08 /usr/local/java
Configure environment variables:
#vi ~/.bashrc
 
export JAVA_HOME=/usr/local/java
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
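The exports above go into ~/.bashrc. A small sketch of applying them idempotently, so re-running the setup does not duplicate lines (the `append_once` helper is illustrative, not part of the original article):

```shell
#!/usr/bin/env bash
# Append a line to a file only if an identical line is not already present.
append_once() {
  local line="$1" file="$2"
  grep -qxF -- "$line" "$file" 2>/dev/null || printf '%s\n' "$line" >> "$file"
}

append_once 'export JAVA_HOME=/usr/local/java' "$HOME/.bashrc"
append_once 'export PATH=${JAVA_HOME}/bin:$PATH' "$HOME/.bashrc"
```

After editing, apply with `source ~/.bashrc` and verify with `java -version`.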

ZooKeeper

ZooKeeper 3.6.0 had known bugs at the time of writing, so the stable 3.4.14 release is used instead.

Download the package:
wget https://downloads.apache.org/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz
tar -zxvf zookeeper-3.4.14.tar.gz -C /usr/local/
Create a symlink:
ln -s /usr/local/zookeeper-3.4.14 /usr/local/zookeeper
Configure environment variables:
#vi ~/.bashrc
 
export ZOOKEEPER_HOME=/usr/local/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin
Edit the configuration file:
cd /usr/local/zookeeper/conf
 
 
#Based on the official recommendations:
#https://clickhouse.tech/docs/en/operations/tips/#zookeeper
#vim zoo.cfg
 
 
tickTime=2000
initLimit=30000
syncLimit=10
maxClientCnxns=2000
maxSessionTimeout=60000000
dataDir=/data/zookeeper/data
dataLogDir=/data/zookeeper/logs
autopurge.snapRetainCount=10
autopurge.purgeInterval=1
preAllocSize=131072
snapCount=3000000
leaderServes=yes
clientPort=2181
 
 
 
The cluster membership section of zoo.cfg differs on each of the three nodes:
# centos-1
server.1=0.0.0.0:2888:3888
server.2=172.16.120.11:2888:3888
server.3=172.16.120.12:2888:3888
  
  
# centos-2
server.1=172.16.120.10:2888:3888
server.2=0.0.0.0:2888:3888
server.3=172.16.120.12:2888:3888
  
  
# centos-3
server.1=172.16.120.10:2888:3888
server.2=172.16.120.11:2888:3888
server.3=0.0.0.0:2888:3888
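The three variants above differ only in which entry is bound to 0.0.0.0 (each node listens on all interfaces for its own id). This can be sketched as a small generator, so the same script produces the correct section on every node (the `gen_zoo_servers` helper is illustrative, not from the original article):

```shell
#!/usr/bin/env bash
# Emit the server.N lines for zoo.cfg, substituting 0.0.0.0 for this node's own IP.
gen_zoo_servers() {
  local my_ip="$1"; shift
  local id=1 ip
  for ip in "$@"; do
    if [ "$ip" = "$my_ip" ]; then
      echo "server.${id}=0.0.0.0:2888:3888"
    else
      echo "server.${id}=${ip}:2888:3888"
    fi
    id=$((id + 1))
  done
}

# Example: generate the section for centos-2 (second IP in the list)
gen_zoo_servers 172.16.120.11 172.16.120.10 172.16.120.11 172.16.120.12
```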
Create the data directories:
mkdir -p /data/zookeeper/{data,logs}
mkdir -p /usr/local/zookeeper/logs
Set each node's myid:
# centos-1
echo "1">/data/zookeeper/data/myid
  
# centos-2
echo "2">/data/zookeeper/data/myid
  
# centos-3
echo "3">/data/zookeeper/data/myid
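Since the hostnames follow the centos-N pattern, the myid can also be derived from the hostname instead of being typed by hand on each node; a hypothetical sketch (not from the original article):

```shell
#!/usr/bin/env bash
# Extract the trailing number from a "centos-N" style hostname for use as the ZooKeeper myid.
myid_from_host() {
  printf '%s\n' "${1##*-}"
}

# On each node one could then run (path from the article's zoo.cfg):
#   myid_from_host "$(hostname)" > /data/zookeeper/data/myid
myid_from_host centos-2
```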
Configuring ZooKeeper logging

By default, ZooKeeper writes its log to a single file and never cleans it up, so the log grows very large over time.

1. zookeeper-env.sh: create a new zookeeper-env.sh file under ./conf and set its permissions with sudo chmod 755 zookeeper-env.sh.
#cat conf/zookeeper-env.sh
 
#!/usr/bin/env bash
#tip: custom configuration file, do not amend the zkEnv.sh file
#change the log dir and switch output to a rolling file
 
ZOO_LOG_DIR="/usr/local/zookeeper/logs"
ZOO_LOG4J_PROP="INFO,ROLLINGFILE"
2. log4j.properties: change the log output format:
zookeeper.root.logger=INFO, ROLLINGFILE
#zookeeper.root.logger=INFO, CONSOLE
 
 
# Max log file size of 128MB
log4j.appender.ROLLINGFILE.MaxFileSize=128MB
# uncomment the next line to limit number of backup files
log4j.appender.ROLLINGFILE.MaxBackupIndex=10
Configuring the ZooKeeper JVM

Create a new java.env file under ./conf and set its permissions with sudo chmod 755 java.env; it is mainly used to configure GC logging, heap size, and the like.

#!/usr/bin/env bash
#configure the JVM parameters to reasonable values
#note: this file is sourced, so export is not needed
#set the java classpath
#CLASSPATH=""
#set the JVM start parameters; the JVMFLAGS variable can also be set
SERVER_JVMFLAGS="-Xms1024m -Xmx2048m $JVMFLAGS"
Start the ZooKeeper service (on all nodes):
# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
Verify ZooKeeper:
# zkServer.sh status
 
#bin/zkCli.sh -server 127.0.0.1:2181
Connecting to 127.0.0.1:2181
 
 
[zk: 127.0.0.1:2181(CONNECTED) 0]
[zk: 127.0.0.1:2181(CONNECTED) 0] ls /
[zookeeper]
 
[zk: 127.0.0.1:2181(CONNECTED) 1] create /zk_test mydata
Created /zk_test
 
[zk: 127.0.0.1:2181(CONNECTED) 2] ls /
[zk_test, zookeeper]
 
[zk: 127.0.0.1:2181(CONNECTED) 3] get /zk_test
mydata
 
[zk: 127.0.0.1:2181(CONNECTED) 4] set /zk_test junk
[zk: 127.0.0.1:2181(CONNECTED) 5] get /zk_test
junk
 
[zk: 127.0.0.1:2181(CONNECTED) 6] delete /zk_test
[zk: 127.0.0.1:2181(CONNECTED) 7] ls /
[zookeeper]
 
[zk: 127.0.0.1:2181(CONNECTED) 8]

ClickHouse Installation

Installing with yum

yum install yum-utils
rpm --import https://repo.clickhouse.tech/CLICKHOUSE-KEY.GPG
yum-config-manager --add-repo https://repo.clickhouse.tech/rpm/stable/x86_64
 
 
yum install clickhouse-server clickhouse-client

Offline installation from RPM packages

Browse the ClickHouse source tags to find the most recent long-term support (LTS) release.

Download the following packages from the official site:
https://repo.yandex.ru/clickhouse/rpm/stable/x86_64/clickhouse-client-21.8.12.29-2.noarch.rpm 
https://repo.yandex.ru/clickhouse/rpm/stable/x86_64/clickhouse-common-static-21.8.12.29-2.x86_64.rpm
https://repo.yandex.ru/clickhouse/rpm/stable/x86_64/clickhouse-common-static-dbg-21.8.12.29-2.x86_64.rpm
https://repo.yandex.ru/clickhouse/rpm/stable/x86_64/clickhouse-server-21.8.12.29-2.noarch.rpm
https://repo.yandex.ru/clickhouse/rpm/stable/x86_64/clickhouse-test-21.8.12.29-2.noarch.rpm
Install the ClickHouse packages:
rpm -ivh /soft/clickhouse/*.rpm

Create the directories

On centos-1:
mkdir -p /data/clickhouse/{node1,node4}/{data,tmp,logs}
  
On centos-2:
mkdir -p /data/clickhouse/{node2,node5}/{data,tmp,logs}
  
On centos-3:
mkdir -p /data/clickhouse/{node3,node6}/{data,tmp,logs}
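Each mkdir -p call above uses bash brace expansion to create all six directories for a host's two instances in one command. A quick sketch of what it expands to, using a temporary directory for illustration:

```shell
#!/usr/bin/env bash
# Demonstrate the brace expansion used above in a throwaway location.
base=$(mktemp -d)
mkdir -p "$base"/clickhouse/{node1,node4}/{data,tmp,logs}

# List what was created, relative to the temp dir (six leaf directories).
( cd "$base" && find clickhouse -mindepth 2 -type d | sort )
```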

Create the configuration files

Create the ClickHouse server config, the user accounts file, and the sharding config.

The configuration files for node1 are as follows.

config.xml
<?xml version="1.0"?>
<yandex>
    <!-- logging -->
    <logger>
        <level>warning</level>
        <log>/data/clickhouse/node1/logs/clickhouse.log</log>
        <errorlog>/data/clickhouse/node1/logs/error.log</errorlog>
        <size>500M</size>
        <count>5</count>
    </logger>
    <!-- local node info -->
    <http_port>8123</http_port>
    <tcp_port>9000</tcp_port>
    <interserver_http_port>9009</interserver_http_port>
    <interserver_http_host>centos-1</interserver_http_host>
    <!-- this host's domain name or IP -->
    <!-- local settings -->
    <listen_host>0.0.0.0</listen_host>
    <max_connections>2048</max_connections>
    <keep_alive_timeout>3</keep_alive_timeout>
    <max_concurrent_queries>64</max_concurrent_queries>
    <uncompressed_cache_size>4294967296</uncompressed_cache_size>
    <mark_cache_size>5368709120</mark_cache_size>
    <path>/data/clickhouse/node1/</path>
    <tmp_path>/data/clickhouse/node1/tmp/</tmp_path>
    <users_config>/data/clickhouse/node1/users.xml</users_config>
    <access_control_path>/data/clickhouse/node1/access/</access_control_path>
    <default_profile>default</default_profile>
    <query_log>
        <database>system</database>
        <table>query_log</table>
        <partition_by>toMonday(event_date)</partition_by>
        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    </query_log>
    <query_thread_log>
        <database>system</database>
        <table>query_thread_log</table>
        <partition_by>toMonday(event_date)</partition_by>
        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    </query_thread_log>
    <prometheus>
        <endpoint>/metrics</endpoint>
        <port>8001</port>
        <metrics>true</metrics>
        <events>true</events>
        <asynchronous_metrics>true</asynchronous_metrics>
    </prometheus>
    <default_database>default</default_database>
    <timezone>Asia/Shanghai</timezone>
    <!-- cluster-related settings -->
    <remote_servers incl="clickhouse_remote_servers" />
    <zookeeper incl="zookeeper-servers" optional="true" />
    <macros incl="macros" optional="true" />
    <builtin_dictionaries_reload_interval>3600</builtin_dictionaries_reload_interval>
    <max_session_timeout>3600</max_session_timeout>
    <default_session_timeout>300</default_session_timeout>
    <max_table_size_to_drop>0</max_table_size_to_drop>
    <merge_tree>
        <parts_to_delay_insert>300</parts_to_delay_insert>
        <parts_to_throw_insert>600</parts_to_throw_insert>
        <max_delay_to_insert>2</max_delay_to_insert>
    </merge_tree>
    <max_partition_size_to_drop>0</max_partition_size_to_drop>
    <distributed_ddl>
        <!-- Path in ZooKeeper to queue with DDL queries -->
        <path>/clickhouse/task_queue/ddl</path>
    </distributed_ddl>
    <include_from>/data/clickhouse/node1/metrika.xml</include_from>
</yandex>
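node4, the second instance on centos-1, needs its own copy of config.xml with different ports and paths so the two instances do not collide. The original article does not show node4's file; the sketch below lists only plausible deltas. The TCP port 9002 matches what metrika.xml expects for the second instances, while the HTTP and interserver ports (8124, 9010) are assumptions:

```xml
<!-- Sketch of node4's config.xml deltas relative to node1 (second instance on centos-1).
     tcp_port 9002 matches metrika.xml; http/interserver ports are assumed values. -->
<logger>
    <log>/data/clickhouse/node4/logs/clickhouse.log</log>
    <errorlog>/data/clickhouse/node4/logs/error.log</errorlog>
</logger>
<http_port>8124</http_port>
<tcp_port>9002</tcp_port>
<interserver_http_port>9010</interserver_http_port>
<path>/data/clickhouse/node4/</path>
<tmp_path>/data/clickhouse/node4/tmp/</tmp_path>
<users_config>/data/clickhouse/node4/users.xml</users_config>
<access_control_path>/data/clickhouse/node4/access/</access_control_path>
<include_from>/data/clickhouse/node4/metrika.xml</include_from>
```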

users.xml
<?xml version="1.0"?>
<yandex>
    <profiles>
        <default>
            <!-- adjust to your machine's actual memory -->
            <max_memory_usage>54975581388</max_memory_usage>
            <max_memory_usage_for_all_queries>61847529062</max_memory_usage_for_all_queries>
            <max_bytes_before_external_group_by>21474836480</max_bytes_before_external_group_by>
            <max_bytes_before_external_sort>21474836480</max_bytes_before_external_sort>
            <use_uncompressed_cache>0</use_uncompressed_cache>
            <load_balancing>random</load_balancing>
            <distributed_aggregation_memory_efficient>1</distributed_aggregation_memory_efficient>
            <max_threads>8</max_threads>
            <log_queries>1</log_queries>
        </default>
        <readonly>
            <max_threads>8</max_threads>
            <max_memory_usage>54975581388</max_memory_usage>
            <max_memory_usage_for_all_queries>61847529062</max_memory_usage_for_all_queries>
            <max_bytes_before_external_group_by>21474836480</max_bytes_before_external_group_by>
            <max_bytes_before_external_sort>21474836480</max_bytes_before_external_sort>
            <use_uncompressed_cache>0</use_uncompressed_cache>
            <load_balancing>random</load_balancing>
            <readonly>1</readonly>
            <distributed_aggregation_memory_efficient>1</distributed_aggregation_memory_efficient>
            <log_queries>1</log_queries>
        </readonly>
    </profiles>
    <quotas>
        <default>
            <interval>
                <duration>3600</duration>
                <queries>0</queries>
                <errors>0</errors>
                <result_rows>0</result_rows>
                <read_rows>0</read_rows>
                <execution_time>0</execution_time>
            </interval>
        </default>
    </quotas>
    <users>
        <default>
            <password_sha256_hex>328afba155145b9ab4dadc378a38f830107b1c48ae64a37690b163408b47c52c</password_sha256_hex>
            <access_management>1</access_management>
            <networks>
                <ip>::/0</ip>
            </networks>
            <profile>default</profile>
            <quota>default</quota>
        </default>
        <ch_ro>
            <password_sha256_hex>328afba155145b9ab4dadc378a38f830107b1c48ae64a37690b163408b47c52c</password_sha256_hex>
            <networks>
                <ip>::/0</ip>
            </networks>
            <profile>readonly</profile>
            <quota>default</quota>
        </ch_ro>
    </users>
</yandex>
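The password_sha256_hex values above are SHA-256 digests of the plaintext password. A digest for a new password can be generated like this (the `hash_pw` helper name is illustrative):

```shell
#!/usr/bin/env bash
# Print the SHA-256 hex digest of a password, suitable for password_sha256_hex.
hash_pw() {
  printf '%s' "$1" | sha256sum | awk '{print $1}'
}

hash_pw 'MyNewPassword'
```

Note the -n-less printf: the digest must be of the password bytes only, with no trailing newline.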

metrika.xml
<?xml version="1.0"?>
<yandex>
    <!-- ClickHouse cluster nodes -->
    <clickhouse_remote_servers>
        <ch_cluster_all>
            <!-- shard 1 -->
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>centos-1</host>
                    <port>9000</port>
                    <user>default</user>
                    <password>t5P9rh5M</password>
                </replica>
                <!-- replica set 1 -->
                <replica>
                    <host>centos-3</host>
                    <port>9002</port>
                    <user>default</user>
                    <password>t5P9rh5M</password>
                </replica>
            </shard>
            <!-- shard 2 -->
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>centos-2</host>
                    <port>9000</port>
                    <user>default</user>
                    <password>t5P9rh5M</password>
                </replica>
                <!-- replica set 2 -->
                <replica>
                    <host>centos-1</host>
                    <port>9002</port>
                    <user>default</user>
                    <password>t5P9rh5M</password>
                </replica>
            </shard>
            <!-- shard 3 -->
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>centos-3</host>
                    <port>9000</port>
                    <user>default</user>
                    <password>t5P9rh5M</password>
                </replica>
                <!-- replica set 3 -->
                <replica>
                    <host>centos-2</host>
                    <port>9002</port>
                    <user>default</user>
                    <password>t5P9rh5M</password>
                </replica>
            </shard>
        </ch_cluster_all>
    </clickhouse_remote_servers>
    <!-- ZooKeeper settings -->
    <zookeeper-servers>
        <node index="1">
            <host>centos-1</host>
            <port>2181</port>
        </node>
        <node index="2">
            <host>centos-2</host>
            <port>2181</port>
        </node>
        <node index="3">
            <host>centos-3</host>
            <port>2181</port>
        </node>
    </zookeeper-servers>
</yandex>
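config.xml references a macros section (`<macros incl="macros" optional="true" />`), which the truncated original does not show. With ReplicatedMergeTree tables, each instance typically declares its shard and replica identity there; a plausible sketch for node1 (the shard/replica values are assumptions based on the shard layout above, not from the original article):

```xml
<!-- Hypothetical macros section for node1's metrika.xml; values assumed from the shard layout -->
<macros>
    <shard>1</shard>
    <replica>centos-1-node1</replica>
</macros>
```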