Set the server time zone
1. timedatectl set-timezone Asia/Shanghai
2. timedatectl set-ntp yes
3. timedatectl status
If the local time and time zone shown match your local time and time zone, you are done.
Change the yum repository mirror
1. Aliyun mirror
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
or
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
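Before overwriting, it is worth keeping a copy of the stock repo file so the change can be rolled back; a minimal sketch:
cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak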
2. Tsinghua (TUNA) mirror
# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client. You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
#
[base]
name=CentOS-$releasever - Base
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/os/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
#released updates
[updates]
name=CentOS-$releasever - Updates
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/updates/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/extras/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/centosplus/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
3. yum clean all # clear all of the system's yum caches
yum makecache # rebuild the yum cache
Configure the network service
1. sudo vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
Set ONBOOT=yes
2. service network restart
Two NICs are configured here: one connects to the host and one to the external network. Both ifcfg-enp0s3 and ifcfg-enp0s8 need ONBOOT=yes; changing only one of the two files breaks networking (the problem I hit was losing external access).
sudo vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
Set:
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.56.10
For multiple VMs, only IPADDR needs to change; a full example file is sketched below.
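For reference, a minimal sketch of a complete static-IP ifcfg file (the device name and address are placeholders; installer-generated keys such as UUID can be left as they are):
TYPE=Ethernet
NAME=enp0s3
DEVICE=enp0s3
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.56.10
NETMASK=255.255.255.0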
To clone a VM, right-click it and choose Clone, then make the same changes. After booting, edit /etc/hostname and /etc/hosts.
Install Java
1. sudo yum install java-1.8.0-openjdk* -y
2. vim /etc/profile
export JAVA_HOME=/usr/lib/jvm/java
export PATH=$PATH:$JAVA_HOME/bin
3. source /etc/profile
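To confirm the JDK and the environment variables took effect, a quick check (assuming the yum install above put the JDK under /usr/lib/jvm):
java -version    # should report an openjdk 1.8.0 build
echo $JAVA_HOME  # should print /usr/lib/jvm/java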
Install Docker
1. sudo yum install docker -y
(or, alternatively, via this script:
curl -sSL https://get.daocloud.io/docker | sh
)
2. docker pull centos
3. docker run -it centos /bin/bash
4. Start Docker and enable it at boot
systemctl start docker
systemctl enable docker
Available registry mirrors:
Docker official (China):
https://registry.docker-cn.com
NetEase:
http://hub-mirror.c.163.com
University of Science and Technology of China (USTC):
https://docker.mirrors.ustc.edu.cn
Aliyun:
https://pee6w651.mirror.aliyuncs.com
To configure a mirror: check whether /etc/docker contains a daemon.json; create it if missing, edit it otherwise. For example:
{
  "registry-mirrors": [
    "https://reg-mirror.qiniu.com/",
    "https://hub-mirror.c.163.com/",
    "https://docker.mirrors.ustc.edu.cn/"
  ]
}
vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://pee6w651.mirror.aliyuncs.com"]
}
Restart the Docker service:
[root@localhost etc]# systemctl daemon-reload
[root@localhost etc]# systemctl restart docker
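To confirm the mirror configuration was picked up after the restart, docker info prints the active mirrors:
docker info | grep -A 3 'Registry Mirrors'  # should list the URL(s) from daemon.json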
Install pip
1. Install pip (skip if it is already installed)
yum install epel-release
yum install -y python-pip
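A quick sanity check that pip landed on the PATH:
pip --version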
Install docker-compose
1. Download the docker-compose binary:
sudo curl -L https://get.daocloud.io/docker/compose/releases/download/1.25.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
2. Add execute permission:
sudo chmod +x /usr/local/bin/docker-compose
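Verify the binary works:
docker-compose --version  # should print the installed version, e.g. 1.25.1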
Install Redis
1. docker pull redis
2. docker run -d --name redis -p 6379:6379 redis
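A quick smoke test using the redis-cli bundled in the image:
docker exec -it redis redis-cli ping  # expect: PONG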
Mounting Redis data
1. On the host, create directories for the Redis data and config files: /opt/docker_redis/conf and /opt/docker_redis/data
2. Copy a redis.conf into /opt/docker_redis/conf/
3. Set appendonly to yes
To enable password access: requirepass qweqwe
The bind parameter in redis.conf must be set to the Redis container's IP. A container's IP can be looked up with:
sudo docker inspect <container id (first two characters suffice)> | grep -i add
vi /etc/sysctl.conf
Add the following line:
net.ipv4.ip_forward=1
Restart the network service:
# systemctl restart network
Check whether the change took effect:
# sysctl net.ipv4.ip_forward
If it returns "net.ipv4.ip_forward = 1", it worked.
Then restart Docker:
systemctl restart docker
4. sudo docker run --privileged=true --name redis -p 6379:6379 -v /opt/docker_redis/conf/redis.conf:/usr/local/etc/redis/redis.conf -v /opt/docker_redis/data:/data -d redis
Sometimes the command above fails with a parse error, usually caused by line breaks or tab characters; the properly continued multi-line form works:
sudo docker run --name redis -p 6379:6379 \
  -v /opt/docker_redis/conf/redis.conf:/usr/local/etc/redis/redis.conf \
  -v /opt/docker_redis/data:/data \
  -d redis
For the master node, use:
sudo docker run --restart=always --privileged=true --name redis -p 6379:6379 -v /opt/docker_redis/conf/redis.conf:/usr/local/etc/redis/redis.conf -v /opt/docker_redis/data:/data -d redis
For a slave node (do not mount the data directory on slaves; otherwise info replication keeps reporting the master as master_link_status:down), use:
sudo docker run --name redis -p 6379:6379 -v /opt/docker_redis/conf/redis.conf:/usr/local/etc/redis/redis.conf -d redis --replicaof 192.168.56.10 6379
--privileged=true raises the container's privileges
--restart=always makes the container restart automatically
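To check that replication is up, query info replication on both sides; a sketch (the -a flag is only needed if requirepass was enabled as above):
docker exec -it redis redis-cli -a qweqwe info replication
# the master should report role:master and connected_slaves:1
# the slave should report role:slave and master_link_status:up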
A docker-compose example (multiple instances on one machine; across machines, change redis-master to the master node's IP, and the link to redis-master is not needed):
version: '2'
services:
  redis-master:
    image: redis
    container_name: redis-master
    ports:
      - "7010:6379"
  redis-slave:
    image: redis
    container_name: redis-slave
    ports:
      - "7011:6379"
    command: redis-server --replicaof redis-master 6379
    links:
      - redis-master:redis-master
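With the compose setup, the same check works through the mapped host ports, assuming redis-cli is installed on the host:
redis-cli -p 7011 info replication  # the slave should report role:slave and master_link_status:up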
Installing a MySQL 5.7 master/slave pair with Docker
1. Pull the image
docker pull mysql:5.7
2. Map the config files and data to the host:
docker run -p 3306:3306 --restart=always --privileged=true --name mysql-master -v /opt/docker_data/mysql/log:/var/log/mysql -v /opt/docker_data/mysql/data:/var/lib/mysql -v /opt/docker_data/mysql/conf:/etc/mysql -e MYSQL_ROOT_PASSWORD=123456 -d mysql:5.7
Or use a docker-compose.yml (the config volume must map down to a specific file):
version: '3'
services:
  mysql-master:
    image: mysql:5.7
    container_name: mysql-master
    restart: always
    ports:
      - "3306:3306"
    privileged: true
    environment:
      MYSQL_ROOT_PASSWORD: 123456
      TZ: Asia/Shanghai
    volumes:
      - /opt/docker_data/mysql/log:/var/log/mysql
      - /opt/docker_data/mysql/data:/var/lib/mysql
      - /opt/docker_data/mysql/conf/my.cnf:/etc/mysql/mysql.conf.d/mysqld.cnf
3. Open the port in the firewall:
firewall-cmd --add-port=3306/tcp --permanent
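Note that a --permanent rule only takes effect in the running firewall after a reload:
firewall-cmd --reload
firewall-cmd --list-ports  # 3306/tcp should now be listed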
4. If connecting to MySQL remotely fails with error 2059 (authentication plugin caching_sha2_password cannot be loaded), enter the Docker container and change the root password:
alter user 'root' identified with mysql_native_password by '123456';
5. Create users
Master node my.cnf:
[client]
default-character-set=utf8mb4
[mysql]
default-character-set=utf8mb4
[mysqld]
init_connect='SET collation_connection=utf8_unicode_ci'
init_connect='SET NAMES utf8mb4'
character-set-server=utf8mb4
collation-server=utf8mb4_unicode_ci
skip-character-set-client-handshake
skip-name-resolve
# --- replication-related settings below --- #
server-id=1
# controls how the binlog is flushed to disk: 0 = no control (best performance), 1 = flush on every transaction commit (worst performance, safest)
sync_binlog = 1
# enable the binary log and set its base name
log_bin = mysql-bin
# binlog format; MySQL defaults to statement, mixed is recommended
binlog_format = mixed
# days before binlogs expire and are purged
expire_logs_days = 7
# maximum size of each binlog file
max_binlog_size = 100m
# binlog cache size
binlog_cache_size = 4m
# maximum binlog cache size
max_binlog_cache_size= 512m
# database to replicate
binlog-do-db=db1
# databases to skip; separate multiple names with commas, or repeat the line
binlog-ignore-db=mysql
binlog-ignore-db=sys
binlog-ignore-db=information_schema
binlog-ignore-db=performance_schema
# skip replication errors on the slave
slave-skip-errors = all
6. Restart the MySQL container, then set up the replication user
Create an account used for replication and grant it privileges:
create user 'backup'@'%' identified by '123456';
GRANT REPLICATION SLAVE ON *.* to 'backup'@'%' identified by '123456';
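The master_log_file and master_log_pos values used on the slave in step 7 come from the master's current binlog position, which can be read like this (a sketch using the container name from above):
docker exec -it mysql-master mysql -uroot -p123456 -e 'SHOW MASTER STATUS;'
# note the File and Position columns for the change master command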
Slave node my.cnf:
[client]
default-character-set=utf8mb4
[mysql]
default-character-set=utf8mb4
[mysqld]
init_connect='SET collation_connection=utf8_unicode_ci'
init_connect='SET NAMES utf8mb4'
character-set-server=utf8mb4
collation-server=utf8mb4_unicode_ci
skip-character-set-client-handshake
skip-name-resolve
# server id of the slave node
server-id=2
7. Log in to MySQL in the slave container and point the slave at the master:
mysql> change master to master_host='192.168.221.7',master_user='backup',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=0,master_port=3306;
master_host='192.168.221.7': IP of the master server
master_user='backup': replication user on the master
master_password='123456': password of the backup user on the master
master_port=3306: port of the master server
mysql> start slave;
mysql> show slave status\G
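In the show slave status output, the fields that matter are Slave_IO_Running and Slave_SQL_Running; both must be Yes. A quick filter (a sketch, assuming the slave container is named mysql-slave):
docker exec -it mysql-slave mysql -uroot -p123456 -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_(IO|SQL)_Running'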
8. (Not needed in this setup) If configuring the slave errors with: Slave is not configured or failed to initialize properly. You must at least set --server-id to enable...
use mysql;
drop table slave_master_info;
drop table slave_relay_log_info;
drop table slave_worker_info;
drop table innodb_index_stats;
drop table innodb_table_stats;
source /usr/local/mysql/share/mysql_system_tables.sql
Installing a ZooKeeper cluster with Docker
1. docker pull zookeeper:3.4
2. Create three directories under /opt/docker_data/zookeeper: conf, data, and datalog
3. vim conf/zoo.cfg
dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=5
syncLimit=2
autopurge.snapRetainCount=3
autopurge.purgeInterval=0
maxClientCnxns=60
standaloneEnabled=true
admin.enableServer=false
clientPort=2181
server.1=172.25.10.89:2888:3888
server.2=172.25.11.126:2888:3888
server.3=172.25.10.126:2888:3888
4. On each host, write the digit after its server.N entry into data/myid, e.g. on node 1:
echo 1 > data/myid
5. Start the container:
docker run -d --network=host --privileged=true -v /opt/docker_data/zookeeper/data:/data -v /opt/docker_data/zookeeper/conf:/conf -v /opt/docker_data/zookeeper/datalog:/datalog --name zk1 --restart=always zookeeper:3.4
6. Enter the container:
docker exec -it zk1 bash
7. Start ZooKeeper:
zkServer.sh start
8. Once all three machines have started successfully, run:
zkServer.sh status
to see each node's leader/follower role.
9. Problems encountered:
9.1 After starting the containers, one node could never join the cluster while the other two were fine: that machine had its firewall enabled. Open ports 2181, 2888, and 3888, then restart the container:
firewall-cmd --add-port=3888/tcp --permanent
firewall-cmd --reload
zkServer.sh restart
(Related commands:
systemctl status firewalld # check firewall status
yum install iptables-services
firewall-cmd --list-ports
)
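All three ports can also be opened in one pass; a sketch:
for p in 2181 2888 3888; do firewall-cmd --add-port=${p}/tcp --permanent; done
firewall-cmd --reload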
(Still has issues) docker-compose.yml; adjust container_name and ZOO_MY_ID per node:
version: "3"
services:
zookeeper1:
container_name: zookeeper3
image: zookeeper
ports:
- "2181:2181"
environment:
ZOO_MY_ID: 3
ZOO_SERVERS: server.1=172.25.10.89:2888:3888 server.2=172.25.11.126:2888:3888 server.3=172.25.10.126:2888:3888
restart: always
privileged: true
# network: host
volumes:
- /opt/docker-data/zookeeper/data:/data
- /opt/docker-data/zookeeper/datalog:/datalog
networks:
docker-net: # 这个需要自己创建,或者使用上面的network:host
external: true
Start it with docker-compose up -d.
Installing on the physical machine is similar.
The advantage of installing on the physical machine: clear logs. When I first installed with Docker, the logs showed no error messages, so I assumed everything started fine and kept blaming the config file, wasting a lot of time. After switching to a bare-metal deployment, the log clearly recorded that connections to the node were being refused, which finally pointed to the firewall. (A lot of time was lost in between, so for future deployments I would consider bare metal first.)
Changing the CentOS 7 time zone
1. Check the time and time zone: timedatectl status
2. Remove the local time zone setting: rm /etc/localtime
3. Set the UTC time zone: ln -s /usr/share/zoneinfo/Universal /etc/localtime
4. Verify the time and time zone again: timedatectl status
Deploying a zookeeper + kafka cluster on CentOS 7
1. Create a directory and a docker-compose.yml file in it. Do not use tab characters in the file, or it will fail at run time with a format error:
version: '3'
services:
  zk3: # name it yourself; with three nodes I just use the suffixes 1, 2, 3
    image: zookeeper:3.4
    restart: always
    container_name: zk3 # name it yourself
    ports:
      - "2181:2181"
    network_mode: host # docker network mode, used for network isolation; can be customized
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=172.25.10.89:2888:3888 server.2=172.25.11.126:2888:3888 server.3=172.25.10.126:2888:3888
    privileged: true # give the container root access to the mounted host directories
    volumes:
      - /opt/docker_data/data_translation/zk/data:/data
      - /opt/docker_data/data_translation/zk/datalog:/datalog
  kafka3:
    depends_on:
      - zk3
    image: wurstmeister/kafka:2.13-2.8.1
    restart: always
    container_name: kafka3
    hostname: kafka3
    ports:
      - "9092:9092"
    network_mode: host
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.25.10.126:9092 # or use KAFKA_ADVERTISED_HOST_NAME: kafka1
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: "172.25.10.89:2181,172.25.11.126:2181,172.25.10.126:2181"
    privileged: true
    volumes:
      - "/opt/docker_data/data_translation/kafka/data:/kafka"
A second compose file, for a node that runs only Kafka (kafka1):
version: '3'
services:
  kafka1:
    image: wurstmeister/kafka:2.13-2.8.1
    restart: always
    container_name: kafka1
    hostname: kafka1
    ports:
      - "9092:9092"
    network_mode: host
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.25.10.89:9092 # or use KAFKA_ADVERTISED_HOST_NAME: kafka1
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: "172.25.10.89:2181,172.25.10.126:2181,172.25.11.126:2181"
    privileged: true # with the node's firewall on, the broker may fail with java.nio.file.AccessDeniedException: /kafka/kafka-logs-kafka1/.lock; granting privileged mode resolves it
    volumes:
      - "/opt/docker_data/kafka/data:/kafka"
2. docker-compose up -d
Check the containers' startup status with docker-compose ps or docker ps.
3. Check the logs: docker logs kafka1
4. Verify the cluster state:
kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 3 --topic testhello
--replication-factor 3: I have three nodes here, so the replica count is 3; it is best not to exceed the number of brokers.
--partitions 3: the number of partitions the topic's contents are stored across. A topic is a logical concept in Kafka; a partition is the smallest storage unit and holds a subset of a topic's messages. Each partition is a log file that messages are written to append-only (i.e. sequential writes, which is very efficient).
5. List the topics that have been created; when connecting to the cluster, the zk address here can also be another node's address:
kafka-topics.sh --list --zookeeper localhost:2181
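The replica and partition layout of the test topic can also be inspected, which is a quick way to confirm all three brokers joined:
kafka-topics.sh --describe --zookeeper localhost:2181 --topic testhello
# each partition line shows its Leader broker id plus the Replicas and Isr lists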
6. Simulate a client sending messages:
kafka-console-producer.sh --broker-list localhost:9092,kafka2:9092,kafka3:9092 --topic testhello
7. Simulate a client receiving messages (note: on Kafka 2.x the console consumer no longer accepts --zookeeper, so it has to point at the brokers instead):
kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic testhello
For a single local instance:
docker run -d --name kafka1 -p 9093:9093 -e KAFKA_BROKER_ID=1 -v /opt/docker_data/zk_kafka/kafka/data:/kafka -e KAFKA_ZOOKEEPER_CONNECT=localhost:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9093 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9093 -t wurstmeister/kafka:2.13-2.8.1
Deploying Elasticsearch
1. Write the docker-compose file:
version: '3'
services:
  es1:
    image: elasticsearch:7.7.1
    container_name: es1
    environment:
      - node.name=es1
      - cluster.name=es-docker-cluster
      - network.publish_host=172.25.10.89
      - discovery.seed_hosts=172.25.11.126,172.25.10.126
      - cluster.initial_master_nodes=172.25.10.89,172.25.11.126,172.25.10.126
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /opt/docker_data/elk/es/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /opt/docker_data/elk/es/data:/usr/share/elasticsearch/data
      # - /opt/docker_data/elk/es/logs:/usr/share/elasticsearch/logs
    ports:
      - 9200:9200
      - 9300:9300
    privileged: true
    network_mode: host
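The mounted elasticsearch.yml is not reproduced in these notes; a minimal sketch of what it might contain, given that the cluster settings are already passed via the compose environment (treat these values as assumptions):
network.host: 0.0.0.0
http.port: 9200
# cluster.name, node.name and discovery settings come from the compose environment above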
2. Grant permissions on the mounted directories: chmod 777 -R data and chmod 777 -R logs
3. Raise the kernel's virtual memory limit: vi /etc/sysctl.conf
vm.max_map_count = 262144
4. Load the setting: sysctl -p
5. Start: docker-compose up -d
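Once the nodes are up, cluster health can be checked from any node:
curl http://localhost:9200/_cluster/health?pretty  # expect status green and number_of_nodes: 3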
Deploying Kibana
1. docker-compose.yml
version: '3'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:7.7.1
    container_name: kibana
    restart: always
    environment:
      - I18N_LOCALE=zh-CN
    ports:
      - "5601:5601"
    privileged: true
    volumes:
      - /opt/docker_data/elk/kibana/conf/kibana.yml:/usr/share/kibana/config/kibana.yml
    links:
      - es1:elasticsearch
    depends_on:
      - es1
2. kibana.yml配置文件
# kibana碌脛梅脴路 0.0.0.0驴杀铆录脿脣脫IP
server.host: "0.0.0.0"
# kibana路脙脢s碌脛RL
elasticsearch.hosts: ["http://172.25.10.89:9200","http://172.25.11.126:9200","http://172.25.10.126:9200"]
elasticsearch.username: 'kibana'
elasticsearch.password: '123456'
# 脧示碌脟陆页脙
xpack.monitoring.ui.container.elasticsearch.enabled: true
3. Start with docker-compose up -d
4. Three ES nodes are linked here; if one of them is not running, Kibana will fail at startup.
Deploying Logstash
1. The docker-compose.yml file:
version: '3'
services:
  logstash:
    image: logstash:7.7.1
    container_name: logstash1
    ports:
      - 5000:5000
    volumes:
      - type: bind
        source: /opt/docker_data/elk/logstash/pipeline/
        target: /usr/share/logstash/pipeline
        read_only: true
    network_mode: host
2. Create ports.conf in the pipeline directory:
input {
  tcp {
    port => 5000
  }
}
output {
  elasticsearch {
    hosts => ["172.25.10.89:9200","172.25.11.126:9200","172.25.10.126:9200"]
    index => "hello-logstash-docker"
  }
}
This is a deliberately simple setup: messages arrive over TCP port 5000 with no filter stage, just to verify connectivity. Three ES nodes are listed here; if any of them is not running, startup will fail here as well.
3. Start with docker-compose up -d
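To verify the pipeline end to end, push a line into port 5000 and check that the index shows up in ES (a sketch, assuming nc is installed):
echo 'hello logstash' | nc localhost 5000
curl 'http://172.25.10.89:9200/hello-logstash-docker/_search?pretty'  # the message should appear as a document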