Case Study: Highly Available Deployment of a Private Harbor Registry
1. Node Planning
The node plan is shown in Table 1.
Table 1 Node plan
IP | Hostname | Components
---|---|---
192.168.100.3 | harbor1 | Harbor, Docker, docker-compose, Nginx, Keepalived, NFS
192.168.100.4 | harbor2 | Harbor, Docker, docker-compose, Nginx, Keepalived, NFS
192.168.100.5 | harbor-data | NFS, PostgreSQL, Redis
192.168.100.100 | / | Load-balancer VIP
2. Basic Preparation
Set the hostnames
[root@harbor1 ~]# hostnamectl set-hostname harbor1
[root@harbor1 ~]# bash
[root@harbor2 ~]# hostnamectl set-hostname harbor2
[root@harbor2 ~]# bash
[root@harbor-data ~]# hostnamectl set-hostname harbor-data
[root@harbor-data ~]# bash
Configure /etc/hosts name resolution
[root@harbor1 ~]# vi /etc/hosts
192.168.100.3 harbor1
192.168.100.4 harbor2
192.168.100.5 harbor-data
192.168.100.100 harbor.gxl.com
Configure passwordless SSH
[root@harbor1 ~]# ssh-keygen
[root@harbor1 ~]# ssh-copy-id harbor1
[root@harbor1 ~]# ssh-copy-id harbor2
[root@harbor1 ~]# ssh-copy-id harbor-data
Disable the firewall and SELinux
[root@harbor1 ~]# systemctl disable firewalld --now && setenforce 0
Permanently disable SELinux
[root@harbor1 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
Configure the Alibaba Cloud YUM repositories
[root@harbor1 ~]# rm -rf /etc/yum.repos.d/CentOS-*
[root@harbor1 ~]# curl -o /etc/yum.repos.d/centos.repo https://mirrors.aliyun.com/repo/Centos-7.repo
[root@harbor1 ~]# curl -o /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo
[root@harbor1 ~]# curl -o /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Install common tools
[root@harbor1 ~]# yum install -y vim net-tools lrzsz lsof bash-com*
Case Implementation
1. Deploy the Docker Environment
(1) Install dependencies
[root@harbor1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
(2) Install docker-ce
[root@harbor1 ~]# yum install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
Set Docker to start on boot
[root@harbor1 ~]# systemctl enable docker --now
Check the Docker version
[root@harbor1 ~]# docker version
Client: Docker Engine - Community
Version: 26.1.4
API version: 1.45
Go version: go1.21.11
Git commit: 5650f9b
Built: Wed Jun 5 11:32:04 2024
OS/Arch: linux/amd64
Context: default
Server: Docker Engine - Community
Engine:
Version: 26.1.4
API version: 1.45 (minimum version 1.24)
Go version: go1.21.11
Git commit: de5c9cf
Built: Wed Jun 5 11:31:02 2024
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.33
GitCommit: d2d58213f83a351ca8f528a95fbd145f5654e957
runc:
Version: 1.1.12
GitCommit: v1.1.12-0-g51d5e94
docker-init:
Version: 0.19.0
GitCommit: de40ad0
Configure Docker registry mirrors
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json << 'EOF'
{
"insecure-registries": ["harbor.gxl.com"],
"registry-mirrors": [
"https://docker.aityp.com",
"https://docker.m.daocloud.io",
"https://reg-mirror.qiniu.com",
"https://k8s.m.daocloud.io",
"https://elastic.m.daocloud.io",
"https://gcr.m.daocloud.io",
"https://ghcr.m.daocloud.io",
"https://k8s-gcr.m.daocloud.io",
"https://mcr.m.daocloud.io",
"https://nvcr.m.daocloud.io",
"https://quay.m.daocloud.io",
"https://jujucharms.m.daocloud.io",
"https://rocks-canonical.m.daocloud.io",
"https://d3p1s1ji.mirror.aliyuncs.com"
],
"exec-opts": [
"native.cgroupdriver=systemd"
],
"max-concurrent-downloads": 10,
"max-concurrent-uploads": 5,
"log-opts": {
"max-size": "300m",
"max-file": "2"
},
"live-restore": true,
"log-level": "debug"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
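A syntax error in /etc/docker/daemon.json prevents dockerd from starting at all, so it is worth validating the file before the restart. A minimal sketch (the `validate_daemon_json` helper is hypothetical and assumes `python3` is installed):

```shell
# Validate a Docker daemon.json before restarting the daemon.
# Prints "valid" if the file parses as JSON, "invalid" otherwise.
validate_daemon_json() {
    if python3 -m json.tool "$1" > /dev/null 2>&1; then
        echo "valid"
    else
        echo "invalid"
    fi
}
```

Run `validate_daemon_json /etc/docker/daemon.json` and only proceed with `systemctl restart docker` when it prints "valid".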
(3) Install docker-compose
Starting with Docker 20.10.0, Docker Compose is integrated into the Docker CLI (installed above as the docker-compose-plugin package).
If your Docker version is too old, install the standalone docker-compose binary as follows:
[root@harbor1 ~]# wget https://github.com/docker/compose/releases/download/v2.10.2/docker-compose-linux-x86_64
[root@harbor1 ~]# mv docker-compose-linux-x86_64 /usr/local/bin/docker-compose
[root@harbor1 ~]# chmod +x /usr/local/bin/docker-compose
Check the docker-compose version
[root@harbor1 ~]# docker-compose version
Docker Compose version v2.10.2
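If hosts with different CPU architectures are involved, the correct release asset can be picked programmatically from `uname -m`. A small sketch (the `compose_asset` helper is hypothetical; the asset names follow the docker/compose GitHub release naming scheme):

```shell
# Map a machine architecture (as printed by `uname -m`) to the matching
# docker-compose release asset name.
compose_asset() {
    case "$1" in
        x86_64)  echo "docker-compose-linux-x86_64" ;;
        aarch64) echo "docker-compose-linux-aarch64" ;;
        *)       echo "unsupported" ;;
    esac
}
```

Usage: `wget "https://github.com/docker/compose/releases/download/v2.10.2/$(compose_asset "$(uname -m)")"`.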
(4) Tune kernel parameters
[root@harbor1 ~]# cat > /etc/sysctl.d/harbor.conf << EOF
# Pass bridged IPv6 traffic to ip6tables so netfilter rules apply to it.
# Commonly needed in container environments, such as Docker, that bridge IPv6 traffic.
net.bridge.bridge-nf-call-ip6tables = 1
# Pass bridged IPv4 traffic to iptables so netfilter rules apply to it.
# This enables packet filtering with iptables on bridged networks.
net.bridge.bridge-nf-call-iptables = 1
# Enable IP forwarding.
# This lets the system forward network traffic like a router.
net.ipv4.ip_forward = 1
EOF
Load the configuration file
[root@harbor1 ~]# sysctl -p /etc/sysctl.d/harbor.conf
Load the br_netfilter module
# The bridge settings above require the br_netfilter module so that packets crossing bridge devices are processed by iptables.
[root@harbor1 ~]# modprobe br_netfilter && lsmod | grep br_netfilter
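The kernel settings above can also be verified against the live values in /proc/sys rather than just the config file. A minimal sketch (the `check_params` helper and its `PROC_ROOT` override are illustrative assumptions, not part of the deployment):

```shell
# Verify that each kernel parameter required by the bridge/forwarding
# setup reads back as 1 from /proc/sys (the live values).
# PROC_ROOT is overridable so the logic can be tested against a fake tree.
check_params() {
    local root="${PROC_ROOT:-/proc/sys}" rc=0 key path
    for key in net/bridge/bridge-nf-call-iptables \
               net/bridge/bridge-nf-call-ip6tables \
               net/ipv4/ip_forward; do
        path="$root/$key"
        if [ "$(cat "$path" 2>/dev/null)" != "1" ]; then
            echo "FAIL: $key"
            rc=1
        fi
    done
    [ $rc -eq 0 ] && echo "all kernel parameters OK"
    return $rc
}
```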
2. Deploy the NFS Shared Storage
(1) Install the NFS service
[root@harbor-data ~]# yum install -y nfs-utils rpcbind
Set NFS to start on boot
[root@harbor-data ~]# systemctl enable nfs-server rpcbind --now
(2) Create the shared directory
[root@harbor-data ~]# mkdir -p /data/harbor-data
(3) Configure the NFS export on the server
# This is configured on the NFS server only
[root@harbor-data ~]# vim /etc/exports
/data/harbor-data/ 192.168.100.0/24(rw,sync,no_root_squash,no_all_squash)
Restart the NFS service to apply the configuration
[root@harbor-data ~]# systemctl restart nfs-server
[root@harbor-data ~]# exportfs -r
Check the exported directory
[root@harbor-data ~]# showmount -e
Export list for harbor-data:
/data/harbor-data 192.168.100.0/24
(4) Configure the NFS mount on the clients
[root@harbor1 ~]# vim /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri Aug 23 05:31:14 2024
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=5f416d26-0747-463d-ae8a-a079c130882b /boot xfs defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
192.168.100.5:/data/harbor-data /data/harbor-data nfs defaults 0 0
Mount the NFS filesystem
[root@harbor1 ~]# mount -a
Verify that the mount succeeded
[root@harbor1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 2.0G 12M 2.0G 1% /run
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/mapper/centos-root 46G 2.1G 44G 5% /
/dev/sda1 1014M 138M 877M 14% /boot
tmpfs 394M 0 394M 0% /run/user/0
192.168.100.5:/data/harbor-data 46G 2.0G 44G 5% /data/harbor-data
(5) Verify the shared storage
Create a test file
[root@harbor1 ~]# touch /data/harbor-data/nfs.txt
Verify file integrity and consistency with MD5 checksums
# Check on the client
[root@harbor1 ~]# md5sum /data/harbor-data/nfs.txt
d41d8cd98f00b204e9800998ecf8427e /data/harbor-data/nfs.txt
# Check on the server
[root@harbor-data ~]# md5sum /data/harbor-data/nfs.txt
d41d8cd98f00b204e9800998ecf8427e /data/harbor-data/nfs.txt
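The two-sided checksum comparison can be wrapped in a small function, e.g. for use in a periodic consistency check. A sketch (the `md5_match` helper is an illustrative assumption):

```shell
# Compare the MD5 checksums of a file as seen from two paths
# (e.g. the NFS client mount and the server's local directory).
# Prints "match" when both checksums exist and agree, else "mismatch".
md5_match() {
    local a b
    a=$(md5sum "$1" 2>/dev/null | awk '{print $1}')
    b=$(md5sum "$2" 2>/dev/null | awk '{print $1}')
    if [ -n "$a" ] && [ "$a" = "$b" ]; then echo "match"; else echo "mismatch"; fi
}
```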
3. Deploy the Redis Cache Service
Deploy Redis on harbor-data to provide an external cache service for the harbor1 and harbor2 instances.
(1) Install Redis
[root@harbor-data ~]# yum install -y redis
Set Redis to start on boot
[root@harbor-data ~]# systemctl enable redis --now
(2) Edit the Redis configuration file
[root@harbor-data ~]# vim /etc/redis.conf
...
# Line 61: allow connections from any host
bind 127.0.0.1
# Change to bind 0.0.0.0, or comment the line out: #bind 127.0.0.1
...
# Line 128: let Redis run as a daemon
daemonize no
# Change to daemonize yes
...
# Line 480: set the password for Redis connections (moshang)
# requirepass foobared
# Change to requirepass moshang
Restart Redis to apply the changes
[root@harbor-data ~]# systemctl restart redis
(3)客户端连接Redis
由于客户端可以 redis-cli 命令可以通过两种方式获取
第一种:yum安装redis
[root@harbor1 ~]# yum install -y redis
第二种:scp远程复制
[root@harbor-data ~]# scp /usr/bin/redis-cli 192.168.100.3:/usr/bin/
连接测试通信
[root@harbor1 ~]# redis-cli -h 192.168.100.5 -p 6379 -a moshang
192.168.100.5:6379> exit
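In scripted deployments it helps to wait until the external Redis (or PostgreSQL) service accepts connections before starting Harbor. A generic retry sketch (the `wait_for` helper is an illustrative assumption):

```shell
# Retry a command until it succeeds or the attempt limit is reached.
# wait_for ATTEMPTS DELAY CMD... -> returns 0 on success, 1 on exhaustion.
wait_for() {
    local attempts=$1 delay=$2 i
    shift 2
    for i in $(seq 1 "$attempts"); do
        "$@" > /dev/null 2>&1 && return 0
        sleep "$delay"
    done
    return 1
}
```

Example: `wait_for 10 3 redis-cli -h 192.168.100.5 -a moshang ping`.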
4. Deploy the External PostgreSQL Service
(1) Install PostgreSQL
Add the Alibaba Cloud YUM repository
[root@harbor-data ~]# rpm -Uvh https://mirrors.aliyun.com/postgresql/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
[root@harbor-data ~]# sed -i "s@https://download.postgresql.org/pub@https://mirrors.aliyun.com/postgresql@g" /etc/yum.repos.d/pgdg-redhat-all.repo
Check the repository status
[root@harbor-data ~]# yum repolist all | grep enable
Install postgresql15-server
[root@harbor-data ~]# yum -y install postgresql15-server
Initialize the database
[root@harbor-data ~]# postgresql-15-setup initdb
Initializing database ... OK
Set PostgreSQL to start on boot
[root@harbor-data ~]# systemctl enable postgresql-15 --now
(2) Set the postgres database password
[root@harbor-data ~]# su - postgres
Last login: Tue Oct 22 17:59:24 CST 2024 on pts/0
-bash-4.2$ psql
psql (15.8)
Type "help" for help.
postgres=# \password
Enter new password for user "postgres":
Enter it again:
postgres=# \q
(3) Allow remote logins to PostgreSQL
Edit the configuration file
[root@harbor-data ~]# vim /var/lib/pgsql/15/data/postgresql.conf
...
# Line 60: listen on all addresses
#listen_addresses = 'localhost' # what IP address(es) to listen on;
listen_addresses = '*' # what IP address(es) to listen on;
Edit pg_hba.conf to allow client authentication
[root@harbor-data ~]# vim /var/lib/pgsql/15/data/pg_hba.conf
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all trust
# IPv4 local connections:
host all all 0.0.0.0/0 trust
# IPv6 local connections:
host all all ::1/128 scram-sha-256
# Allow replication connections from localhost, by a user with the
# replication privilege.
local replication all peer
host replication all 127.0.0.1/32 scram-sha-256
host replication all ::1/128 scram-sha-256
Restart the PostgreSQL service
[root@harbor-data ~]# systemctl restart postgresql-15.service
(4) Create the databases
[root@harbor-data ~]# psql -U postgres -h localhost
Password for user postgres:
psql (15.8)
Type "help" for help.
postgres=# create database registry;
CREATE DATABASE
postgres=# create database harbor_db;
CREATE DATABASE
postgres=# create database notary_signer;
CREATE DATABASE
postgres=# create database notary_server;
CREATE DATABASE
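The database creation above can be made idempotent for re-runs; psql has no `CREATE DATABASE IF NOT EXISTS`, so the usual workaround is a conditional query piped through `\gexec`. A sketch (the `harbor_db_sql` helper is an illustrative assumption):

```shell
# Emit idempotent CREATE DATABASE statements for the databases Harbor's
# external_database section expects, using psql's \gexec meta-command.
harbor_db_sql() {
    local db
    for db in registry notary_signer notary_server; do
        printf "SELECT 'CREATE DATABASE %s' WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = '%s')\\\\gexec\n" "$db" "$db"
    done
}
```

Usage: `harbor_db_sql | psql -U postgres -h 192.168.100.5`.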
5. Configure the Load-Balancing Service
(1) Install Nginx
Install on both harbor1 and harbor2.
Nginx 1.9.0 introduced the stream module, which implements layer-4 forwarding, proxying, and load balancing. When compiling nginx from source, add the --with-stream option to ./configure to enable the stream module.
[root@harbor1 ~]# yum install -y nginx keepalived nginx-all-modules
Set nginx to start on boot
[root@harbor1 ~]# systemctl enable nginx --now
Open the default Nginx page in a browser, as shown in Figure 1:
Figure 1
(2) Configure Nginx
[root@harbor1 ~]# vim /etc/nginx/nginx.conf
# Run Nginx as this user
user nginx;
# Set the number of worker processes automatically based on CPU cores
worker_processes auto;
# Path of the error log file
error_log /var/log/nginx/error.log;
# File holding the Nginx master process ID
pid /run/nginx.pid;
# Include additional Nginx module configuration
include /usr/share/nginx/modules/*.conf;
# Events module configuration
events {
    worker_connections 1024;  # maximum connections per worker process
}
# Layer-4 load balancing: distributes requests across the Harbor instances
stream {
    # Log format definition
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    # Path of the access log file
    access_log /var/log/nginx/harbor-access.log main;
    # Upstream server group
    upstream harbor {
        server 192.168.100.3:8080;  # first Harbor server
        server 192.168.100.4:8080;  # second Harbor server
    }
    server {
        listen 80;          # port Nginx listens on
        proxy_pass harbor;  # forward requests to the upstream group
    }
}
# HTTP module configuration
http {
    # Access log format definition
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    # Path of the access log file
    access_log /var/log/nginx/access.log main;
    sendfile on;               # enable efficient file transfer
    tcp_nopush on;             # enable TCP_NOPUSH for throughput
    tcp_nodelay on;            # enable TCP_NODELAY for lower latency
    keepalive_timeout 65;      # keep-alive connection timeout
    types_hash_max_size 2048;  # maximum size of the MIME types hash tables
    # Include MIME type definitions
    include /etc/nginx/mime.types;
    # Default MIME type
    default_type application/octet-stream;
    server {
        listen 81 default_server;  # default server listens on port 81
        server_name _;             # wildcard server name, matches all requests
        location / {
            # request handling for the root location (currently empty)
        }
    }
}
Check the configuration and restart nginx
[root@harbor1 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@harbor1 ~]# systemctl restart nginx
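Before relying on the load balancer, each backend listed in the `upstream` block can be probed individually (for example with curl). Extracting the backend addresses from the config avoids hard-coding them; a sketch (the `upstream_servers` helper is an illustrative assumption):

```shell
# Print the backend addresses of a named nginx upstream block,
# one host:port per line.
# upstream_servers NAME FILE
upstream_servers() {
    awk -v name="$1" '
        $1 == "upstream" && $2 == name { inblock = 1; next }
        inblock && $1 == "}"           { inblock = 0 }
        inblock && $1 == "server"      { sub(/;.*/, "", $2); print $2 }
    ' "$2"
}
```

Example: `for s in $(upstream_servers harbor /etc/nginx/nginx.conf); do curl -sI "http://$s" >/dev/null && echo "$s up"; done`.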
(3) Install keepalived
[root@harbor1 ~]# yum install -y keepalived
Set keepalived to start on boot
[root@harbor1 ~]# systemctl enable keepalived --now
(4) Configure keepalived
On the harbor1 node
[root@harbor1 harbor]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    # globally unique host identifier
    router_id master
}
vrrp_script check_nginx {
    script "/usr/local/src/check_nginx.sh"  # nginx health-check script
    interval 3   # run the script every 3 seconds
    weight 10    # add 10 to the priority when the script succeeds
}
vrrp_instance VI_1 {  # VRRP instance; VI_1 is the instance name
    # role of this node: MASTER or BACKUP
    state MASTER
    # interface used for VRRP communication and advertisements
    interface ens33
    # virtual router ID; must be identical on master and backup
    virtual_router_id 51
    # priority; the master must be higher than the backup
    priority 100
    # advertisement interval between master and backup, default 1 second
    advert_int 1
    # authentication password; must match on all nodes
    authentication {
        # authentication type
        auth_type PASS
        # password string
        auth_pass 1111
    }
    # keepalived periodically runs the scripts listed in track_script
    # and adjusts the node's priority based on their results
    track_script {
        check_nginx
    }
    # virtual IP (VIP), also called the floating IP
    virtual_ipaddress {
        192.168.100.100
    }
}
On the harbor2 node
[root@harbor2 harbor]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    # globally unique host identifier
    router_id slave
}
vrrp_script check_nginx {
    script "/usr/local/src/check_nginx.sh"
    interval 3   # run the script every 3 seconds
    weight 10    # add 10 to the priority when the script succeeds
}
vrrp_instance VI_1 {
    # role of this node: MASTER or BACKUP
    state BACKUP
    # interface used for VRRP communication
    interface ens33
    # virtual router ID; must be identical on master and backup
    virtual_router_id 51
    # priority
    priority 90
    # advertisement interval, default 1 second
    advert_int 1
    # authentication password; must match on all nodes
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        check_nginx
    }
    # virtual IP
    virtual_ipaddress {
        192.168.100.100
    }
}
(5) Configure the health-check script
[root@harbor1 ~]# vim /usr/local/src/check_nginx.sh
#!/bin/bash
# Current timestamp
d=$(date +%Y%m%d%H:%M:%S)
# Count the running nginx processes
n=$(ps -C nginx --no-headers | wc -l)
# If nginx is not running, start it and check again
if [ $n -eq 0 ]; then
    systemctl start nginx
    sleep 3  # wait for nginx to start
    # Re-count the nginx processes; if nginx still cannot start,
    # stop keepalived so the VIP fails over to the other node
    n=$(ps -C nginx --no-headers | wc -l)
    if [ $n -eq 0 ]; then
        echo "$d Nginx is down, keepalived will stop." >> /var/log/check_ng.log
        systemctl stop keepalived
    fi
fi
Make the script executable
[root@harbor1 ~]# chmod +x /usr/local/src/check_nginx.sh
(6) Test VIP failover
First, confirm on harbor1 that the virtual IP 192.168.100.100 is present
[root@harbor1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:8a:a4:af brd ff:ff:ff:ff:ff:ff
inet 192.168.100.3/24 brd 192.168.100.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.100.100/32 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::f350:5b34:c230:2205/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:0b:90:7d:63 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
Stop keepalived and check whether the VIP moves to harbor2
[root@harbor1 ~]# systemctl stop keepalived
[root@harbor1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:8a:a4:af brd ff:ff:ff:ff:ff:ff
inet 192.168.100.3/24 brd 192.168.100.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::f350:5b34:c230:2205/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:0b:90:7d:63 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
On harbor2, check that the VIP has moved over
[root@harbor2 harbor]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:fc:15:b4 brd ff:ff:ff:ff:ff:ff
inet 192.168.100.4/24 brd 192.168.100.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.100.100/32 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::f350:5b34:c230:2205/64 scope link tentative noprefixroute dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::2ecd:78a2:84f4:423b/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:c1:2c:7e:9a brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
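Scripts sometimes need to know which node currently holds the VIP, which can be decided by parsing `ip -4 addr` output. A sketch (the `has_vip` helper and its optional file argument for testing are illustrative assumptions):

```shell
# Check whether a given VIP is bound on this host.
# has_vip VIP [FILE] -> prints "yes" or "no"; FILE (for testing)
# defaults to the live output of `ip -4 addr`.
has_vip() {
    local vip=$1 src=${2:-}
    if [ -n "$src" ]; then
        grep -q "inet $vip/" "$src" && echo yes || echo no
    else
        ip -4 addr | grep -q "inet $vip/" && echo yes || echo no
    fi
}
```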
6. Deploy the Harbor Private Registry
(1) Extract the Harbor installation package
[root@harbor1 ~]# tar xf harbor-offline-installer-v2.5.3.tgz -C /opt/
(2) Configure HTTPS certificates
Generate the CA private key
[root@harbor1 ~]# mkdir -p /opt/harbor/certs
[root@harbor1 ~]# cd /opt/harbor/certs/
[root@harbor1 certs]# openssl genrsa -out ca.key 4096
Generate the CA certificate
[root@harbor1 certs]# openssl req -x509 -new -nodes -sha512 -days 3650 \
-subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=harbor.gxl.com" \
-key ca.key \
-out ca.crt
# Field descriptions:
## C, Country: country code
## ST, State: state or province
## L, Location: city
## O, Organization: organization or company
## OU, Organization Unit: department
## CN, Common Name: server domain name
## emailAddress: contact email address
Generate the server private key
[root@harbor1 certs]# openssl genrsa -out harbor.gxl.com.key 4096
Generate a certificate signing request (CSR)
[root@harbor1 certs]# openssl req -sha512 -new -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=harbor.gxl.com" -key harbor.gxl.com.key -out harbor.gxl.com.csr
Create an x509 v3 extension file
[root@harbor1 certs]# vim v3.ext
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1=harbor.gxl.com
DNS.2=harbor.gxl.net
IP.1=192.168.100.100
Generate the Harbor server certificate using the v3.ext file
[root@harbor1 certs]# openssl x509 -req -sha512 -days 3650 -extfile v3.ext -CA ca.crt -CAkey ca.key -CAcreateserial -in harbor.gxl.com.csr -out harbor.gxl.com.crt
Convert the certificate
# Convert harbor.gxl.com.crt to harbor.gxl.com.cert: the Docker daemon interprets .crt files as CA certificates and .cert files as client certificates
[root@harbor1 certs]# openssl x509 -inform PEM -in harbor.gxl.com.crt -out harbor.gxl.com.cert
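A common cause of TLS failures after rotating certificates is a certificate that does not belong to its private key. The pair can be verified by comparing public-key digests; a sketch (the `cert_matches_key` helper is an illustrative assumption; it relies on standard openssl subcommands):

```shell
# Verify that a certificate and a private key belong together by
# comparing the SHA-256 digests of their public keys.
cert_matches_key() {
    local c k
    c=$(openssl x509 -noout -pubkey -in "$1" 2>/dev/null | openssl sha256)
    k=$(openssl pkey -pubout -in "$2" 2>/dev/null | openssl sha256)
    if [ -n "$c" ] && [ "$c" = "$k" ]; then echo "match"; else echo "mismatch"; fi
}
```

Example: `cert_matches_key harbor.gxl.com.crt harbor.gxl.com.key`.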
(3) Configure Harbor
[root@harbor1 ~]# vim /opt/harbor/harbor.yml
# Hostname or IP address of the Harbor server
hostname: harbor.gxl.com
# HTTP port (80 is the default)
http:
  port: 8080
# HTTPS port (443 is the default)
https:
  port: 443
  # Path of the SSL certificate used by Nginx
  certificate: /opt/harbor/certs/harbor.gxl.com.crt
  # Path of the SSL private key used by Nginx
  private_key: /opt/harbor/certs/harbor.gxl.com.key
# Password for the Harbor admin console
harbor_admin_password: Harbor12345
# Database connection settings
database:
  # password used to connect to the database
  password: moshang
  # maximum number of idle database connections
  max_idle_conns: 100
  # maximum number of open database connections
  max_open_conns: 900
  # maximum connection lifetime; connections are closed afterwards
  conn_max_lifetime: 5m
  # maximum idle time of a connection
  conn_max_idle_time: 0
# Harbor data volume path (the NFS-backed shared directory)
data_volume: /data/harbor-data/
# Trivy security scanner settings
trivy:
  # whether to ignore unfixed vulnerabilities
  ignore_unfixed: false
  # whether to skip database updates
  skip_update: false
  # whether to skip Java database updates
  skip_java_db_update: false
  # whether to scan offline
  offline_scan: false
  # security check type; vulnerability scanning by default
  security_check: vuln
  # whether to allow insecure connections
  insecure: false
  # scan timeout
  timeout: 5m0s
# Job service settings
jobservice:
  # maximum number of job workers
  max_job_workers: 10
  # job loggers
  job_loggers:
    - STD_OUTPUT
    - FILE
  # log sweeper interval, in days
  logger_sweeper_duration: 1
# Notification settings
notification:
  # maximum retries for webhook jobs
  webhook_job_max_retry: 3
  # webhook HTTP client timeout, in seconds
  webhook_job_http_client_timeout: 3
# Log settings
log:
  # log level
  level: info
  # local log settings
  local:
    # number of rotated log files to keep
    rotate_count: 50
    # maximum size of a single log file
    rotate_size: 200M
    # log file location
    location: /var/log/harbor
# Harbor configuration version
_version: 2.5.0
# External database settings; uncomment to enable
external_database:
  # Harbor core database
  harbor:
    # IP address of the PostgreSQL server
    host: 192.168.100.5
    # PostgreSQL port
    port: 5432
    # database name to connect to
    db_name: registry
    # database user
    username: postgres
    # database password
    password: moshang
    # SSL mode; SSL disabled
    ssl_mode: disable
    # maximum number of idle connections
    max_idle_conns: 2
    # maximum number of open connections
    max_open_conns: 0
  # Notary signer database
  notary_signer:
    host: 192.168.100.5
    port: 5432
    db_name: notary_signer
    username: postgres
    password: moshang
    ssl_mode: disable
    max_idle_conns: 2
    max_open_conns: 0
  # Notary server database
  notary_server:
    host: 192.168.100.5
    port: 5432
    db_name: notary_server
    username: postgres
    password: moshang
    ssl_mode: disable
    max_idle_conns: 2
    max_open_conns: 0
# External Redis settings; uncomment to enable
external_redis:
  # Redis address, in IP:PORT form
  host: 192.168.100.5:6379
  # password used to connect to Redis
  password: moshang
  # Redis database index for registry data
  registry_db_index: 1
  # Redis database index for jobservice data
  jobservice_db_index: 2
  # Redis database index for Trivy scan results
  trivy_db_index: 5
  # idle timeout of Redis connections, in seconds
  idle_timeout_seconds: 30
# Proxy settings
proxy:
  # HTTP proxy address (optional)
  http_proxy:
  # HTTPS proxy address (optional)
  https_proxy:
  # comma-separated list of addresses that bypass the proxy
  no_proxy:
  # components that go through the proxy
  components:
    # Harbor core component
    - core
    # job service component
    - jobservice
    # Trivy scanner component
    - trivy
# Upload purging settings
upload_purging:
  # whether upload purging is enabled (true by default)
  enabled: true
  # uploads older than this are removed, in hours
  age: 168h
  # interval between purge runs, in hours
  interval: 24h
  # dry run: do not actually delete files
  dryrun: false
# Cache settings
cache:
  # whether caching is enabled (false by default)
  enabled: false
  # cache expiry time, in hours
  expire_hours: 24
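The db_name values in the external_database section must match the databases actually created in PostgreSQL; a name mismatch (e.g. notary_servers vs notary_server) makes the corresponding component fail at startup. They can be extracted for a quick check; a sketch (the `yaml_db_names` helper is an illustrative assumption and relies on harbor.yml's flat `db_name:` lines):

```shell
# Print the db_name values found in a harbor.yml file, one per line.
yaml_db_names() {
    awk '$1 == "db_name:" { print $2 }' "$1"
}
```

Example: `yaml_db_names /opt/harbor/harbor.yml` can be compared against `psql -l` output.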
(4) Run the Harbor installation script
[root@harbor1 ~]# cd /opt/harbor/
[root@harbor1 harbor]# ./install.sh --with-trivy --with-chartmuseum
Check the Docker container status
[root@harbor1 harbor]# docker-compose ps
NAME COMMAND SERVICE STATUS PORTS
harbor-core "/harbor/entrypoint.…" core running (healthy)
harbor-jobservice "/harbor/entrypoint.…" jobservice running (healthy)
harbor-log "/bin/sh -c /usr/loc…" log running (healthy) 127.0.0.1:1514->10514/tcp
harbor-portal "nginx -g 'daemon of…" portal running (healthy)
nginx "nginx -g 'daemon of…" proxy running (healthy) 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp, 0.0.0.0:443->8443/tcp, :::443->8443/tcp
registry "/home/harbor/entryp…" registry running (healthy)
registryctl "/home/harbor/start.…" registryctl running (healthy)
(5) Log in to the Harbor registry
Accept the risk and continue, as shown in Figure 2:
Figure 2
Username: admin, password: Harbor12345, as shown in Figure 3:
Figure 3
(6) Push an image to the Harbor registry
Pull the Nginx image
[root@harbor1 ~]# docker pull nginx:1.20.2
1.20.2: Pulling from library/nginx
214ca5fb9032: Pull complete
50836501937f: Pull complete
d838e0361e8e: Pull complete
fcc7a415e354: Pull complete
dc73b4533047: Pull complete
e8750203e985: Pull complete
Digest: sha256:38f8c1d9613f3f42e7969c3b1dd5c3277e635d4576713e6453c6193e66270a6d
Status: Downloaded newer image for nginx:1.20.2
docker.io/library/nginx:1.20.2
Tag the Nginx image
[root@harbor1 ~]# docker tag nginx:1.20.2 harbor.gxl.com/library/nginx:latest
Log in to the Harbor registry
[root@harbor1 certs]# docker login harbor.gxl.com
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Push the Nginx image
[root@harbor1 harbor]# docker push harbor.gxl.com/library/nginx:latest
The push refers to repository [harbor.gxl.com/library/nginx]
07ef16952879: Pushed
881700cb7ab2: Pushed
4f49c6d6dd07: Pushed
a64d597d6b14: Pushed
c2a3d4a53f9a: Pushed
fd95118eade9: Pushed
latest: digest: sha256:a76df3b4f1478766631c794de7ff466aca466f995fd5bb216bb9643a3dd2a6bb size: 1570
Check in the web UI that the image is present, as shown in Figure 4:
Figure 4
(7) Configure the Windows hosts file
Path: C:\Windows\System32\drivers\etc\hosts
Add the host mapping, then save and exit
192.168.100.100 harbor.gxl.com
Access Harbor by its domain name, as shown in Figure 5:
Figure 5
(8) Manage Harbor with a systemd service
[root@harbor1 harbor]# vim /usr/lib/systemd/system/harbor.service
[Unit]
Description=Harbor
After=docker.service systemd-networkd.service systemd-resolved.service
Requires=docker.service
Documentation=http://github.com/vmware/harbor
[Service]
Type=simple
Restart=on-failure
RestartSec=5
ExecStart=/usr/local/bin/docker-compose -f /opt/harbor/docker-compose.yml up
ExecStop=/usr/local/bin/docker-compose -f /opt/harbor/docker-compose.yml down
[Install]
WantedBy=multi-user.target
Reload systemd and enable the service
[root@harbor1 harbor]# systemctl daemon-reload
[root@harbor1 harbor]# systemctl enable harbor --now
Check the service status
[root@harbor1 harbor]# systemctl status harbor
● harbor.service - Harbor
Loaded: loaded (/usr/lib/systemd/system/harbor.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2024-11-07 16:51:00 CST; 4s ago
Docs: http://github.com/vmware/harbor
Main PID: 3524 (docker-compose)
Tasks: 7
Memory: 8.3M
CGroup: /system.slice/harbor.service
└─3524 /usr/local/bin/docker-compose -f /opt/harbor/docker-compose.yml up
Nov 07 16:51:00 master docker-compose[3524]: Container redis Running
Nov 07 16:51:00 master docker-compose[3524]: Container harbor-core Running
Nov 07 16:51:00 master docker-compose[3524]: Container nginx Running
Nov 07 16:51:00 master docker-compose[3524]: Container harbor-jobservice Running
Nov 07 16:51:00 master docker-compose[3524]: Attaching to harbor-core, harbor-db, harbor-jobservice, harbor-log, harbor-...tryctl
Nov 07 16:51:02 master docker-compose[3524]: harbor-portal | 172.18.0.8 - - [07/Nov/2024:08:51:02 +0000] "GET / HTT...t/1.1"
Nov 07 16:51:02 master docker-compose[3524]: registryctl | 172.18.0.8 - - [07/Nov/2024:08:51:02 +0000] "GET /api/... 200 9
Nov 07 16:51:02 master docker-compose[3524]: registry | 172.18.0.8 - - [07/Nov/2024:08:51:02 +0000] "GET / HTT...t/1.1"
Nov 07 16:51:02 master docker-compose[3524]: harbor-jobservice | 2024-11-07T08:51:02Z [INFO] [/jobservice/worker/cworke... stats
Nov 07 16:51:02 master docker-compose[3524]: harbor-jobservice | 2024-11-07T08:51:02Z [INFO] [/jobservice/worker/cworke... stats
Hint: Some lines were ellipsized, use -l to show in full.