Table of Contents
- Foreword: Harbor deployment from a business-scenario perspective
- 1. Deployment notes
- 2. Install Helm
- 2.1 Download the Helm binary
- 2.2 Check the Helm version
- 2.3 Add the official Harbor Helm Chart repository
- 2.4 Download the Chart package locally
- 3. Create the namespace
- 4. Create the NFS external provisioner
- 4.1 Deploy the NFS server
- 4.2 Install the NFS client
- 4.3 Create the ServiceAccount and RBAC authorization
- 5. Create the StorageClass
- 6. Modify values.yaml
- 7. Run helm install to deploy Harbor
- 8. Verify the services
- 9. Log in to the Harbor UI
- 10. Pitfalls I ran into
- 10.1 Pitfall 1: "The expose.tls.auto.commonName is required!"
- 10.2 Pitfall 2: the UI reports "username or password is incorrect"
🐖 Hello everyone! I'm Li Dabai!
🐂 This article walks through deploying Harbor on a kubernetes cluster with Helm, step by step. It is part of my series "Harbor in Plain Words (Enterprise Edition)"; more Harbor articles can be found on my home page.

Foreword: Harbor deployment from a business-scenario perspective
In earlier articles I covered Harbor's offline and online installation methods. Their shared weakness is that they cannot provide high availability: in a real business scenario, once the single Harbor server fails, the impact is significant.
For those installation methods the project offers no official high-availability solution; building one yourself requires a fair amount of knowledge of Harbor's internals.
A practical high-availability approach is to deploy Harbor onto a kubernetes cluster and let the platform's scheduling and self-healing capabilities keep the services available.
1. Deployment notes
- 1 NFS server (any node of the kubernetes cluster can double as the NFS server)
- 1 kubernetes cluster (3 nodes: 1 master, 2 workers)
- OS: CentOS 7.8
- Harbor version: 2.4.2
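Before going further, a quick sanity check that the cluster matches this layout (node names and versions will vary with your environment):
$ kubectl get nodes -o wide # expect one master and two workers, all in Ready state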

2. Install Helm
2.1 Download the Helm binary
$ wget https://get.helm.sh/helm-v3.6.3-linux-amd64.tar.gz
$ tar zxvf helm-v3.6.3-linux-amd64.tar.gz
$ cd linux-amd64/
$ cp helm /usr/local/bin/
2.2 Check the Helm version
$ helm version
version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"clean", GoVersion:"go1.16.5"}
2.3 Add the official Harbor Helm Chart repository
$ helm repo add harbor https://helm.goharbor.io
"harbor" has been added to your repositories
$ helm repo list # verify the repository was added
NAME URL
harbor https://helm.goharbor.io
2.4 Download the Chart package locally
Quite a few parameters need to be changed, so a one-shot helm install on the command line gets unwieldy. Instead I download the Chart package locally and edit the configuration file; this is more transparent and closer to how it is done in real production environments.
$ helm search repo harbor # search for the chart
NAME CHART VERSION APP VERSION DESCRIPTION
harbor/harbor 1.8.2 2.4.2 An open source trusted cloud native registry th...
$ helm pull harbor/harbor # download the Chart package
$ tar zxvf harbor-1.8.2.tgz # unpack it
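As an aside, if you only want to inspect the default configuration without unpacking the chart, Helm can print it directly (the output filename here is arbitrary):
$ helm show values harbor/harbor > values-default.yaml # dump the chart's default values to a file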

3. Create the namespace
Create a harbor namespace; all Harbor-related services will be deployed into it.
$ kubectl create namespace harbor
4. Create the NFS external provisioner
NFS is used as the storage backend here, which requires an external provisioner. If you already have one, skip this step.
4.1 Deploy the NFS server
Deploy the NFS service on the NFS server (192.168.2.212):
$ yum install -y nfs-utils
$ systemctl start nfs && systemctl enable nfs
$ systemctl status nfs
$ chkconfig nfs on # also enable at boot (this forwards the request to "systemctl enable nfs.service")
$ mkdir -p /data/nfs/harbor # create the shared directory
$ cat /etc/exports
/data/nfs/harbor 192.168.2.0/24(rw,no_root_squash)
$ exportfs -arv # reload the exports file
exporting 192.168.2.0/24:/data/nfs/harbor
$ systemctl restart nfs
$ showmount -e localhost # check the exported shares
Export list for localhost:
/data/nfs/harbor 192.168.2.0/24
4.2 Install the NFS client
Here the clients are all the nodes of the kubernetes cluster: if a Pod lands on a node without the NFS client tooling, it cannot mount the corresponding volume. Only the nfs-utils package is required on each node; no NFS daemon needs to be running for volume mounts to work.
$ yum -y install nfs-utils # run on every kubernetes node
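To confirm every node can actually reach the share before handing it to kubernetes, a quick manual test from any node (the mount point /mnt/nfs-test is just a temporary directory for this check):
$ showmount -e 192.168.2.212 # the /data/nfs/harbor export should be listed
$ mkdir -p /mnt/nfs-test && mount -t nfs 192.168.2.212:/data/nfs/harbor /mnt/nfs-test
$ touch /mnt/nfs-test/write-test && rm -f /mnt/nfs-test/write-test # verify write permission
$ umount /mnt/nfs-test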
4.3 Create the ServiceAccount and RBAC authorization
$ vim nfs-provisioner.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: harbor
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-cr
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: harbor
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-cr
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nfs-role
  namespace: harbor
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
  namespace: harbor
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: harbor
roleRef:
  kind: Role
  name: nfs-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-provisioner
  namespace: harbor
spec:
  selector:
    matchLabels:
      app: nfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccountName: nfs-provisioner
      containers:
        - name: nfs-provisioner
          image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: example.com/nfs # the name the StorageClass will reference
            - name: NFS_SERVER
              value: 192.168.2.212
            - name: NFS_PATH
              value: /data/nfs/harbor
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.2.212 # NFS server address
            path: /data/nfs/harbor # NFS shared directory
🐖 Note: the image used by the Deployment is pulled from an Aliyun registry.
Create the resource objects:
$ kubectl apply -f nfs-provisioner.yaml
$ kubectl -n harbor get pod
NAME                               READY   STATUS    RESTARTS   AGE
nfs-provisioner-7b4c6cc9bf-s48ld   1/1     Running   1          10s
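If the Pod does not reach Running, the provisioner's logs usually say why (most often a failed mount against the NFS server):
$ kubectl -n harbor logs deploy/nfs-provisioner # check for mount or permission errors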
5. Create the StorageClass
Harbor's database and redis components are stateful services, so Harbor's data needs persistent storage.
Here a StorageClass is created on top of NFS via the external provisioner deployed above (see also the dedicated article on my home page). The NFS server and share are:
- NFS server address: 192.168.2.212
- NFS shared directory: /data/nfs/harbor
Note that a StorageClass is a cluster-scoped resource, so it does not belong to the harbor namespace.
$ vim harbor-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: harbor-storageclass
provisioner: example.com/nfs # must match the provisioner's PROVISIONER_NAME
$ kubectl apply -f harbor-storageclass.yaml
$ kubectl -n harbor get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
harbor-storageclass example.com/nfs Delete Immediate false 5s
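Before installing Harbor it is worth proving that dynamic provisioning actually works. A minimal throwaway PVC (the name test-claim is arbitrary) should reach Bound within a few seconds:
$ kubectl -n harbor apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: harbor-storageclass
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Mi
EOF
$ kubectl -n harbor get pvc test-claim # STATUS should be Bound
$ kubectl -n harbor delete pvc test-claim # clean up the test claim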
6. Modify values.yaml
Important! Important! Important! This is where most of the pitfalls live!
$ cd harbor
$ ls
cert Chart.yaml conf LICENSE README.md templates values.yaml
$ vim values.yaml
expose:
  type: nodePort # no Ingress in this environment, so expose the service via NodePort
  tls:
    enabled: false # disable TLS (enabling it requires configuring certificates)
...
externalURL: http://192.168.2.11:30002 # with nodePort and TLS disabled, this must use http:// and the port set in expose.nodePort.ports.http.nodePort; the IP is any kubernetes node IP
# persistent storage configuration
persistence:
  enabled: true # enable persistent storage
  resourcePolicy: "keep"
  persistentVolumeClaim: # PVC definitions for each Harbor component
    registry: # registry component (persistent volume)
      existingClaim: ""
      storageClass: "harbor-storageclass" # the StorageClass created earlier; set the same for the other components
      subPath: ""
      accessMode: ReadWriteMany # change to ReadWriteMany so multiple components can read and write; otherwise some components cannot read data written by others
      size: 5Gi
    chartmuseum: # chartmuseum component (persistent volume)
      existingClaim: ""
      storageClass: "harbor-storageclass"
      subPath: ""
      accessMode: ReadWriteMany
      size: 5Gi
    jobservice: # async job component (persistent volume)
      existingClaim: ""
      storageClass: "harbor-storageclass" # same change as above
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    database: # PostgreSQL database component (persistent volume)
      existingClaim: ""
      storageClass: "harbor-storageclass"
      subPath: ""
      accessMode: ReadWriteMany
      size: 1Gi
    redis: # Redis cache component (persistent volume)
      existingClaim: ""
      storageClass: "harbor-storageclass"
      subPath: ""
      accessMode: ReadWriteMany
      size: 1Gi
    trivy: # Trivy vulnerability scanner (persistent volume)
      existingClaim: ""
      storageClass: "harbor-storageclass"
      subPath: ""
      accessMode: ReadWriteMany
      size: 5Gi
...
harborAdminPassword: "Harbor12345" # initial admin password; no need to change it here
...
metrics:
  enabled: true # enable the metrics components (Harbor can then be scraped by Prometheus; see the other articles in this column); optional
  core:
    path: /metrics
    port: 8001
  registry:
    path: /metrics
    port: 8001
  jobservice:
    path: /metrics
    port: 8001
  exporter:
    path: /metrics
    port: 8001
### the trace section below is a Harbor 2.4 feature; no changes needed
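After editing, it helps to let Helm validate the chart and render the manifests locally before anything touches the cluster (both are standard Helm subcommands):
$ helm lint . # catch syntax errors in values.yaml and the templates
$ helm template . | less # render the manifests locally to review what will be created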

Extension:
If you do not want to install the latest version, you can pin an older one by rewriting the image tags in values.yaml, for example:
$ sed -i '/tag/s/v2.4.2/v2.3.5/g' values.yaml
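A quick grep confirms the substitution took effect before installing:
$ grep -n 'tag:' values.yaml # every image tag should now read v2.3.5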
7. Run helm install to deploy Harbor
$ helm install harbor . -n harbor # deploy the chart's resources into the harbor namespace
NAME: harbor
LAST DEPLOYED: Mon Apr 11 23:32:18 2022
NAMESPACE: harbor
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Please wait for several minutes for Harbor deployment to complete.
Then you should be able to visit the Harbor portal at http://192.168.2.11:30002
For more details, please visit https://github.com/goharbor/harbor
🐖 Note: there is a dot before -n; it refers to the current directory, i.e. the unpacked chart.
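The Pods take a few minutes to come up; watching the rollout saves guessing (Ctrl-C to stop watching):
$ kubectl -n harbor get pods -w # wait until every Pod reports Running and Ready
$ helm status harbor -n harbor # re-print the deployment notes at any time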
8. Verify the services
Once the installation finishes, verify that the component services are all healthy:
$ kubectl -n harbor get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
harbor-chartmuseum-b95bd8d89-z6b6b 1/1 Running 0 3m45s 10.244.242.191 sc-node2 <none> <none>
harbor-core-6dc985545-gtv24 1/1 Running 0 3m45s 10.244.242.132 sc-node2 <none> <none>
harbor-database-0 1/1 Running 0 3m45s 10.244.242.187 sc-node2 <none> <none>
harbor-exporter-6fcfb5f4cd-w2r7w 1/1 Running 0 3m45s 10.244.242.145 sc-node2 <none> <none>
harbor-jobservice-77f9dff7dc-cpf55 1/1 Running 1 3m45s 10.244.242.186 sc-node2 <none> <none>
harbor-nginx-8fc4874cb-lbj5p 1/1 Running 0 3m45s 10.244.242.189 sc-node2 <none> <none>
harbor-notary-server-58c576f48f-lx9s5 1/1 Running 0 3m45s 10.244.242.188 sc-node2 <none> <none>
harbor-notary-signer-c6cc544f5-dnvbl 1/1 Running 0 3m45s 10.244.119.235 sc-node1 <none> <none>
harbor-portal-86bb5f6dd9-f4jhj 1/1 Running 0 3m45s 10.244.242.129 sc-node2 <none> <none>
harbor-redis-0 1/1 Running 0 3m45s 10.244.119.240 sc-node1 <none> <none>
harbor-registry-6c69986589-gks8d 2/2 Running 0 3m45s 10.244.242.185 sc-node2 <none> <none>
harbor-trivy-0 1/1 Running 0 3m45s 10.244.119.233 sc-node1 <none> <none>
nfs-provisioner-7b4c6cc9bf-s48ld 1/1 Running 1 34h 10.244.242.137 sc-node2 <none> <none>
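Besides the Pods, the PVCs are worth a glance: every claim should be Bound, and the NFS share should now contain one sub-directory per provisioned volume:
$ kubectl -n harbor get pvc # all claims should show STATUS Bound
$ ls /data/nfs/harbor/ # on the NFS server: one directory per provisioned volume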
9. Log in to the Harbor UI
Check the component services:
$ kubectl -n harbor get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
harbor NodePort 10.101.62.113 <none> 80:30002/TCP,4443:30004/TCP 5m39s
harbor-chartmuseum ClusterIP 10.104.151.56 <none> 80/TCP 5m39s
harbor-core ClusterIP 10.104.102.203 <none> 80/TCP,8001/TCP 5m39s
harbor-database ClusterIP 10.106.44.67 <none> 5432/TCP 5m39s
harbor-exporter ClusterIP 10.107.133.10 <none> 8001/TCP 5m39s
harbor-jobservice ClusterIP 10.105.117.160 <none> 80/TCP,8001/TCP 5m39s
harbor-notary-server ClusterIP 10.111.124.94 <none> 4443/TCP 5m39s
harbor-notary-signer ClusterIP 10.101.166.147 <none> 7899/TCP 5m39s
harbor-portal ClusterIP 10.99.47.180 <none> 80/TCP 5m39s
harbor-redis ClusterIP 10.106.107.194 <none> 6379/TCP 5m39s
harbor-registry ClusterIP 10.105.154.142 <none> 5000/TCP,8080/TCP,8001/TCP 5m39s
harbor-trivy ClusterIP 10.105.243.164 <none> 8080/TCP 5m39s
The UI management interface can now be reached on port 30002 of any kubernetes node IP.
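The default account is admin / Harbor12345 (the harborAdminPassword set above). Since TLS is disabled, docker clients must be told to trust the plain-HTTP endpoint before docker login will work. A sketch of the change on each docker node (adjust the address to match your externalURL):
$ cat /etc/docker/daemon.json
{
  "insecure-registries": ["192.168.2.11:30002"]
}
$ systemctl restart docker
$ docker login 192.168.2.11:30002 -u admin -p Harbor12345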


10. Pitfalls I ran into
10.1 Pitfall 1: "The expose.tls.auto.commonName is required!"
This error appeared when running `helm install harbor . -n harbor`:
Error: execution error at (harbor/templates/nginx/secret.yaml:3:12):
The "expose.tls.auto.commonName" is required!

Cause: with TLS enabled, expose.tls.auto.commonName is a required field in values.yaml.
Fix: set expose.tls.enabled to false, i.e. do not enable TLS. Enabling TLS means providing the related certificates; the error above comes from turning TLS on without configuring any certificate.
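For reference, if you would rather keep TLS on with a chart-generated self-signed certificate, the minimal values.yaml change is roughly the following sketch (the hostname is a hypothetical placeholder; clients must then trust the generated certificate):
expose:
  tls:
    enabled: true
    auto:
      commonName: "harbor.example.com" # hostname or IP that clients will use; placeholder value
externalURL would then need to switch to https:// as well.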
10.2 Pitfall 2: the UI reports "username or password is incorrect"
Heaven knows how long this one held me up. Plenty of people hit it in the GitHub issues too, mostly without a resolution. In the 1.x chart this was a genuine bug that was reportedly fixed in later versions, yet here it was again on the latest release!


See issue #635.
Cause:
The externalURL value had been left at its default instead of being updated to the actual access address.
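The fix: point externalURL at the address actually used in the browser (here http://192.168.2.11:30002) and roll the change out with an upgrade:
$ vim values.yaml # correct the externalURL value
$ helm upgrade harbor . -n harbor # re-render the templates and apply the new configuration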
