IP Address Plan
| server | ip |
| --- | --- |
| k8s-master | 192.168.1.200 |
| k8s-node1 | 192.168.1.201 |
| k8s-node2 | 192.168.1.202 |
| prometheus | 192.168.1.230 |
| nfs | 192.168.1.231 |
| ansible | 192.168.1.232 |
| harbor | 192.168.1.233 |
Overall Architecture
Install Jenkins
yum install git -y
...
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : 1:perl-Error-0.17020-2.el7.noarch       1/5
  Installing : rsync-3.1.2-12.el7_9.x86_64             2/5
  Installing : perl-TermReadKey-2.30-20.el7.x86_64     3/5
  Installing : perl-Git-1.8.3.1-25.el7_9.noarch        4/5
  Installing : git-1.8.3.1-25.el7_9.x86_64             5/5
  Verifying  : perl-TermReadKey-2.30-20.el7.x86_64     1/5
  Verifying  : 1:perl-Error-0.17020-2.el7.noarch       2/5
  Verifying  : git-1.8.3.1-25.el7_9.x86_64             3/5
  Verifying  : perl-Git-1.8.3.1-25.el7_9.noarch        4/5
  Verifying  : rsync-3.1.2-12.el7_9.x86_64             5/5

Installed:
  git.x86_64 0:1.8.3.1-25.el7_9

Installed as a dependency:
  perl-Error.noarch 1:0.17020-2.el7  perl-Git.noarch 0:1.8.3.1-25.el7_9  perl-TermReadKey.x86_64 0:2.30-20.el7  rsync.x86_64 0:3.1.2-12.el7_9

Complete!
2. Download the relevant YAML files
Download the archive from the project page and copy it to the VM:
https://github.com/scriptcamp/kubernetes-jenkins
3. Unpack
[root@master ~]# unzip kubernetes-jenkins-main.zip
Archive: kubernetes-jenkins-main.zip
0c3fba187adbc96c78d9c1dc60e11cdd176ca45b
creating: kubernetes-jenkins-main/
inflating: kubernetes-jenkins-main/README.md
inflating: kubernetes-jenkins-main/deployment.yaml
extracting: kubernetes-jenkins-main/namespace.yaml
inflating: kubernetes-jenkins-main/service.yaml
inflating: kubernetes-jenkins-main/serviceAccount.yaml
inflating: kubernetes-jenkins-main/volume.yaml
[root@master kubernetes-jenkins-main]# ls
deployment.yaml namespace.yaml README.md serviceAccount.yaml service.yaml volume.yaml
5. Apply namespace.yaml
[root@master kubernetes-jenkins-main]# cat namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: devops-tools
[root@master kubernetes-jenkins-main]# kubectl apply -f namespace.yaml
namespace/devops-tools created
6. Apply serviceAccount.yaml
[root@master kubernetes-jenkins-main]# cat serviceAccount.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-admin
rules:
  - apiGroups: [""]
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-admin
  namespace: devops-tools
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins-admin
subjects:
  - kind: ServiceAccount
    name: jenkins-admin
    namespace: devops-tools
[root@master kubernetes-jenkins-main]# kubectl apply -f serviceAccount.yaml
clusterrole.rbac.authorization.k8s.io/jenkins-admin created
serviceaccount/jenkins-admin created
clusterrolebinding.rbac.authorization.k8s.io/jenkins-admin created
7. Create the volume using a PV and PVC. Note: change the last line of the PersistentVolume section of the file to your own node's name.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  claimRef:
    name: jenkins-pv-claim
    namespace: devops-tools
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: /mnt
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1   # change to your node's name
[root@master kubernetes-jenkins-main]# kubectl apply -f volume.yaml
storageclass.storage.k8s.io/local-storage created
persistentvolume/jenkins-pv-volume created
persistentvolumeclaim/jenkins-pv-claim created
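The nodeAffinity block above pins the local PV to a single node. A tiny sketch of how the `In` operator is evaluated (the function name and sample hostnames are illustrative, not Kubernetes API code):

```go
package main

import "fmt"

// matchesIn mimics the PV's nodeAffinity rule: a node is eligible
// when its kubernetes.io/hostname label is In the listed values.
func matchesIn(nodeHostname string, values []string) bool {
	for _, v := range values {
		if v == nodeHostname {
			return true
		}
	}
	return false
}

func main() {
	values := []string{"node1"} // the values: list from volume.yaml
	fmt.Println(matchesIn("node1", values)) // true: the PV (and its pod) can land here
	fmt.Println(matchesIn("node2", values)) // false: the scheduler skips this node
}
```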
8. Apply deployment.yaml to deploy Jenkins
kubectl apply -f deployment.yaml
[root@master kubernetes-jenkins-main]# kubectl get pod -n devops-tools
NAME READY STATUS RESTARTS AGE
jenkins-85fcfbb869-48zdk 1/1 Running 0 2m52s
[root@master kubernetes-jenkins-main]# kubectl get deploy -n devops-tools
NAME READY UP-TO-DATE AVAILABLE AGE
jenkins 1/1 1 1 2m56s
9. Create the Service to publish the Jenkins pod
[root@master kubernetes-jenkins-main]# kubectl apply -f service.yaml
service/jenkins-service created
[root@master kubernetes-jenkins-main]# kubectl get svc -n devops-tools
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins-service NodePort 10.98.200.107 <none> 8080:32000/TCP 12s
10. Exec into the pod to get the initial login password
[root@master kubernetes-jenkins-main]# kubectl exec -it jenkins-85fcfbb869-48zdk -n devops-tools -- bash
jenkins@jenkins-85fcfbb869-48zdk:/$ cat /var/jenkins_home/secrets/initialAdminPassword
ffa24fbfbd3e44e293e0cfd9412bc92c
Install JumpServer
# requires a VM with 2 cores and 4 GB RAM
curl -sSL https://resource.fit2cloud.com/jumpserver/jumpserver/releases/latest/download/quick_start.sh | bash
Log in
The username and password are both admin
You must change the password on first login
Screenshot
Set the current site URL to this host's address rather than the loopback address
Add a user group
Ansible Deployment
[root@ansible ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:YGv9ScKv4RTUxlIrhGpD3tVttUkUm2yjT8aA28sEK6M root@ansible
The key's randomart image is:
+---[RSA 2048]----+
| ..... o=. |
| . ...+.ooo = |
| o oo.+ B.. O |
| =..* + = = . |
| . .o S + + + |
| . . O + = |
| E o + o . |
| o o |
| o |
+----[SHA256]-----+
[root@ansible ~]#
[root@ansible ~]# cd /root/.ssh/
[root@ansible .ssh]# ls
id_rsa id_rsa.pub
# Set up passwordless SSH
[root@ansible .ssh]# ssh-copy-id -i id_rsa.pub root@192.168.1.230
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "id_rsa.pub"
The authenticity of host '192.168.1.230 (192.168.1.230)' can't be established.
ECDSA key fingerprint is SHA256:GhEQWCholuMPMqDpZvuk5UpFFhgy8N3NV+45MdJwWu4.
ECDSA key fingerprint is MD5:b2:d2:40:7b:77:39:b5:4e:fa:e7:1e:eb:17:d1:8e:6b.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.1.230's password:
Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@192.168.1.230'"
and check to make sure that only the key(s) you wanted were added.

[root@ansible .ssh]# ssh root@192.168.1.230
Last login: Tue Sep 12 21:31:34 2023 from 192.168.1.110
[root@prometheus ~]# ls
anaconda-ks.cfg
[root@prometheus ~]# exit
logout
Connection to 192.168.1.230 closed.
Install Ansible
[root@ansible .ssh]# yum install epel-release -y
[root@ansible .ssh]# yum install ansible -y
Write the host inventory
[root@ansible .ssh]# cd /etc/ansible/
[root@ansible ansible]# ls
ansible.cfg hosts roles
[root@ansible ansible]# vim hosts
[k8smaster]
192.168.1.200

[k8snode]
192.168.1.201
192.168.1.202

[nfs]
192.168.1.231

[harbor]
192.168.1.233

[prometheus]
192.168.1.230
Deploy the NFS Server
1. Provide data for the whole web cluster so every web pod can access it, via PV, PVC, and volume mounts
# Install nfs on the NFS server and on the k8s cluster
[root@nfs ~]# yum install nfs-utils -y
[root@master ~]# yum install nfs-utils -y
[root@node1 ~]# yum install nfs-utils -y
[root@node2 ~]# yum install nfs-utils -y
2. Set up the shared directory
[root@nfs ~]# vim /etc/exports
[root@nfs ~]# cat /etc/exports
/web 192.168.1.0/24(rw,no_root_squash,sync)
[root@nfs ~]# mkdir /web
[root@nfs ~]# cd /web
[root@nfs web]# echo "have a nice day" >index.html
[root@nfs web]# ls
index.html
[root@nfs web]# exportfs -rv   # refresh the NFS exports
exporting 192.168.1.0/24:/web
# Restart the service and enable it at boot
[root@nfs web]# systemctl restart nfs && systemctl enable nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
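Each line in /etc/exports follows the `path client(options)` format used above (`rw` allows writes, `sync` commits writes before replying, `no_root_squash` keeps remote root as root). A minimal Go sketch parsing one such line; `parseExport` is our own illustrative helper:

```go
package main

import (
	"fmt"
	"strings"
)

// parseExport splits one /etc/exports line of the form
// "<path> <client>(<options>)" into its three parts.
func parseExport(line string) (path, client string, options []string) {
	fields := strings.Fields(line)
	path = fields[0]
	spec := fields[1]
	open := strings.Index(spec, "(")
	client = spec[:open]
	options = strings.Split(strings.TrimSuffix(spec[open+1:], ")"), ",")
	return
}

func main() {
	path, client, opts := parseExport("/web 192.168.1.0/24(rw,no_root_squash,sync)")
	fmt.Println(path)   // /web
	fmt.Println(client) // 192.168.1.0/24
	fmt.Println(opts)   // [rw no_root_squash sync]
}
```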
3. Create a PV backed by the NFS server's shared directory
[root@master ~]# mkdir /pv
[root@master ~]# cd /pv/
[root@master pv]# vim nfs-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-web
  labels:
    type: pv-web
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs      # storage class name the PVC will request
  nfs:
    path: "/web"             # directory shared by the NFS server
    server: 192.168.1.231    # IP of the NFS server
    readOnly: false
[root@master pv]# kubectl apply -f nfs-pv.yml
persistentvolume/pv-web created
[root@master pv]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-web 10Gi RWX Retain Available nfs 12s
# Create a PVC that uses the PV
[root@master pv]# vim nfs-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-web
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs   # use the nfs storage class
[root@master pv]# kubectl apply -f nfs-pvc.yml
persistentvolumeclaim/pvc-web created
[root@master pv]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-web Bound pv-web 10Gi RWX nfs 13s
# Create pods that use the PVC
[root@master pv]# vim nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: sc-pv-storage-nfs
          persistentVolumeClaim:
            claimName: pvc-web
      containers:
        - name: sc-pv-container-nfs
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: sc-pv-storage-nfs
[root@master pv]# kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
[root@master pv]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-76855d4d79-mbbf7 1/1 Running 0 13s 10.244.166.131 node1 <none> <none>
nginx-deployment-76855d4d79-qgvth 1/1 Running 0 13s 10.244.104.4 node2 <none> <none>
nginx-deployment-76855d4d79-xkgz7 1/1 Running 0 13s 10.244.166.132 node1 <none> <none>
4. Test access
[root@master pv]# curl 10.244.166.131
have a nice day
[root@master pv]# curl 10.244.166.132
have a nice day
[root@master pv]# curl 10.244.104.4
have a nice day
# Modify the content served from the NFS mount
[root@nfs web]# echo "hello" >> index.html
# Access again
[root@master pv]# curl 10.244.104.4
have a nice day
hello
[root@master pv]# curl 10.244.166.132
have a nice day
hello
[root@master pv]# curl 10.244.166.131
have a nice day
hello
Deploy a MySQL Pod on k8s
1. Write the YAML file, containing the Deployment and Service
[root@master xm]# cat mysql_deploy_svc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.7.42
          name: mysql
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "xzx527416"
          ports:
            - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: svc-mysql
  name: svc-mysql
spec:
  selector:
    app: mysql
  type: NodePort
  ports:
    - port: 3306
      protocol: TCP
      targetPort: 3306
      nodePort: 30007
2. Deploy
[root@master xm]# kubectl apply -f mysql_deploy_svc.yaml
deployment.apps/mysql created
service/svc-mysql unchanged
[root@master xm]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h
svc-mysql NodePort 10.99.226.10 <none> 3306:30007/TCP 47s
[root@master xm]# kubectl get pod
NAME READY STATUS RESTARTS AGE
mysql-7964c6f547-v7m8d 1/1 Running 0 2m36s
[root@master xm]# kubectl exec -it mysql-7964c6f547-v7m8d -- bash
bash-4.2# mysql -uroot -pxzx527416
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.42 MySQL Community Server (GPL)

Copyright (c) 2000, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
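Because svc-mysql is a NodePort Service, a client outside the cluster can reach it through any node's IP on port 30007. A sketch of the connection string such a client (go-sql-driver/mysql style DSN) would use; `buildDSN` is our own hypothetical helper, and the choice of node IP is an assumption:

```go
package main

import "fmt"

// buildDSN formats the address an external MySQL client would use to
// reach svc-mysql: any cluster node's IP plus the Service's nodePort.
func buildDSN(user, password, nodeIP string, nodePort int) string {
	return fmt.Sprintf("%s:%s@tcp(%s:%d)/mysql", user, password, nodeIP, nodePort)
}

func main() {
	// 192.168.1.201 is k8s-node1; any node IP works for a NodePort Service.
	fmt.Println(buildDSN("root", "xzx527416", "192.168.1.201", 30007))
}
```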
Deploy nginx with HPA
First configure daemon.json so docker pulls images from our own Harbor registry (do this on every node):
vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["192.168.1.233:80"]
}
# Restart the docker service
systemctl daemon-reload && systemctl restart docker
Log in to Harbor (default account: admin, password: Harbor12345)
[root@master docker]# docker login 192.168.1.233:80
Username: admin
Password:
Error response from daemon: Get "http://192.168.1.233:80/v2/": unauthorized: authentication required
[root@master docker]# docker login 192.168.1.233:80
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
Pull the nginx image from the Harbor registry (on each node):
[root@node1 ~]# docker pull 192.168.1.233:80/library/nginx:1.0
1.0: Pulling from library/nginx
360eba32fa65: Extracting [===========================================> ] 25.07MB/29.12MB
c5903f3678a7: Downloading [==========================================> ] 34.74MB/41.34MB
27e923fb52d3: Download complete
72de7d1ce3a4: Download complete
94f34d60e454: Download complete
e42dcfe1730b: Download complete
907d1bb4e931: Download complete
Install metrics-server
Download the config file:
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# Replace the image
image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.0
imagePullPolicy: IfNotPresent
args:
# add the two args below
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalDNS,ExternalIP,Hostname
Deploy:
kubectl apply -f components.yaml
[root@master ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6949477b58-tbkl8 1/1 Running 1 7h10m
calico-node-4t8kx 1/1 Running 1 7h10m
calico-node-6lbdw 1/1 Running 1 7h10m
calico-node-p6ghl 1/1 Running 1 7h10m
coredns-7f89b7bc75-dxc9v 1/1 Running 1 7h15m
coredns-7f89b7bc75-kw7ph 1/1 Running 1 7h15m
etcd-master 1/1 Running 1 7h15m
kube-apiserver-master 1/1 Running 2 7h15m
kube-controller-manager-master 1/1 Running 1 7h15m
kube-proxy-87ptg 1/1 Running 1 7h15m
kube-proxy-8gbsd 1/1 Running 1 7h15m
kube-proxy-x4fbj 1/1 Running 1 7h15m
kube-scheduler-master 1/1 Running 1 7h15m
metrics-server-7787b94d94-jt9sc 1/1 Running 0 47s
[root@master ~]# kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
master 151m 7% 1221Mi 64%
node1 85m 8% 574Mi 65%
node2 193m 19% 573Mi 65%
Use HPA:
[root@master ~]# mkdir hpa
[root@master ~]# cd hpa
[root@master hpa]# cat myweb.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myweb
  name: myweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
        - name: myweb
          image: 192.168.1.233:80/library/nginx:1.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          resources:
            limits:
              cpu: 300m
            requests:
              cpu: 100m
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myweb-svc
  name: myweb-svc
spec:
  selector:
    app: myweb
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
      nodePort: 30001
Deploy: kubectl apply -f myweb.yaml
Enable the HPA
[root@master hpa]# kubectl autoscale deployment myweb --cpu-percent=50 --min=1 --max=10
horizontalpodautoscaler.autoscaling/myweb autoscaled
[root@master hpa]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
myweb Deployment/myweb 0%/50% 1 10 3 40s
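The HPA scales by comparing measured average CPU utilization against the 50% target we set with `--cpu-percent=50`, using the rule documented for Kubernetes: desired = ceil(currentReplicas × currentMetric / targetMetric). A small sketch of that arithmetic (the function name and sample utilization numbers are ours):

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas applies the Kubernetes HPA scaling rule:
//   desired = ceil(currentReplicas * currentUtilization / targetUtilization)
func desiredReplicas(current int, currentUtil, targetUtil float64) int {
	return int(math.Ceil(float64(current) * currentUtil / targetUtil))
}

func main() {
	// 3 replicas averaging 120% CPU against the 50% target -> scale up to 8
	fmt.Println(desiredReplicas(3, 120, 50))
	// idle at 1% -> scales down toward 1 (our HPA's --min)
	fmt.Println(desiredReplicas(3, 1, 50))
}
```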
Access
Here is a Go program that hits the service from many goroutines, so we can watch the pods and the HPA change
```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	pool := make(chan struct{}, 500) // cap concurrency at 500 in-flight requests
	var wg sync.WaitGroup
	// fire 10000 requests in goroutines
	for i := 0; i < 10000; i++ {
		pool <- struct{}{}
		wg.Add(1)
		go func() {
			defer func() {
				<-pool
				wg.Done()
			}()
			resp, err := http.Get("http://192.168.1.202:30001/")
			if err != nil {
				fmt.Println(err)
				return
			}
			defer resp.Body.Close()
			fmt.Println(resp.StatusCode)
		}()
	}
	wg.Wait() // wait for all goroutines to finish
}
```
[root@master hpa]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
myweb Deployment/myweb 25%/50% 1 10 1 14m
Use Ingress to Load-Balance the Web Service
[root@master ingress]# ls
ingress-controller-deploy.yaml ingress-nginx-controllerv1.1.0.tar.gz kube-webhook-certgen-v1.1.0.tar.gz my-ingress.yaml my-nginx-svc.yaml
Overview:
ingress-controller-deploy.yaml       YAML used to deploy the ingress controller
ingress-nginx-controllerv1.1.0.tar.gz    ingress-nginx-controller image
kube-webhook-certgen-v1.1.0.tar.gz       kube-webhook-certgen image
my-ingress.yaml                      config file that creates the Ingress
my-nginx-svc.yaml                    YAML that starts the sc-nginx-svc-1 service and its pods
Copy the archives to node1 and node2
[root@master ingress]# scp ingress-nginx-controllerv1.1.0.tar.gz root@node1:/root
ingress-nginx-controllerv1.1.0.tar.gz 100% 276MB 81.6MB/s 00:03
[root@master ingress]# scp ingress-nginx-controllerv1.1.0.tar.gz root@node2:/root
ingress-nginx-controllerv1.1.0.tar.gz 100% 276MB 81.4MB/s 00:03
[root@master ingress]# scp kube-webhook-certgen-v1.1.0.tar.gz root@node2:/root
kube-webhook-certgen-v1.1.0.tar.gz 100% 47MB 100.7MB/s 00:00
[root@master ingress]# scp kube-webhook-certgen-v1.1.0.tar.gz root@node1:/root
kube-webhook-certgen-v1.1.0.tar.gz 100% 47MB 120.5MB/s 00:00
Import on the nodes:
docker load -i ingress-nginx-controllerv1.1.0.tar.gz
docker load -i kube-webhook-certgen-v1.1.0.tar.gz
Deploy
kubectl apply -f ingress-controller-deploy.yaml
[root@master ~]# kubectl get ns
NAME STATUS AGE
default Active 5m33s
ingress-nginx Active 65s
kube-node-lease Active 5m34s
kube-public Active 5m34s
kube-system Active 5m34s
Create the svc to expose the service
[root@master ingress]# cat my-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  ingressClassName: nginx
  rules:
    - host: www.wen.com
      http:
        paths:
          - path: /foo
            pathType: Prefix
            backend:
              service:
                name: sc-nginx-svc-3
                port:
                  number: 80
          - path: /bar
            pathType: Prefix
            backend:
              service:
                name: sc-nginx-svc-4
                port:
                  number: 80
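The Ingress above fans requests out by path prefix: /foo goes to sc-nginx-svc-3, /bar to sc-nginx-svc-4. A minimal Go sketch of that Prefix-matching behavior (`backendFor` is our own illustration, not controller code):

```go
package main

import (
	"fmt"
	"strings"
)

// routes mirrors the two Prefix paths in my-ingress.yaml above.
var routes = []struct{ prefix, service string }{
	{"/foo", "sc-nginx-svc-3"},
	{"/bar", "sc-nginx-svc-4"},
}

// backendFor returns the Service a Prefix match would pick:
// an exact match, or the prefix followed by a path separator.
func backendFor(path string) string {
	for _, r := range routes {
		if path == r.prefix || strings.HasPrefix(path, r.prefix+"/") {
			return r.service
		}
	}
	return "" // no rule matched; the controller's default backend answers
}

func main() {
	fmt.Println(backendFor("/foo/index.html")) // sc-nginx-svc-3
	fmt.Println(backendFor("/bar"))            // sc-nginx-svc-4
	fmt.Println(backendFor("/other") == "")    // true
}
```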
kubectl apply -f my-nginx-svc.yaml
[root@master ingress]# kubectl describe svc sc-nginx-svc-4
Name: sc-nginx-svc-4
Namespace: default
Labels: app=sc-nginx-svc-4
Annotations: <none>
Selector: app=sc-nginx-feng-4
Type: ClusterIP
IP Families: <none>
IP: 10.104.90.254
IPs: 10.104.90.254
Port: name-of-service-port 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.104.2:80,10.244.104.3:80,10.244.166.130:80
Session Affinity: None
Events: <none>
[root@master ingress]# curl 10.104.90.254
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Manage the Cluster with Dashboard
1. First get the YAML file from the official site
[root@master dashboard]# ls
recommended.yaml
2. Deploy
[root@master dashboard]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
You can see there is now an extra namespace, kubernetes-dashboard
[root@master dashboard]# kubectl get ns
NAME STATUS AGE
default Active 3h43m
ingress-nginx Active 3h39m
kube-node-lease Active 3h43m
kube-public Active 3h43m
kube-system Active 3h43m
kubernetes-dashboard Active 7s
3. Check the pod and svc info
[root@master dashboard]# kubectl get pod -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-66dd8bdd86-s27cx 0/1 ContainerCreating 0 34s
kubernetes-dashboard-785c75749d-hbn6f 0/1 ContainerCreating 0 35s
[root@master dashboard]# kubectl get pod -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-66dd8bdd86-s27cx 0/1 ContainerCreating 0 42s
kubernetes-dashboard-785c75749d-hbn6f 1/1 Running 0 43s
[root@master dashboard]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.96.10.205 <none> 8000/TCP 51s
kubernetes-dashboard ClusterIP 10.108.234.121 <none> 443/TCP 51s
# The kubernetes-dashboard Service is ClusterIP, which is inconvenient to reach from a browser, so change it to NodePort
First delete the svc
[root@master dashboard]# kubectl delete svc kubernetes-dashboard -n kubernetes-dashboard
service "kubernetes-dashboard" deleted
Our own Service YAML file
[root@master dashboard]# vim dashboard-svc.yaml
[root@master dashboard]# cat dashboard-svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
4. Deploy
[root@master dashboard]# kubectl apply -f dashboard-svc.yaml
service/kubernetes-dashboard created
[root@master dashboard]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.96.10.205 <none> 8000/TCP 3m23s
kubernetes-dashboard NodePort 10.104.39.223 <none> 443:30389/TCP 16s
5. Create a user to log in to the dashboard
[root@master dashboard]# vim dashboard-svc-account.yaml
[root@master dashboard]# cat dashboard-svc-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dashboard-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
[root@master dashboard]# kubectl apply -f dashboard-svc-account.yaml
serviceaccount/dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
6. View the secret
[root@master dashboard]# kubectl get secret -n kube-system|grep admin|awk '{print $1}'
dashboard-admin-token-d65vh
The token used for login can be seen in the secret's describe output.
[root@master dashboard]# kubectl describe secret dashboard-admin-token-d65vh -n kube-system
Name: dashboard-admin-token-d65vh
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: 6e65a1b3-0669-47b7-a8d4-a16b0cacc069
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IloyczRwd1g0WGFxMXdfdG5TUVBRRy1sUW5mT0FEcEpYMWwwdC1EYnBHT1kifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tZDY1dmgiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNmU2NWExYjMtMDY2OS00N2I3LWE4ZDQtYTE2YjBjYWNjMDY5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.2homyNjWI18vJA81aNoyQ0cQkNhsRxHk-4PFeWrkqtX-DSidbg68nNEyEFWf2b3lswdJ33szLM51ulDr5qp8cmpBlPUCw8Wcl-5k2sY3eZoaMJDFdWARdbs20xmxA73wYNcHNhttkncrmuDXKuJs39j_Nff17kHJYCj9wOKAwfezvwDQEqOb7u7riUle2w54aELornD4AGemDGivdBR5AWOguSoLl3RTZ74cPycG_-IP-pggSNGCYc4LCnfkfMZdx6LFBh0Dzz10blWUSCUNFGXzD1rkG-TVvcug4infG8BYmGtYgl55_xAH_LMjGz9gSQdMnFOdC_hL27e9lONajg
[root@master dashboard]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.96.10.205 <none> 8000/TCP 9m17s
kubernetes-dashboard NodePort 10.104.39.223 <none> 443:30389/TCP 6m10s
[root@master dashboard]# kubectl get pod -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-66dd8bdd86-s27cx 1/1 Running 0 9m32s
kubernetes-dashboard-785c75749d-hbn6f 1/1 Running 0 9m33s
[root@master dashboard]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.96.10.205 <none> 8000/TCP 9m42s
kubernetes-dashboard NodePort 10.104.39.223 <none> 443:30389/TCP 6m35s
Log in from a browser on port 30389
Be sure to use https, and accept the browser's advanced/security warning to reach the login panel
Set Up Prometheus + Grafana Monitoring
1. Install Prometheus server
# Upload the downloaded tarball to the Linux server
[root@prometheus ~]# mkdir /prom
[root@prometheus ~]# cd /prom
[root@prometheus prom]# ls
prometheus-2.34.0.linux-amd64.tar.gz
# Unpack the tarball
[root@prometheus prom]# tar xf prometheus-2.34.0.linux-amd64.tar.gz
[root@prometheus prom]# ls
prometheus-2.34.0.linux-amd64 prometheus-2.34.0.linux-amd64.tar.gz
[root@prometheus prom]# mv prometheus-2.34.0.linux-amd64 prometheus
[root@prometheus prom]# ls
prometheus prometheus-2.34.0.linux-amd64.tar.gz
# Modify the PATH variable (temporarily and permanently) to add the prometheus path
[root@prometheus prometheus]# PATH=/prom/prometheus:$PATH   # temporary
[root@prometheus prometheus]# cat /root/.bashrc
PATH=/prom/prometheus:$PATH   # added line (permanent)
# Run the prometheus program
[root@prometheus prometheus]# nohup prometheus --config.file=/prom/prometheus/prometheus.yml &
[1] 8431
[root@prometheus prometheus]# nohup: ignoring input and appending output to 'nohup.out'
2. Manage Prometheus as a systemd service
[root@prometheus prometheus]# vim /usr/lib/systemd/system/prometheus.service
[Unit]
Description=prometheus
[Service]
ExecStart=/prom/prometheus/prometheus --config.file=/prom/prometheus/prometheus.yml
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
# Reload the systemd units
[root@prometheus prometheus]# systemctl daemon-reload
[root@prometheus prometheus]# service prometheus start
[root@prometheus system]# ps aux|grep prometheu
root 7193 2.0 4.4 782084 44752 ? Ssl 13:16 0:00 /prom/prometheus/prometheus --config.file=/prom/prometheus/prometheus.yml
root 7201 0.0 0.0 112824 972 pts/1 S+ 13:16 0:00 grep --color=auto prometheu
3. Run the Prometheus program
[root@prometheus prometheus]# nohup prometheus --config.file=/prom/prometheus/prometheus.yml &
[1] 1543
[root@prometheus prometheus]# nohup: ignoring input and appending output to 'nohup.out'
[root@prometheus prometheus]#
[root@prometheus prometheus]# service prometheus restart
Redirecting to /bin/systemctl restart prometheus.service
[root@prometheus prometheus]# netstat -anplut|grep prome
tcp6 0 0 :::9090 :::* LISTEN 1543/prometheus
tcp6 0 0 ::1:9090 ::1:42776 ESTABLISHED 1543/prometheus
tcp6 0 0 ::1:42776 ::1:9090 ESTABLISHED 1543/prometheus
4. Install the exporter program on the node servers
Download the node_exporter-1.4.0-rc.0.linux-amd64.tar.gz tarball and upload it to each node server:
wget https://github.com/prometheus/node_exporter/releases/download/v1.4.0/node_exporter-1.4.0.linux-amd64.tar.gz
Unpack it and move it into its own /node_exporter directory:
[root@mysql-master ~]# ls
anaconda-ks.cfg node_exporter-1.4.0-rc.0.linux-amd64.tar.gz
[root@mysql-master ~]# tar xf node_exporter-1.4.0-rc.0.linux-amd64.tar.gz
[root@mysql-master ~]# ls
node_exporter-1.4.0-rc.0.linux-amd64
node_exporter-1.4.0-rc.0.linux-amd64.tar.gz
Move it into its own /node_exporter directory
[root@mysql-master ~]# mv node_exporter-1.4.0-rc.0.linux-amd64 /node_exporter
[root@mysql-master ~]# cd /node_exporter/
[root@mysql-master node_exporter]# ls
LICENSE node_exporter NOTICE
[root@mysql-master node_exporter]#
# Modify the PATH variable
[root@mysql-master node_exporter]# PATH=/node_exporter/:$PATH
[root@mysql-master node_exporter]# vim /root/.bashrc
[root@mysql-master node_exporter]# tail -1 /root/.bashrc
PATH=/node_exporter/:$PATH
# Run the node_exporter agent
[root@mysql-master node_exporter]# nohup node_exporter --web.listen-address 0.0.0.0:8090 &
[root@mysql-master node_exporter]# ps aux | grep node_exporter
root 64281 0.0 2.1 717952 21868 pts/0 Sl 19:03 0:04 node_exporter --web.listen-address 0.0.0.0:8090
root 82787 0.0 0.0 112824 984 pts/0 S+ 20:46 0:00 grep --color=auto node_exporter
[root@mysql-master node_exporter]# netstat -anplut | grep 8090
tcp6 0 0 :::8090 :::* LISTEN 64281/node_exporter
tcp6 0 0 192.168.17.152:8090 192.168.17.156:43576 ESTABLISHED 64281/node_exporter
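Prometheus will scrape each exporter's /metrics endpoint, which serves plain-text lines of the form `name{labels} value`. A minimal Go sketch of that format (`parseMetric` and the sample line are ours, for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// parseMetric splits one line of node_exporter's /metrics output
// ("name{labels} value") into name, raw label string, and value.
func parseMetric(line string) (name, labels, value string) {
	sp := strings.LastIndex(line, " ")
	value = line[sp+1:]
	head := line[:sp]
	if open := strings.Index(head, "{"); open >= 0 {
		name = head[:open]
		labels = strings.TrimSuffix(head[open+1:], "}")
	} else {
		name = head // metric without labels, e.g. "up 1"
	}
	return
}

func main() {
	n, l, v := parseMetric(`node_cpu_seconds_total{cpu="0",mode="idle"} 12345.67`)
	fmt.Println(n) // node_cpu_seconds_total
	fmt.Println(l) // cpu="0",mode="idle"
	fmt.Println(v) // 12345.67
}
```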
5. Add the exporters to the Prometheus server
[root@prometheus prometheus]# vim prometheus.yml
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:9090"]
  - job_name: "master"
    static_configs:
      - targets: ["192.168.1.200:8090"]
  - job_name: "node1"
    static_configs:
      - targets: ["192.168.1.201:8090"]
  - job_name: "node2"
    static_configs:
      - targets: ["192.168.1.202:8090"]
  - job_name: "harbor"
    static_configs:
      - targets: ["192.168.1.233:8090"]
  - job_name: "nfs"
    static_configs:
      - targets: ["192.168.1.231:8090"]
[root@prometheus prometheus]# service prometheus restart
Redirecting to /bin/systemctl restart prometheus.service