Resources: Objects
Below are the resource types commonly used in Kubernetes. Note: a Pod is rarely created on its own; Pods are normally managed through Pod controllers, which is why the various controllers all belong to the workload category.
1. Workload resources (workload): Pod, ReplicaSet, Deployment, StatefulSet, DaemonSet, Job, CronJob, ...
2. Service discovery and load balancing: Service, Ingress, ...
3. Configuration and storage: Volume, CSI (cloud storage), ConfigMap, Secret, DownwardAPI
4. Cluster-level resources: Namespace, Node, Role, ClusterRole, RoleBinding, ClusterRoleBinding
5. Metadata resources: HPA, PodTemplate, LimitRange
The list above is not exhaustive. When creating resources, besides running kubectl commands directly, you can also define them declaratively in what is usually called a configuration manifest. For example:
[root@server1 ~]# kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
client                        1/1     Running   0          2h
myapp-848b5b879b-lj66h        1/1     Running   0          1h
myapp-848b5b879b-tbnjb        1/1     Running   0          1h
myapp-848b5b879b-tl78s        1/1     Running   0          1h
nginx-deploy-5b595999-stvlq   1/1     Running   0          2h
Running kubectl get pod with -o yaml prints the Pod's full internal definition in YAML. The output has exactly the format we use to define resources ourselves and is composed of many fields and nested attributes:
[root@server1 ~]# kubectl get pod myapp-848b5b879b-lj66h -o yaml
apiVersion: v1    # which Kubernetes API version the object belongs to, i.e. the API group's version, written group/version; the core group omits the group name
kind: Pod         # resource category
metadata:         # metadata, a nested field
  creationTimestamp: 2019-05-03T12:48:51Z
  generateName: myapp-848b5b879b-
  labels:
    pod-template-hash: "4046164356"
    run: myapp
  name: myapp-848b5b879b-lj66h
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: myapp-848b5b879b
    uid: f0b9c68c-6d9d-11e9-a366-525400d49963    # unique identifier, generated by the system; never set it yourself
  resourceVersion: "18686"
  selfLink: /api/v1/namespaces/default/pods/myapp-848b5b879b-lj66h    # self link: the object's own path within the API
  uid: cd4fcc8c-6da1-11e9-a366-525400d49963
spec:             # specification: the desired properties of the resource being created, e.g. which image each container runs
  containers:
  - image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    name: myapp
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-zwfzp
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: server2
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-zwfzp
    secret:
      defaultMode: 420
      secretName: default-token-zwfzp
# status is a very important nested field: spec is where the user defines the
# desired state of the target resource, while status reports its current state.
# When the two differ, the desired state wins: Kubernetes keeps driving the
# current state toward the desired one. status is therefore read-only and
# maintained by the system, whereas spec is defined by the user.
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2019-05-03T12:48:51Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2019-05-03T12:48:54Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: null
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: 2019-05-03T12:48:51Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://ba3a6efff1888984117742db9841c7fa4bf4f8693d5da6f19ff79dfeed116cb8
    image: ikubernetes/myapp:v1
    imageID: docker-pullable://ikubernetes/myapp@sha256:9c3dc30b5219788b2b8a4b065f548b922a34479577befb54b03330999d30d513
    lastState: {}
    name: myapp
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2019-05-03T12:48:53Z
  hostIP: 172.25.254.2
  phase: Running
  podIP: 10.244.1.10
  qosClass: BestEffort
  startTime: 2019-05-03T12:48:51Z
How resources are created:
The apiserver accepts resource definitions only in JSON format.
If you supply the manifest in YAML, the client converts it to JSON automatically before submitting it.
Note: JSON is heavyweight and awkward to write by hand, which is why manifests are usually written in YAML.
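Since the apiserver only consumes JSON, the YAML manifest you write is equivalent to a JSON document. A minimal sketch of that JSON form using only Python's standard library (the field values mirror the Pod above; this illustrates the format, not what kubectl literally executes):

```python
import json

# The same minimal Pod definition that YAML expresses far more tersely.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-demo", "namespace": "default"},
    "spec": {
        "containers": [
            {"name": "myapp", "image": "ikubernetes/myapp:v1"}
        ]
    },
}

# A JSON string like this is what the apiserver actually receives.
print(json.dumps(pod, indent=2))
```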
Fields found in most resource manifests:
apiVersion: indicates which group the resource's API belongs to; Kubernetes manages API versions by group.
    The benefit of versioning by group: when one part changes, the rest need not change, and multiple versions can coexist.
    apiVersion takes the form group/version.
    List the supported versions with $ kubectl api-versions; each API passes through three maturity levels (alpha, beta, stable), and the stable version is the one to prefer.
kind: the resource category, marking which type of resource is being created. It must be one of the types Kubernetes supports (you cannot make up your own) and has a fixed format.
metadata: metadata
    name: must be unique within its namespace
    namespace
    labels
    annotations
Each resource has a reference PATH of the form:
/api/GROUP/VERSION/namespaces/NAMESPACE/TYPE/NAME
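Plugging the Pod from the earlier dump into this pattern reproduces its selfLink (for the core group, GROUP is empty and the prefix collapses to /api). A small sketch:

```python
# Build the reference PATH for a core-group (v1) resource.
# Non-core groups would use an /apis/GROUP/VERSION prefix instead.
def pod_path(namespace: str, name: str, version: str = "v1") -> str:
    return f"/api/{version}/namespaces/{namespace}/pods/{name}"

# Matches the selfLink field shown in the kubectl get -o yaml output above.
print(pod_path("default", "myapp-848b5b879b-lj66h"))
```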
spec: defines the desired state. Note: each resource type has its own set of spec fields.
status: the current state; this field is maintained by the Kubernetes cluster itself.
Since each resource type has different spec fields, how do we find out what a given spec may contain? Use kubectl explain:
[root@server1 ~]# kubectl explain pods
To view a resource's second-level fields:
[root@server1 ~]# kubectl explain pods.metadata
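kubectl explain accepts dotted paths, so you can keep drilling into nested fields; a sketch of typical usage (output omitted here since the command needs a running cluster):

```shell
# Drill from the resource down into nested spec fields
kubectl explain pods.spec
kubectl explain pods.spec.containers
```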
Now write a simple YAML-format resource manifest:
[root@server1 ~]# mkdir mainfests
[root@server1 ~]# cd mainfests/
[root@server1 mainfests]#
[root@server1 mainfests]# ls
[root@server1 mainfests]# vim pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  - name: busybox
    image: busybox:latest
    command:
    - "/bin/sh"
    - "-c"
    - "echo $(date) >> /usr/share/nginx/html/index.html; sleep 5"
[root@server1 mainfests]# kubectl create -f pod-demo.yaml
pod/pod-demo created
[root@server1 mainfests]# kubectl get pods
NAME                          READY   STATUS              RESTARTS   AGE
client                        0/1     Error               0          17h
myapp-848b5b879b-lj66h        1/1     Running             1          15h
myapp-848b5b879b-tbnjb        1/1     Running             1          15h
myapp-848b5b879b-tl78s        1/1     Running             1          15h
nginx-deploy-5b595999-stvlq   1/1     Running             1          16h
pod-demo                      0/2     ContainerCreating   0          8s
[root@server1 mainfests]# kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
client                        0/1     Error     0          17h
myapp-848b5b879b-lj66h        1/1     Running   1          15h
myapp-848b5b879b-tbnjb        1/1     Running   1          15h
myapp-848b5b879b-tl78s        1/1     Running   1          15h
nginx-deploy-5b595999-stvlq   1/1     Running   1          16h
pod-demo                      2/2     Running   1          29s
Note: when asking for detailed information, specify the resource type first (pods) and then the resource name:
[root@server1 mainfests]# kubectl describe pods pod-demo
.........
Events:
Type     Reason     Age                From                Message
----     ------     ----               ----                -------
Normal   Scheduled  2m                 default-scheduler   Successfully assigned default/pod-demo to server3
Normal   Pulled     2m                 kubelet, server3    Container image "ikubernetes/myapp:v1" already present on machine
Normal   Created    2m                 kubelet, server3    Created container
Normal   Started    2m                 kubelet, server3    Started container
Normal   Pulling    1m (x4 over 2m)    kubelet, server3    pulling image "busybox:latest"
Normal   Pulled     52s (x4 over 2m)   kubelet, server3    Successfully pulled image "busybox:latest"
Normal   Created    52s (x4 over 2m)   kubelet, server3    Created container
Normal   Started    52s (x4 over 2m)   kubelet, server3    Started container
Warning  BackOff    16s (x6 over 1m)   kubelet, server3    Back-off restarting failed container
........
When troubleshooting a container you will need its log output, so learn to use kubectl logs:
[root@server1 mainfests]# kubectl logs pod-demo myapp
[root@server1 mainfests]# curl 10.244.2.16
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server1 mainfests]#
[root@server1 mainfests]# kubectl logs pod-demo myapp
10.244.0.0 - - [04/May/2019:04:58:55 +0000] "GET / HTTP/1.1" 200 65 "-" "curl/7.29.0" "-"
[root@server1 mainfests]# kubectl get pods
NAME                          READY   STATUS             RESTARTS   AGE
client                        0/1     Error              0          17h
myapp-848b5b879b-lj66h        1/1     Running            1          16h
myapp-848b5b879b-tbnjb        1/1     Running            1          16h
myapp-848b5b879b-tl78s        1/1     Running            1          16h
nginx-deploy-5b595999-stvlq   1/1     Running            1          17h
pod-demo                      1/2     CrashLoopBackOff   7          15m    # one of the containers inside this Pod has crashed
Show the detailed log output of the container that crashed:
[root@server1 mainfests]# kubectl logs pod-demo busybox
/bin/sh: can't create /usr/share/nginx/html/index.html: nonexistent directory
[root@server1 mainfests]# kubectl get pods -w    # -w keeps watching for changes
NAME                          READY   STATUS              RESTARTS   AGE
client                        0/1     Error               0          17h
myapp-848b5b879b-lj66h        1/1     Running             1          16h
myapp-848b5b879b-tbnjb        1/1     Running             1          16h
myapp-848b5b879b-tl78s        1/1     Running             1          16h
nginx-deploy-5b595999-stvlq   1/1     Running             1          17h
pod-demo                      0/2     ContainerCreating   0          16s
Use kubectl exec -it to interact with a container inside a Pod object: pod-demo names the Pod, -c selects which container inside the Pod, and what follows -- is the command to run interactively (here the shell's absolute path).
[root@server1 ~]# kubectl exec -it pod-demo -c myapp -- /bin/sh
/ # ls
bin etc lib mnt root sbin sys usr
dev home media proc run srv tmp var
/ # ls /usr/share/nginx/html/
50x.html index.html
The write failed because the two containers inside the same Pod still have isolated filesystems: the namespace mechanism keeps them apart, so busybox cannot reach the data under myapp's namespace. Only when both containers mount the same storage volume inside the Pod does that path become shared.
[root@server1 mainfests]# kubectl delete -f pod-demo.yaml    # deletes the resources defined in the manifest
pod "pod-demo" deleted
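To make the shared-volume point concrete, here is a hedged sketch of how pod-demo.yaml could be amended so the two containers actually share a directory: an emptyDir volume mounted at nginx's web root in myapp and at /data in busybox. The volume name, the /data path, and the while loop are illustrative additions, not part of the original manifest; the loop also keeps busybox from exiting after one write, avoiding the CrashLoopBackOff seen above.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html                        # shared with busybox
      mountPath: /usr/share/nginx/html
  - name: busybox
    image: busybox:latest
    volumeMounts:
    - name: html                        # same volume, so writes here are visible to myapp
      mountPath: /data
    command:
    - "/bin/sh"
    - "-c"
    - "while true; do echo $(date) >> /data/index.html; sleep 5; done"
  volumes:
  - name: html
    emptyDir: {}                        # lives exactly as long as the Pod
```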