Kubernetes Pods and ReplicaSets: An In-Depth Guide
1. Introduction to Pods
In a Kubernetes cluster, you can’t run containers directly; instead, you run pods. Pods are the atomic units of deployment in Kubernetes. A pod is an abstraction of one or more co-located containers that share the same kernel namespaces, such as the network namespace.
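To make the idea of co-located containers sharing a network namespace concrete, here is a minimal sketch of a pod with two containers; the pod name, the sidecar container, and its polling loop are illustrative assumptions, not part of the original example. Because both containers share the pod's network namespace, the sidecar reaches Nginx simply via localhost:
apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod        # hypothetical name, for illustration only
spec:
  containers:
  - name: web                    # serves on port 80 inside the shared namespace
    image: nginx:alpine
  - name: sidecar                # reaches the web container via localhost:80
    image: alpine:latest
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null && echo ok; sleep 10; done"]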
For example, docker stack deploy creates a Deployment for the web and db services and a Kubernetes Service for the web service so that it can be accessed within the cluster. Before proceeding further, you can remove the stack from the cluster with the following command:
$ kubectl delete -f '*.yaml'
1.1 Pod Structure and Networking
- Pod Structure: A pod can have one or more containers. Each pod gets a unique IP address within the whole Kubernetes cluster. For instance, if we have two pods, Pod 1 and Pod 2, they might have IP addresses like 10.0.12.3 and 10.0.12.5, respectively. All containers within a pod share the same Linux kernel namespaces, especially the network namespace. (A quick kubectl command for inspecting pod IPs follows the comparison table below.)
- Networking:
  - Docker Container Networking: When a Docker container is created without specifying a network, Docker Engine creates a virtual ethernet (veth) endpoint for it. Each container has its own network namespace for isolation, and the veth endpoints are connected to the docker0 bridge.
  - Kubernetes Pod Networking: Kubernetes first creates a pause container when creating a new pod. The pause container creates and manages the namespaces that the other containers in the pod will share. Those containers reuse the network namespace of the pause container via the --net container:pause option:
$ docker container create --net container:pause ...
The following table compares Docker container and Kubernetes pod networking:
| Aspect | Docker Container | Kubernetes Pod |
| ---- | ---- | ---- |
| Network Namespace | Each container has its own | Containers in a pod share the same |
| Endpoint Creation | Docker creates a veth endpoint for each container | Only the pause container has a veth endpoint, others reuse it |
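If you want to see the pod IP addresses mentioned above on a live cluster, the wide output format of kubectl shows them alongside the node each pod runs on; this is a generic command, not a step from the original walkthrough:
$ kubectl get pods -o wide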
1.2 Sharing the Network Namespace
To simulate the creation of a pod:
1. Create a pause container using Nginx:
$ docker container run --detach \
    --name pause nginx:alpine
2. Add a second container called main and attach it to the same network namespace as the pause container:
$ docker container run --name main -d -it \
    --net container:pause \
    alpine:latest ash
3. Exec into the main container:
$ docker exec -it main /bin/sh
4. Test the connection to Nginx in the pause container:
/ # wget -qO - localhost
5. Check the IP address of eth0 in both containers:
/ # ip a show eth0
6. Inspect the bridge network:
$ docker network inspect bridge
7. Remove the containers:
$ docker container rm pause main
The mermaid flowchart below shows the process of creating a pod and testing network sharing:
graph LR
A[Create Pause Container] --> B[Create Main Container in Pause's NS]
B --> C[Exec into Main Container]
C --> D[Test Nginx Connection]
D --> E[Check IP Address]
E --> F[Inspect Bridge Network]
F --> G[Remove Containers]
2. Pod Life Cycle
A pod has a life cycle similar to that of a container, but it is more complex because a pod can contain multiple containers.
- Pending: When a pod is created on a cluster node, it first enters the pending status.
- Running: Once all containers in the pod are up and running successfully, the pod enters the running status.
- Succeeded: If the pod is asked to terminate and all of its containers terminate with an exit code of zero, the pod enters the succeeded status.
- Failed: There are three scenarios in which a pod enters the failed state:
  - During startup, if at least one container fails (non-zero exit code).
  - While the pod is running, if one of the containers crashes or exits with a non-zero exit code.
  - During termination, if at least one container exits with a non-zero exit code.
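To observe these phases on a running cluster, you can query a pod's status directly; this is a generic kubectl sketch, assuming a pod named web-pod like the one created in the next section:
$ kubectl get pod web-pod -o jsonpath='{.status.phase}'
The STATUS column of a plain kubectl get pods shows similar information at a glance.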
The following mermaid flowchart shows the pod life cycle:
graph LR
A[Pod Created] --> B[Pending]
B --> C{All Containers Running?}
C -- Yes --> D[Running]
C -- No --> E[Failed]
D --> F{Termination Requested?}
F -- Yes --> G{All Containers Exit 0?}
G -- Yes --> H[Succeeded]
G -- No --> E
3. Pod Specifications
When creating a pod in a Kubernetes cluster, a declarative approach is preferred. Pod specifications can be written in YAML or JSON. Here is a sample YAML specification in a pod.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
  - name: web
    image: nginx:alpine
    ports:
    - containerPort: 80
To apply this specification to the cluster:
1. Open a new terminal window and navigate to the relevant subfolder:
$ cd ~/The-Ultimate-Docker-Container-Book/ch16
2. Make sure you are using the right context for kubectl:
$ kubectl config use-context docker-desktop
3. Create a new file called pod.yaml and add the pod specification to it, then save the file.
4. Create the pod:
$ kubectl create -f pod.yaml
5. List all pods in the cluster:
$ kubectl get pods
6. Get detailed information about the pod:
$ kubectl describe pod/web-pod
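For comparison with the declarative approach preferred above, the same pod could also be created imperatively; this is a generic kubectl sketch rather than a step from the original walkthrough (delete the existing web-pod first if you try it):
$ kubectl run web-pod --image=nginx:alpine --port=80
The imperative form is handy for quick experiments, but the YAML specification above is easier to version-control and reproduce.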
4. Pods and Volumes
Pods can use volumes for accessing and storing persistent data.
1. Create a PersistentVolumeClaim. Create a file called volume-claim.yaml with the following content:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
2. Create the claim:
$ kubectl create -f volume-claim.yaml
3. List the claim:
$ kubectl get pvc
4. Use the volume in a pod. Create a file called pod-with-vol.yaml with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
  - name: web
    image: nginx:alpine
    ports:
    - containerPort: 80
    volumeMounts:
    - name: my-data
      mountPath: /data
  volumes:
  - name: my-data
    persistentVolumeClaim:
      claimName: my-data-claim
5. Create the pod:
$ kubectl create -f pod-with-vol.yaml
6. Exec into the container to check the volume mount:
$ kubectl exec -it web-pod -- /bin/sh
/ # cd /data
/data # echo "Hello world!" > sample.txt
/data # exit
7. Delete the pod:
$ kubectl delete pod/web-pod
8. Recreate the pod:
$ kubectl create -f pod-with-vol.yaml
9. Exec into the container and output the data:
$ kubectl exec -it web-pod -- ash
/ # cat /data/sample.txt
10. Exit the container and delete the pod and the persistent volume claim.
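The last step is left to the reader in the original text; a minimal sketch of that cleanup, assuming the names used above, could look like this:
/ # exit
$ kubectl delete pod/web-pod
$ kubectl delete pvc/my-data-claim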
5. Kubernetes ReplicaSets
A single pod is not sufficient in an environment with high-availability requirements. Kubernetes ReplicaSets are used to define and manage a collection of identical pods running on different cluster nodes.
- Function of ReplicaSets: A ReplicaSet ensures that the desired number of pods is running at all times. If a pod crashes, the ReplicaSet schedules a new pod on a node with free resources. If there are more pods than the desired number, it deletes the superfluous pods. This provides a self-healing and scalable set of pods.
The following table summarizes the key features of ReplicaSets:
| Feature | Description |
| ---- | ---- |
| Desired State | Defines container images, number of pod instances, etc. |
| Self-Healing | Schedules new pods if existing ones crash |
| Scalability | Can adjust the number of pods based on requirements |
The mermaid flowchart below shows how a ReplicaSet manages pods:
graph LR
A[ReplicaSet] --> B{Actual Pods = Desired Pods?}
B -- Yes --> C[Maintain Status Quo]
B -- No --> D{Actual < Desired?}
D -- Yes --> E[Spawn New Pod]
D -- No --> F[Delete Extra Pods]
6. ReplicaSet Specification
Kubernetes allows both imperative and declarative approaches to define and create a ReplicaSet. The declarative approach is recommended.
Here is a sample specification for a Kubernetes ReplicaSet in a replicaset.yaml file:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-web
spec:
  selector:
    matchLabels:
      app: web
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
The key elements of this specification are:
1. Selector: Determines which pods are part of the ReplicaSet. In this case, it selects pods with the label app: web.
2. Replicas: Defines the number of pod instances to run. Here, it is set to 3.
3. Template: Defines the metadata and specification of the pods in the ReplicaSet.
To create the ReplicaSet using the declarative approach:
1. Create the replicaset.yaml file with the above content.
2. Apply the specification to the cluster:
$ kubectl create -f replicaset.yaml
3. List all ReplicaSets in the cluster:
$ kubectl get rs
The output will look like this:
NAME     DESIRED   CURRENT   READY   AGE
rs-web   3         3         3       51s
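To see the self-healing and scaling behavior described earlier in action, you can experiment with the ReplicaSet; these are generic kubectl commands, not steps from the original text, and the pod name is a placeholder:
$ kubectl delete pod <one-of-the-rs-web-pods>
$ kubectl get pods
$ kubectl scale rs/rs-web --replicas=5
After deleting a pod, the ReplicaSet immediately schedules a replacement, and scaling changes the desired replica count on the fly. Clean up with kubectl delete -f replicaset.yaml when you are done.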
In summary, understanding pods, their life cycle, specifications, and how they interact with volumes is fundamental in Kubernetes. ReplicaSets build on this knowledge to provide high availability and scalability. By following the steps and examples provided, you can effectively manage pods and ReplicaSets in your Kubernetes cluster.