Installing and Deploying a Kubernetes Cluster from Binaries

This article walks through installing a Kubernetes cluster from binary files, covering the CentOS YUM repository change, etcd installation and configuration, an overview of the main components, and the detailed installation steps for the master and node roles.


Preface

This document covers only the installation of the Kubernetes components, including the etcd cluster.

Modifying the CentOS YUM repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
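
As an optional sanity check, confirm that yum now sees the repository:

yum clean all
yum repolist enabled | grep -i kubernetes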

Installing etcd (master, node)

yum install -y etcd

Configuration

  cat > /etc/etcd/etcd.conf << EOF
  ETCD_NAME=etcd1
  ETCD_DATA_DIR="/var/lib/etcd/etcd1.etcd"
  ETCD_LISTEN_PEER_URLS="http://ip1:2380"
  ETCD_LISTEN_CLIENT_URLS="http://ip1:2379,http://127.0.0.1:2379"
  ETCD_MAX_SNAPSHOTS="5"
  ETCD_INITIAL_ADVERTISE_PEER_URLS="http://ip1:2380"
  ETCD_INITIAL_CLUSTER="etcd1=http://ip1:2380,etcd2=http://ip2:2380,etcd3=http://ip3:2380" # IPs of all cluster members
  ETCD_ADVERTISE_CLIENT_URLS="http://ip1:2379"
  EOF
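
Before continuing, start etcd on each member and check cluster health. A minimal sequence might look like this, using the v2 etcdctl API that the CentOS etcd package defaults to and the same ip1/ip2/ip3 placeholders:

systemctl enable etcd
systemctl start etcd
# once all members are up, every one of them should report healthy
etcdctl --endpoints "http://ip1:2379,http://ip2:2379,http://ip3:2379" cluster-health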

Component overview

A Kubernetes cluster has two main types of nodes: master nodes and node (worker) nodes.

Master node

Exposes the API endpoints used to manage the cluster and carries out cluster operations by interacting with the node nodes. It mainly runs the following services:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager
  • (if there is no separate etcd cluster, the etcd service must be deployed here as well)

Node

The nodes that actually run the Docker containers; they interact with the local Docker daemon and provide the service proxy. They mainly run the following services:

  • kubelet
  • kube-proxy
  • flannel
  • docker
  • (if there is no separate etcd cluster, the etcd service must be deployed here as well)

Service descriptions

kube-apiserver: the entry point through which users interact with the Kubernetes cluster. It wraps the create/read/update/delete operations on the core objects, exposes them as a RESTful API, and uses etcd for persistence and to keep object state consistent.

kube-scheduler: responsible for scheduling and managing cluster resources. For example, when a pod exits abnormally and has to be placed on a new machine, the scheduler uses its scheduling algorithm to find the most suitable node.

kube-controller-manager: mainly keeps the number of running pods equal to the replica count declared by a replicationController, and keeps the mapping from services to pods up to date.

kubelet: runs on node nodes and talks to the local Docker daemon, for example to start and stop containers and monitor their state.

kube-proxy: runs on node nodes and provides the proxy function for pods. It periodically fetches service information from etcd and rewrites iptables rules accordingly to forward traffic to the node hosting the target pod (the earliest versions forwarded traffic in the proxy process itself, which was less efficient).
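
As a quick illustration, on a node where kube-proxy is already running in iptables mode you can inspect the rules it generates; the KUBE-SERVICES chain below is what iptables-mode kube-proxy creates, though the exact output varies by version:

iptables -t nat -L KUBE-SERVICES -n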

etcd: a key-value store used to hold Kubernetes cluster state.

flannel: an overlay-network tool designed by the CoreOS team for Kubernetes; it must be downloaded and deployed separately. When Docker starts, it sets up an IP address range used to talk to containers; if left unmanaged, that range may be identical on every machine and only reachable locally, so containers on different hosts cannot talk to each other. Flannel re-plans how IP addresses are allocated across all nodes in the cluster, so that containers on different nodes receive non-overlapping addresses within one flat internal network and can reach each other directly over those internal IPs.

Installation

Download the binaries

Download the Server Binaries package from the official release notes page at https://kubernetes.io/docs/setup/release/notes/ ; it contains all the binaries needed for the client, node, and server. This document uses v1.13.0.
Run the following commands to obtain all the binaries:

cd
wget https://dl.k8s.io/v1.13.0/kubernetes-server-linux-amd64.tar.gz
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
ls -l

Master installation

Copy the required binaries to the target path
cp ~/kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler} /usr/bin/
Generate the CA files
openssl genrsa -out ca.key 2048 
openssl req -x509 -new -nodes -key ca.key -days 10000 -out ca.pem -subj "/CN=kubernetes/O=k8s"

Create the openssl.cnf file

cat > openssl.cnf << EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = 192.169.0.1 # first IP of the service cluster IP range
IP.2 = 10.0.0.100  # apiserver address
IP.3 = 192.169.0.53 # kubernetes DNS address
EOF

Generate the apiserver certificate pair

openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -out apiserver.csr -subj "/CN=kubernetes/O=k8s" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out apiserver.pem -days 3650 -extensions v3_req -extfile openssl.cnf

Place the generated files under the /etc/kubernetes/ssl/ directory.
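
A minimal way to do that, assuming the key and certificate files were generated in the current directory:

mkdir -p /etc/kubernetes/ssl
cp ca.key ca.pem apiserver.key apiserver.pem /etc/kubernetes/ssl/
# optional: confirm the SANs from openssl.cnf made it into the certificate
openssl x509 -in /etc/kubernetes/ssl/apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"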

Add the kube-apiserver service
cat > /usr/lib/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver  \\
        \$KUBE_ETCD_SERVERS \\
        \$KUBE_API_ADDRESS \\
        \$KUBE_API_PORT \\
        \$KUBE_SERVICE_ADDRESSES \\
        \$KUBE_ADMISSION_CONTROL \\
        \$KUBE_API_LOG \\
        \$KUBE_API_ARGS 
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
mkdir -p /etc/kubernetes
cat > /etc/kubernetes/apiserver << EOF
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--insecure-port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd-ip:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=192.169.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_LOG="--logtostderr=false --log-dir=/var/log/kubernetes/apiserver --v=2"
KUBE_API_ARGS="--tls-cert-file=/etc/kubernetes/ssl/apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/apiserver.key --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca.key"
EOF
Add the kube-controller-manager service
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
After=kube-apiserver.service 
Requires=kube-apiserver.service

[Service]
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \\
        \$KUBE_MASTER \\
        \$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
cat > /etc/kubernetes/controller-manager << EOF
KUBE_MASTER="--master=http://master-ip:8080"
KUBE_CONTROLLER_MANAGER_ARGS="--cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca.key --service-account-private-key-file=/etc/kubernetes/ssl/ca.key --root-ca-file=/etc/kubernetes/ssl/ca.pem"
EOF
Add the kube-scheduler service
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
After=kube-apiserver.service 
Requires=kube-apiserver.service

[Service]
User=root
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \\
        \$KUBE_MASTER \\
        \$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
cat > /etc/kubernetes/scheduler << EOF
KUBE_MASTER="--master=http://master-ip:8080"
KUBE_SCHEDULER_ARGS="--logtostderr=true --log-dir=/var/log/kubernetes/scheduler --v=2"
EOF

Note: replace master-ip and etcd-ip above with the actual IP addresses.

Start the services on the master
systemctl daemon-reload 
systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service
Check the master status
kubectl get cs

Normal output looks like this:

NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
scheduler            Healthy   ok

Node installation

Install Docker
yum install docker -y
Install the flannel service
yum install -y flannel
cat > /etc/sysconfig/flanneld << EOF
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"

# Config prefix; flanneld looks up its configuration in etcd under this prefix
FLANNEL_ETCD_PREFIX="/abc.cn/network"
EOF

Once the etcd service is running, insert a record into the etcd cluster; per the flanneld configuration above, the key name is /abc.cn/network/config:

etcdctl mk /abc.cn/network/config '{"Network": "192.168.0.0/16"}'
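
To confirm the key is in place (again using the v2 etcdctl API):

etcdctl get /abc.cn/network/config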

Start Docker

systemctl start docker

Delete the docker0 interface

ip link del docker0
Start the flanneld service
systemctl start flanneld
Restart the Docker service
systemctl  restart docker
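
If the flannel integration worked, docker0 should come back up with an address inside the 192.168.0.0/16 overlay; a quick check (the subnet.env path assumes the stock CentOS flannel package wiring):

cat /run/flannel/subnet.env
ip addr show docker0
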
Copy the kube-proxy and kubelet binaries downloaded on the master into the /usr/bin directory on the node.
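
For example, pushing them from the master over SSH (node-ip is the same placeholder used elsewhere in this document):

scp ~/kubernetes/server/bin/{kubelet,kube-proxy} root@node-ip:/usr/bin/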
Install the kube-proxy service
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
 
[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \\
            \$KUBE_LOGTOSTDERR \\
            \$KUBE_LOG_LEVEL \\
            \$KUBE_MASTER \\
            \$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
EOF
mkdir -p /etc/kubernetes
cat > /etc/kubernetes/proxy << EOF
KUBE_PROXY_ARGS=""
EOF
cat > /etc/kubernetes/config << EOF
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://master-ip:8080"
EOF
Install the kubelet service
cat > /usr/lib/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \$KUBELET_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
cat > /etc/kubernetes/kubelet << EOF
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=node-ip"
KUBELET_API_SERVER="--api-servers=http://master-ip:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=docker.io/tianyebj/pod-infrastructure:latest"
KUBELET_ARGS="--enable-server=true --enable-debugging-handlers=true --fail-swap-on=false --kubeconfig=/var/lib/kubelet/kubeconfig --cgroup-driver=systemd"
EOF
mkdir -p /var/lib/kubelet
cat > /var/lib/kubelet/kubeconfig << EOF
apiVersion: v1
kind: Config
users:
- name: kubelet
clusters:
- name: kubernetes
  cluster:
    server: http://master-ip:8080
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: service-account-context
current-context: service-account-context
EOF

Note: replace master-ip and node-ip above with the actual IP addresses.

Start the services on the node
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl enable kubelet
systemctl start kubelet
Other notes
  • Because pods pull the pause (pod infrastructure) image when they start, and network problems can make that fail, run the following commands as a workaround:
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
  • Because of some default settings, access from inside and outside the cluster may break; run the following commands on every node to fix it:
ip route add 192.169.0.0/16 dev docker0 # route for the service cluster IP range
iptables -P FORWARD ACCEPT

Verification

Run on the master:

kubectl get node

Normal output looks like this:

NAME         STATUS                     ROLES    AGE     VERSION
k8s-node-1   Ready                      <none>   3h43m   v1.13.0

Install CoreDNS

So that services inside Kubernetes can reach each other by name, the official distribution has shipped CoreDNS as the cluster DNS add-on since v1.11 (kube-dns before that).
Once the add-on is installed, pods in the cluster can resolve a service name to the service's cluster IP, which provides in-cluster DNS resolution.
Download the required files from GitHub: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns/coredns
In the coredns.yaml.base file:
Replace __PILLAR__DNS__SERVER__ with the cluster IP of the DNS service.
Replace __PILLAR__DNS__DOMAIN__ with cluster.local.
Change the image address to a mirror reachable from inside China, otherwise the image cannot be pulled.
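
One possible way to make the two placeholder substitutions (a sketch that assumes the DNS cluster IP 192.169.0.53 chosen earlier and writes the result to coredns.yaml; the image change is still done by hand):

sed -e 's/__PILLAR__DNS__SERVER__/192.169.0.53/g' \
    -e 's/__PILLAR__DNS__DOMAIN__/cluster.local/g' \
    coredns.yaml.base > coredns.yaml
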
The modified file looks like this:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns:1.3.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 192.169.0.53
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

Start the CoreDNS service

kubectl  create -f coredns.yaml

For cluster DNS to take effect, modify the kubelet configuration on every node so that it uses this DNS service.
Add the following to the KUBELET_ARGS parameter in the /etc/kubernetes/kubelet file:

--cluster_dns=192.169.0.53 --cluster_domain=cluster.local

Then restart the kubelet service:

systemctl  restart kubelet

Check that Kubernetes works correctly using two simple YAML files.

nginx-rc.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
  labels:
    name: nginx-rc-labels
spec:
  replicas: 2
  selector:
    name: nginx-rc
  template:
    metadata:
      labels:
        name: nginx-rc
    spec:
      containers:
      - image: nginx
        name: nginx

nginx-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  ports:
  - name: test
    port: 80
    targetPort: 80
    nodePort: 30000
  selector:
    name: nginx-rc

kubectl create -f nginx-rc.yaml      # create the specified number of pods
kubectl create -f nginx-service.yaml # group the pods just created into a service

Use kubectl get svc to find the service's CLUSTER-IP. From the host you can reach the nginx pods with curl CLUSTER-IP:80, or with curl node-ip:30000 via the NodePort.

If access fails, run kubectl describe rc nginx-rc and kubectl describe svc nginx-service to check for errors.
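
To also confirm that in-cluster DNS resolution through CoreDNS works, one option is to resolve the service name from a throwaway pod (the pod name and busybox image below are illustrative):

kubectl run dns-test --image=busybox:1.28 --restart=Never --command -- sleep 3600
kubectl exec dns-test -- nslookup nginx-service
kubectl delete pod dns-test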
