k8s Environment Setup

This article walks through setting up a Kubernetes cluster: install scripts, network configuration, environment setup, installing kubectl via Snap, and deploying the cluster, plus building the development environment from source following the official docs.

Mirantis runs a training course and provides install scripts.

Training material (student, sublime)

Test site (user:  pass: happy knuth)

 

All the k8s projects:

https://github.com/kubernetes

Source code and examples are both there.

Install environment:

http://los-vmm.sc.intel.com/wiki/Start_a_devstack_in_20_minutes

wget -O- http://otcloud-gateway.bj.intel.com/runstack |bash

For cloud-init configuration, see http://www.cnblogs.com/shaohef/p/8137073.html

 

User setup

yanglin wrote an install script:

https://github.com/shaohef/transcoder-daemon/blob/master/k8s/installk8s.sh

Installing kubectl with snap

$ sudo apt update
$ sudo apt upgrade

Installing and managing packages with Snap on Ubuntu

One-click script

# https://kubernetes.io/docs/tasks/tools/install-kubectl/

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

source <(kubectl completion bash)

# install docker
sudo docker version
if [ $? != 0 ]; then
    wget -O- https://get.docker.com/ |bash
    sudo usermod -aG docker $USER
fi
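The script above tests `$?` right after `sudo docker version`; a guard on `command -v` is more robust, since any intervening command clobbers `$?`. A minimal sketch — `need_install` is a hypothetical helper, not part of the original script:

```shell
# Print "yes" if the named command is missing from PATH, "no" otherwise.
need_install() {
    if command -v "$1" >/dev/null 2>&1; then
        echo no
    else
        echo yes
    fi
}

need_install sh                    # "no": sh is always present
need_install definitely-missing-x  # "yes": would trigger the install branch
```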

# https://kubernetes.io/docs/tasks/tools/install-minikube/
# Use a VM to install kubernetes

# https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#14-installing-kubeadm-on-your-hosts
# https://kubernetes.io/docs/setup/independent/install-kubeadm/
sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
# should use tee for sudo user
cat <<EOF |sudo tee -a /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
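As the comment notes, `sudo tee` is needed (rather than `sudo cmd > file`) because the redirection is performed by the unprivileged calling shell before sudo ever runs; tee opens the file itself, so it inherits sudo's privileges. The same heredoc-into-tee shape, demonstrated without sudo on a temp file:

```shell
tmp=$(mktemp)
cat <<EOF | tee "$tmp" >/dev/null
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
written=$(cat "$tmp")
echo "$written"   # deb http://apt.kubernetes.io/ kubernetes-xenial main
rm -f "$tmp"
```

Note the original uses `tee -a`, which appends; rerunning it adds a duplicate line to kubernetes.list.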
sudo apt-get update
sudo apt-get install -y kubelet kubeadm   # kubectl

# https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
# https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#24-initializing-your-master

sudo kubeadm init --pod-network-cidr=10.244.0.0/16  # flannel
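The `--pod-network-cidr` here must match the `Network` value in flannel's kube-flannel.yml, which defaults to 10.244.0.0/16. With a /16, the first two octets of every pod IP are pinned; a tiny shell check (`in_pod_cidr` is a hypothetical helper, for illustration only):

```shell
# Print "yes" if an IPv4 address falls inside 10.244.0.0/16.
in_pod_cidr() {
    case "$1" in
        10.244.*.*) echo yes ;;
        *)          echo no  ;;
    esac
}

in_pod_cidr 10.244.1.5   # yes: a valid flannel pod IP
in_pod_cidr 10.245.0.7   # no: outside the /16
```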

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network
# https://github.com/coreos/flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml

# Wait for kube-dns to become ready
sleep 30

# https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#master-isolation
# the trailing "-" removes the taint, so pods can schedule on the master (single-node cluster)
kubectl taint nodes --all node-role.kubernetes.io/master-

# https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#44-joining-your-nodes
curl https://glide.sh/get | sh
 
 

 

Accessing the API through a proxy: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#optional-proxying-api-server-to-localhost

For various other configuration, also see: https://kubernetes.io/docs/getting-started-guides/scratch/

yanglin also has a cluster-deployment setup:

https://github.com/LinEricYang/kubernetes-vagrant-ansible

vagrant install has been superseded by minikube install: https://kubernetes.io/docs/getting-started-guides/alternatives/

Developer setup

1. Official reference docs

https://github.com/kubernetes/community/tree/master/contributors/devel

Official docs: building from source and setting up the runtime environment

Clone the community guide:

git clone https://github.com/kubernetes/community.git

Clone the source:

git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes

 https://github.com/kubernetes/community/blob/master/contributors/guide/github-workflow.md

 

Install Go

Via apt:

sudo add-apt-repository ppa:gophers/archive
sudo apt update
sudo apt-get install golang-1.9-go

echo "export PATH=\$PATH:/usr/lib/go-1.9/bin" >> ~/.profile
source ~/.profile
Install from source:

http://jdstaerk.de/installing-go-1-9-on-ubuntu/

https://askubuntu.com/questions/959932/installation-instructions-for-golang-1-9-into-ubuntu-16-04

Download the golang 1.9 tarball from the official site. Then extract it into /usr/local, creating a Go tree at /usr/local/go, as follows:

tar -C /usr/local -xzf go$VERSION.$OS-$ARCH.tar.gz
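The placeholders expand to a concrete tarball name; for example, with Go 1.9.2 on 64-bit Linux (example values — pick the ones matching your actual download):

```shell
VERSION=1.9.2
OS=linux
ARCH=amd64
echo "go$VERSION.$OS-$ARCH.tar.gz"   # go1.9.2.linux-amd64.tar.gz
```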

After extracting, add the following lines to your $HOME/.profile:

# Set GOROOT
export GOROOT=/usr/local/go
export PATH=$GOROOT/bin:$PATH

 

Install CFSSL

Use the PKI toolkit cfssl to generate the Certificate Authority (CA) certificate and key files.

Certificate verification in Kubernetes installation

Certificates for kubernetes 1.6 installation (part 1)

 

go get -u github.com/cloudflare/cfssl/cmd/...
PATH=$PATH:$GOPATH/bin
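`go get` drops the cfssl binaries into `$GOPATH/bin`, which is why that directory goes on PATH. Since Go 1.8, GOPATH defaults to `$HOME/go` when unset; the same `${VAR:-default}` expansion can make that explicit (`DEMO_GOPATH` is a stand-in variable for illustration):

```shell
unset DEMO_GOPATH
demo_gopath=${DEMO_GOPATH:-$HOME/go}   # falls back to $HOME/go when unset
echo "$demo_gopath/bin"
```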

 

Install etcd

https://github.com/kubernetes/community/blob/master/contributors/devel/testing.md#install-etcd-dependency

hack/install-etcd.sh  # Installs in ./third_party/etcd
echo export PATH="\$PATH:$(pwd)/third_party/etcd" >> ~/.profile  # Add to PATH

 

Downloading https://github.com/coreos/etcd/releases/download/v3.1.10/etcd-v3.1.10-linux-amd64.tar.gz succeed
etcd v3.1.10 installed. To use:
export PATH=/home/ubuntu/kubernetes/third_party/etcd:${PATH}
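Note the quoting in the `echo ... >> ~/.profile` line above: `\$PATH` is escaped so it reaches .profile literally, while `$(pwd)` is expanded at write time into the absolute checkout path. A sketch of the same expansion, run from /tmp instead of the kubernetes checkout:

```shell
cd /tmp
line="export PATH=\"\$PATH:$(pwd)/third_party/etcd\""
echo "$line"   # export PATH="$PATH:/tmp/third_party/etcd"
```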

Test etcd:

http://cizixs.com/2016/08/02/intro-to-etcd

 

build (optional)

You can use bazel, or simply make.

Using bazel

    Install it first:

    https://docs.bazel.build/versions/master/install.html

    http://blog.youkuaiyun.com/u010510350/article/details/52247972

    The update step may report an error:

         locale: Cannot set LC_ALL to default locale: No such file or directory

         https://askubuntu.com/questions/162391/how-do-i-fix-my-locale-issue

 

Run the k8s cluster

Getting started locally

cd kubernetes
hack/local-up-cluster.sh

After modifying code, rebuild and rerun:

cd kubernetes
make
hack/local-up-cluster.sh

output:

~/kubernetes$ ./hack/local-up-cluster.sh
WARNING : The kubelet is configured to not fail if swap is enabled; production deployments should disable swap.
WARNING : This script MAY be run as root for docker socket / iptables functionality; if failures occur, retry as root.
make: Entering directory '/home/ubuntu/kubernetes'
make[1]: Entering directory '/home/ubuntu/kubernetes'
make[1]: Leaving directory '/home/ubuntu/kubernetes'
+++ [0113 16:02:41] Building the toolchain targets:
    k8s.io/kubernetes/hack/cmd/teststale
    k8s.io/kubernetes/vendor/github.com/jteeuwen/go-bindata/go-bindata
+++ [0113 16:02:41] Generating bindata:
    test/e2e/generated/gobindata_util.go
~/kubernetes ~/kubernetes/test/e2e/generated
~/kubernetes/test/e2e/generated
+++ [0113 16:02:42] Building go targets for linux/amd64:
    cmd/kubectl
    cmd/hyperkube
+++ [0113 16:02:42] +++ Warning: stdlib pkg with cgo flag not found.
+++ [0113 16:02:42] +++ Warning: stdlib pkg cannot be rebuilt since /usr/lib/go-1.9/pkg is not writable by ubuntu
+++ [0113 16:02:42] +++ Warning: Make /usr/lib/go-1.9/pkg writable for ubuntu for a one-time stdlib install, Or
+++ [0113 16:02:42] +++ Warning: Rebuild stdlib using the command 'CGO_ENABLED=0 go install -a -installsuffix cgo std'
+++ [0113 16:02:42] +++ Falling back to go build, which is slower
    **
make: Leaving directory '/home/ubuntu/kubernetes'
WARNING: No swap limit support
Kubelet cgroup driver defaulted to use: cgroupfs
API SERVER insecure port is free, proceeding...
API SERVER secure port is free, proceeding...
Detected host and ready to start services.  Doing some housekeeping first...
Using GO_OUT /home/ubuntu/kubernetes/_output/local/bin/linux/amd64
Starting services now!
Starting etcd
etcd --advertise-client-urls http://127.0.0.1:2379 --data-dir /tmp/tmp.fc4lxZmyBY --listen-client-urls http://127.0.0.1:2379 --debug
 > "/dev/null" 2>/dev/null
Waiting for etcd to come up.
+++ [0113 16:05:34] On try 2, etcd: : http://127.0.0.1:2379
{"action":"set","node":{"key":"/_test","value":"","modifiedIndex":4,"createdIndex":4}}
Generating a 2048 bit RSA private key
.................+++
...................................................................+++
writing new private key to '/var/run/kubernetes/server-ca.key'
-----
Generating a 2048 bit RSA private key
..................................+++
...............+++
writing new private key to '/var/run/kubernetes/client-ca.key'
-----
Generating a 2048 bit RSA private key
....+++
.........+++
writing new private key to '/var/run/kubernetes/request-header-ca.key'
-----
2018/01/13 16:05:35 [INFO] generate received request
2018/01/13 16:05:35 [INFO] received CSR
2018/01/13 16:05:35 [INFO] generating key: rsa-2048
2018/01/13 16:05:35 [INFO] encoded CSR
2018/01/13 16:05:35 [INFO] signed certificate with serial number 395359362915036009700347745067760109155190463775
2018/01/13 16:05:35 [INFO] generate received request
2018/01/13 16:05:35 [INFO] received CSR
2018/01/13 16:05:35 [INFO] generating key: rsa-2048
2018/01/13 16:05:36 [INFO] encoded CSR
2018/01/13 16:05:36 [INFO] signed certificate with serial number 638027877965799213240032145436429922370428427735
2018/01/13 16:05:36 [INFO] generate received request
2018/01/13 16:05:36 [INFO] received CSR
2018/01/13 16:05:36 [INFO] generating key: rsa-2048
2018/01/13 16:05:36 [INFO] encoded CSR
2018/01/13 16:05:36 [INFO] signed certificate with serial number 117385181268793886257733983916540743064208547703
2018/01/13 16:05:37 [INFO] generate received request
2018/01/13 16:05:37 [INFO] received CSR
2018/01/13 16:05:37 [INFO] generating key: rsa-2048
2018/01/13 16:05:37 [INFO] encoded CSR
2018/01/13 16:05:37 [INFO] signed certificate with serial number 297680163083026880492412375349991849607013624957
2018/01/13 16:05:37 [INFO] generate received request
2018/01/13 16:05:37 [INFO] received CSR
2018/01/13 16:05:37 [INFO] generating key: rsa-2048
2018/01/13 16:05:38 [INFO] encoded CSR
2018/01/13 16:05:38 [INFO] signed certificate with serial number 4169806164284151131668930673844789641486821458
2018/01/13 16:05:38 [INFO] generate received request
2018/01/13 16:05:38 [INFO] received CSR
2018/01/13 16:05:38 [INFO] generating key: rsa-2048
2018/01/13 16:05:38 [INFO] encoded CSR
2018/01/13 16:05:38 [INFO] signed certificate with serial number 471459384344562788137659259215808702234424577844
2018/01/13 16:05:38 [INFO] generate received request
2018/01/13 16:05:38 [INFO] received CSR
2018/01/13 16:05:38 [INFO] generating key: rsa-2048
2018/01/13 16:05:38 [INFO] encoded CSR
2018/01/13 16:05:38 [INFO] signed certificate with serial number 467074309330042985903205962379274360710934774417
2018/01/13 16:05:38 [INFO] generate received request
2018/01/13 16:05:38 [INFO] received CSR
2018/01/13 16:05:38 [INFO] generating key: rsa-2048
2018/01/13 16:05:39 [INFO] encoded CSR
2018/01/13 16:05:39 [INFO] signed certificate with serial number 32928079188782138358922981009346686107566876453
2018/01/13 16:05:39 [INFO] generate received request
2018/01/13 16:05:39 [INFO] received CSR
2018/01/13 16:05:39 [INFO] generating key: rsa-2048
2018/01/13 16:05:40 [INFO] encoded CSR
2018/01/13 16:05:40 [INFO] signed certificate with serial number 612000465452551089281716851846501143270893248508
Waiting for apiserver to come up
+++ [0113 16:05:51] On try 8, apiserver: : ok
Cluster "local-up-cluster" set.
use 'kubectl --kubeconfig=/var/run/kubernetes/admin-kube-aggregator.kubeconfig' to use the aggregated API server
service "kube-dns" created
serviceaccount "kube-dns" created
configmap "kube-dns" created
deployment "kube-dns" created
Kube-dns addon successfully deployed.
kubelet ( 14035 ) is running.
Create default storage class for
storageclass "standard" created
Local Kubernetes cluster is running. Press Ctrl-C to shut it down.

Logs:
  /tmp/kube-apiserver.log
  /tmp/kube-controller-manager.log

  /tmp/kube-proxy.log
  /tmp/kube-scheduler.log
  /tmp/kubelet.log

To start using your cluster, you can open up another terminal/tab and run:

  export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
  cluster/kubectl.sh

Alternatively, you can write to the default kubeconfig:

  export KUBERNETES_PROVIDER=local

  cluster/kubectl.sh config set-cluster local --server=https://localhost:6443 --certificate-authority=/var/run/kubernetes/server-ca.crt
  cluster/kubectl.sh config set-credentials myself --client-key=/var/run/kubernetes/client-admin.key --client-certificate=/var/run/kubernetes/client-admin.crt
  cluster/kubectl.sh config set-context local --cluster=local --user=myself
  cluster/kubectl.sh config use-context local
  cluster/kubectl.sh
./hack/local-up-cluster.sh: line 1018: 13894 Killed                  ${CONTROLPLANE_SUDO} "${GO_OUT}/hyperkube" controller-manager --v=${LOG_LEVEL} --vmodule="${LOG_SPEC}" --service-account-private-key-file="${SERVICE_ACCOUNT_KEY}" --root-ca-file="${ROOT_CA_FILE}" --cluster-signing-cert-file="${CLUSTER_SIGNING_CERT_FILE}" --cluster-signing-key-file="${CLUSTER_SIGNING_KEY_FILE}" --enable-hostpath-provisioner="${ENABLE_HOSTPATH_PROVISIONER}" ${node_cidr_args} --pvclaimbinder-sync-period="${CLAIM_BINDER_SYNC_PERIOD}" --feature-gates="${FEATURE_GATES}" ${cloud_config_arg} --kubeconfig "$CERT_DIR"/controller.kubeconfig --use-service-account-credentials --controllers="${KUBE_CONTROLLERS}" --master="https://${API_HOST}:${API_SECURE_PORT}" > "${CTLRMGR_LOG}" 2>&1

 

debug

$ go get github.com/derekparker/delve/cmd/dlv
$ ps -ef |grep "hyperkube apiserver"
$ sudo sysctl -w kernel.yama.ptrace_scope=0
$ cat >> ~/.bashrc <<<'
GOROOT=`go env |grep "GOROOT" |cut -d "=" -f2`
GOROOT=${GOROOT#\"}
GOROOT=${GOROOT%\"}
GOPATH=`go env |grep GOPATH |cut -d "=" -f 2`
GOPATH=${GOPATH%\"}
GOPATH=${GOPATH#\"}
export PATH="$PATH:$GOROOT/bin:$GOPATH/bin"'

$ source ~/.bashrc
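The `${GOROOT#\"}` / `${GOROOT%\"}` expansions above strip the surrounding quotes that `go env` prints in its `VAR="value"` listing. In isolation:

```shell
V='"/usr/local/go"'   # shape of what `go env | grep GOROOT | cut -d= -f2` yields
V=${V#\"}             # drop one leading quote
V=${V%\"}             # drop one trailing quote
echo "$V"             # /usr/local/go
```

Querying a single variable with `go env GOROOT` prints the bare value with no quotes, avoiding the stripping entirely.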
$ sudo su
# echo 0 > /proc/sys/kernel/yama/ptrace_scope
# exit
$ sudo $GOPATH/bin/dlv attach $PID

 Accessing the API (access-cluster-api)

 k8s-api-tutorial

 kubectl-vs-http-api

 kubernetes-from-the-ground-up-the-api-server

 $ curl http://localhost:8080/api/v1/pods

 $ CERTDIR=/var/run/kubernetes

 $ curl -i https://127.0.0.1:6443/api/v1/pods --cert $CERTDIR/client-admin.crt --key $CERTDIR/client-admin.key --cacert $CERTDIR/server-ca.crt

 

 

Reprinted from: https://www.cnblogs.com/shaohef/p/8043373.html
