Managing 4 RTX 4090 GPUs with Kubernetes on Ubuntu 22.04

How does Kubernetes schedule GPU resources?

1. Install the NVIDIA driver

Driver package: NVIDIA-Linux-x86_64-535.113.01.run (CUDA 12.2)

Mirror download URL inside China (fast; substitute your driver version):

# wget https://cn.download.nvidia.com/XFree86/Linux-x86_64/535.113.01/NVIDIA-Linux-x86_64-535.113.01.run
# sh NVIDIA-Linux-x86_64-535.113.01.run 

Reboot after the installation completes.

Enable persistence mode:

(base) ubuntu@ubuntu:~$ nvidia-smi -pm 1
Unable to set persistence mode for GPU 00000000:17:00.0: Insufficient Permissions
Terminating early due to previous errors.
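
The failure above is just a permissions issue: persistence mode can only be set by root. Rerunning with sudo should succeed:

sudo nvidia-smi -pm 1
# expected output, one line per GPU, similar to:
# Enabled persistence mode for GPU 00000000:17:00.0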

List the GPUs with nvidia-smi:

(base) ubuntu@ubuntu:~$  nvidia-smi -L
GPU 0: NVIDIA GeForce RTX 4090 (UUID: GPU-88717b49-0372-9d05-e6ca-238870f93bf3)
GPU 1: NVIDIA GeForce RTX 4090 (UUID: GPU-74b01939-bc8b-833b-10ac-daa5c60fc594)
GPU 2: NVIDIA GeForce RTX 4090 (UUID: GPU-0715eb37-44d8-d7ca-cd20-79452c93fe86)
GPU 3: NVIDIA GeForce RTX 4090 (UUID: GPU-b9f5ac04-9684-71fe-88b6-6363e7c2936d)
(base) ubuntu@ubuntu:~$ 

2. Install Docker

Configure the apt repository:

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

Install the Docker packages:

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
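
Optionally, verify the installation with Docker's tiny test image:

sudo docker run --rm hello-world   # prints "Hello from Docker!" if the engine works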

Restart Docker:

sudo systemctl restart docker

3. Install the NVIDIA Container Toolkit

Installation via apt.

Configure the repository:

# curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
    && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
      sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
      sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list \
    && sudo apt-get update

Install the NVIDIA Container Toolkit package:

#sudo apt-get install -y nvidia-container-toolkit

Test the installation (the container should print the same nvidia-smi table as the host):

#docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi
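
The --gpus test exercises Docker's NVIDIA OCI hook, but Kubernetes will later need the Docker daemon itself to use the nvidia runtime (the device plugin does not pass --gpus). Recent toolkit releases bundle the nvidia-ctk helper, which can write the daemon.json runtime entry shown later in this post instead of editing it by hand:

sudo nvidia-ctk runtime configure --runtime=docker   # registers the "nvidia" runtime in /etc/docker/daemon.json
sudo systemctl restart docker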

4. Install Kubernetes with kubeadm

Since Docker is already installed on this server, we use it as the container runtime rather than containerd.

Basic environment setup

1. Set a hostname that clearly identifies the node:

hostnamectl set-hostname ubuntu

2. Disable SELinux (note: Ubuntu ships AppArmor rather than SELinux, so this step, carried over from RHEL-style guides, is usually a no-op here):

sudo setenforce 0

sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

3. Disable swap (kubelet refuses to start with swap enabled by default):

swapoff -a                              # disable temporarily
sed -ri 's/.*swap.*/#&/' /etc/fstab     # disable permanently (comments out the swap entry)
sed -ri 's/#(.*swap.*)/\1/' /etc/fstab  # run this later to re-enable swap (uncomments the entry)
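
To confirm swap is really off:

swapon --show              # no output means no active swap
free -h | grep -i swap     # the Swap line should show 0B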

4. Load br_netfilter and make bridged traffic visible to iptables/ip6tables (most CNI plugins require this):

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
sudo modprobe br_netfilter   # the modules-load file only takes effect at boot; load the module now too

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

5. Apply the sysctl configuration:

sudo sysctl --system
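
Verify the module is loaded and the settings took effect:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables   # both should print 1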

6. Install the Kubernetes components

0. Docker is already installed, so we skip it; check its version and configuration:

(base) ubuntu@ubuntu:~/k8s$ sudo docker info
Client: Docker Engine - Community
 Version:    24.0.6
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.11.2
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.21.0
    Path:     /usr/libexec/docker/cli-plugins/docker-compose
(base) ubuntu@ubuntu:~/k8s$

(base) ubuntu@ubuntu:~/k8s$ cat /etc/docker/daemon.json
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "data-root": "/data2/dockerdata",
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    }
}
(base) ubuntu@ubuntu:~/k8s$

1. Add the Kubernetes apt repository (Aliyun mirror):

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

sudo apt-add-repository "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main"

2. Install kubelet, kubeadm, and kubectl:

sudo apt update

sudo apt install -y kubelet=1.23.8-00 kubeadm=1.23.8-00 kubectl=1.23.8-00

sudo apt-mark hold kubelet kubeadm kubectl

The version is pinned so the three components stay consistent with each other and with the dashboard installed later. Because this cluster runs on Docker, and Kubernetes dropped dockershim in v1.24, we install v1.23.8; to install the latest (or another) version, drop or change the version suffix.

The final apt-mark hold command prevents automatic upgrades from introducing a version mismatch.

# To release the hold later:

apt-mark unhold package_name
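
To see which versions the Aliyun repo offers before pinning:

apt-cache madison kubeadm | head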

3. Enable kubelet at boot (and start it now):

sudo systemctl enable --now kubelet

4. Map a hostname for the control-plane endpoint:

echo "172.16.1.220 cluster-endpoint" >> /etc/hosts # 把x替换成你的服务器/虚拟机的内网ip

5. Initialize the cluster with kubeadm init:

sudo kubeadm init \
  --apiserver-advertise-address=172.16.1.220 \
  --control-plane-endpoint=cluster-endpoint \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.23.8 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16

One important detail when running Kubernetes on Docker: Docker's default cgroup driver is cgroupfs, while kubelet expects systemd, so kubeadm init fails with the default; Docker must be switched to the systemd cgroup driver.

Fix:

vim /etc/docker/daemon.json

Add the following:

{
    "exec-opts": ["native.cgroupdriver=systemd"]
}

# Reload the configuration and restart Docker

systemctl daemon-reload

systemctl restart docker

With that in place, re-running kubeadm init succeeds.

Before re-initializing, reset the previous attempt:

kubeadm reset
rm -rf /etc/kubernetes/manifests/kube-apiserver.yaml
rm -rf /etc/kubernetes/manifests/kube-controller-manager.yaml
rm -rf /etc/kubernetes/manifests/kube-scheduler.yaml
rm -rf /etc/kubernetes/manifests/etcd.yaml
rm -rf /var/lib/etcd/*

(base) ubuntu@ubuntu:~$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Wed 2024-04-03 07:40:35 UTC; 2min 54s ago
       Docs: https://kubernetes.io/docs/home/
   Main PID: 90566 (kubelet)
      Tasks: 47 (limit: 618620)
     Memory: 98.3M
     CGroup: /system.slice/kubelet.service
             └─90566 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=>

(base) ubuntu@ubuntu:~$ sudo docker images | grep google
registry.aliyuncs.com/google_containers/kube-apiserver            v1.23.8   09d62ad3189b   21 months ago   135MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.23.8   db4da8720bcb   21 months ago   112MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.23.8   afd180ec7435   21 months ago   53.5MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.23.8   2b7c5a039984   21 months ago   125MB
registry.aliyuncs.com/google_containers/etcd                      3.5.1-0   25f8c7f3da61   2 years ago     293MB
registry.aliyuncs.com/google_containers/coredns                   v1.8.6    a4ca41631cc7   2 years ago     46.8MB
registry.aliyuncs.com/google_containers/pause                     3.6       6270bb605e12   2 years ago     683kB
(base) ubuntu@ubuntu:~$

The join token is what worker nodes use to join the cluster; it is valid for 24 h and can be regenerated with:

kubeadm token create --print-join-command
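
The printed command is run on each worker node; it looks like this (token and hash are placeholders here):

kubeadm join cluster-endpoint:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>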

6. Follow kubeadm's post-init instructions (run as the ubuntu user):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
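
Alternatively, when working as root, kubeadm's output suggests pointing KUBECONFIG at the admin config directly; either way, a quick sanity check:

export KUBECONFIG=/etc/kubernetes/admin.conf   # root-only alternative to the copy above
kubectl get nodes                              # should list the master node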

7. Install the network plugin (Calico)

curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O

Next, edit lines 3888-3889 of calico.yaml as shown below, so the IP pool matches the --pod-network-cidr set during kubeadm init:

vi calico.yaml

3888 # - name: CALICO_IPV4POOL_CIDR
3889 #   value: "192.168.0.0/16"

change to:

3888 - name: CALICO_IPV4POOL_CIDR
3889   value: "10.244.0.0/16"
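
Line numbers shift between Calico releases; if 3888/3889 don't match your copy of the manifest, locate the setting with grep:

grep -n CALICO_IPV4POOL_CIDR calico.yaml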

Apply the manifest:

(base) ubuntu@ubuntu:~/k8s$ kubectl apply -f calico.yaml

(base) ubuntu@ubuntu:~/k8s$

8. Check the master node status

kubectl get nodes   # give it a moment; once the network plugin is up, the node shows Ready

(base) ubuntu@ubuntu:~/k8s$ kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
ubuntu   Ready    control-plane,master   14m   v1.23.8

The cluster is now up; additional worker nodes can join the master using the token shown earlier.

(base) ubuntu@ubuntu:~/k8s$ kubectl get no -o yaml | grep taint -A 5
    taints:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
  status:
    addresses:
    - address: 172.16.1.220
(base) ubuntu@ubuntu:~/k8s$

# Remove all taints so workloads can schedule on this single-node cluster

(base) ubuntu@ubuntu:~/k8s$ kubectl taint nodes --all node-role.kubernetes.io/master-
node/ubuntu untainted

# Check again; no output means the taint is gone

(base) ubuntu@ubuntu:~/k8s$ kubectl get no -o yaml | grep taint -A 5
(base) ubuntu@ubuntu:~/k8s$

# Verify the system pods all reach Running

(base) ubuntu@ubuntu:~/k8s$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5b9cd88b65-gtjvn   1/1     Running   0          10m
kube-system   calico-node-nc746                          1/1     Running   0          10m
kube-system   coredns-6d8c4cb4d-85t62                    1/1     Running   0          24m
kube-system   coredns-6d8c4cb4d-cwd92                    1/1     Running   0          24m
kube-system   etcd-ubuntu                                1/1     Running   1          24m
kube-system   kube-apiserver-ubuntu                      1/1     Running   1          24m
kube-system   kube-controller-manager-ubuntu             1/1     Running   1          24m
kube-system   kube-proxy-st4qb                           1/1     Running   0          24m
kube-system   kube-scheduler-ubuntu                      1/1     Running   1          24m
(base) ubuntu@ubuntu:~/k8s$

By default, certificates issued by kubeadm are valid for one year:

(base) ubuntu@ubuntu:~$ sudo kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0403 14:24:19.986291  791926 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
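
When the certificates approach expiry, they can all be renewed in one step; the control-plane static pods must then be restarted (or the node rebooted) to load the new certificates:

sudo kubeadm certs renew all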

5. Install the NVIDIA device plugin

The preferred way to deploy the device plugin is as a DaemonSet using Helm. Helm release binaries can be downloaded from:

https://github.com/helm/helm/tags

Download and install the Helm binary:

# tar -zxvf helm-v3.10.2-linux-amd64.tar.gz
# mv linux-amd64/helm /usr/local/bin/helm

First, add the plugin's Helm repository and update it:

$ helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
$ helm repo update

Then check that the plugin chart is available (--devel also lists pre-release versions):

(base) ubuntu@ubuntu:~/k8s$ helm search repo nvdp --devel
NAME                         CHART VERSION   APP VERSION   DESCRIPTION
nvdp/gpu-feature-discovery   0.15.0-rc.2     0.15.0-rc.2   A Helm chart for gpu-feature-discovery on Kuber...
nvdp/nvidia-device-plugin    0.15.0-rc.2     0.15.0-rc.2   A Helm chart for the nvidia-device-plugin on Ku...
(base) ubuntu@ubuntu:~/k8s$

Deploy the device plugin:

# helm install --generate-name nvdp/nvidia-device-plugin \
    --namespace nvidia-device-plugin --create-namespace

Download the Helm chart package:

helm pull nvdp/nvidia-device-plugin

The nvidia-device-plugin pod would not start; the logs showed the error below:

(base) ubuntu@ubuntu:~$ kubectl logs nvidia-device-plugin-1712138777-wxdc8 -n nvidia-device-plugin

Error: Detected non-NVML platform: could not load NVML library: libnvidia-ml.so.1: cannot open shared object file: No such file or directory

The fix is to make nvidia the default Docker runtime in daemon.json, then restart Docker:

(base) ubuntu@ubuntu:~$ more /etc/docker/daemon.json
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "data-root": "/data2/dockerdata",
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "runtimeArgs": [],
            "path": "/usr/bin/nvidia-container-runtime"
        }
    }
}

(base) ubuntu@ubuntu:~$ sudo systemctl restart docker
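
Once Docker is back up and the plugin pod reaches Running, the node should advertise the four cards as a schedulable resource:

kubectl describe node ubuntu | grep nvidia.com/gpu
# Capacity and Allocatable should both show: nvidia.com/gpu: 4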

6. Install gpu-feature-discovery

$ helm repo add nvgfd https://nvidia.github.io/gpu-feature-discovery
$ helm repo update
$ helm search repo nvgfd --devel

# helm install --generate-name nvgfd/gpu-feature-discovery \
    --namespace gpu-feature-discovery --create-namespace

(base) ubuntu@ubuntu:~$ helm list -A
NAME                               NAMESPACE               REVISION   UPDATED                                   STATUS     CHART                         APP VERSION
gpu-feature-discovery-1712148385   gpu-feature-discovery   1          2024-04-03 12:47:56.759025068 +0000 UTC   deployed   gpu-feature-discovery-0.8.2   0.8.2
nvidia-device-plugin-1712138777    nvidia-device-plugin    1          2024-04-03 10:08:01.06389181 +0000 UTC    deployed   nvidia-device-plugin-0.14.5   0.14.5
(base) ubuntu@ubuntu:~$

The images could not be pulled, so uninstall the release, download the Helm chart locally, and reinstall:

(base) ubuntu@ubuntu:~/k8s/gpu-feature-discovery$ helm uninstall gpu-feature-discovery-1712148385 -n gpu-feature-discovery
release "gpu-feature-discovery-1712148385" uninstalled
(base) ubuntu@ubuntu:~/k8s/gpu-feature-discovery$

# helm pull nvgfd/gpu-feature-discovery
# docker pull yansenchangyu/node-feature-discovery:v0.13.1
# docker pull nvcr.io/nvidia/gpu-feature-discovery:v0.8.2

In the chart's values, change the node-feature-discovery image, since the original registry cannot be reached from inside China: registry.k8s.io/nfd/node-feature-discovery:v0.13.2 → yansenchangyu/node-feature-discovery:v0.13.1

#helm install gpu-feature-discovery . --create-namespace --namespace gpu-feature-discovery

Installing gpu-feature-discovery via Helm pulls in NFD (node-feature-discovery) automatically as a dependency.
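
With GFD and NFD running, the node picks up GPU labels (the exact label set varies by GFD version):

kubectl get node ubuntu --show-labels | tr ',' '\n' | grep nvidia.com
# expect labels such as nvidia.com/gpu.product and nvidia.com/gpu.count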

7. Test the cluster's GPU integration

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  restartPolicy: Never
  containers:
    - name: cuda-container
      image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2
      resources:
        limits:
          nvidia.com/gpu: 1 # requesting 1 GPU
  nodeSelector:
    accelerator: nvidia-rtx4090
EOF
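
Note that the accelerator: nvidia-rtx4090 nodeSelector only matches nodes carrying that exact label, which neither GFD nor NFD sets automatically; on this single-node cluster, either add the label by hand or drop the selector from the manifest:

kubectl label node ubuntu accelerator=nvidia-rtx4090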

Watch the pod logs; Test PASSED means the container completed its computation on the GPU.

(base) ubuntu@ubuntu:~/k8s$ kubectl logs gpu-pod
[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done
(base) ubuntu@ubuntu:~/k8s$
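
A pod can also request several of the four cards at once; GPUs are allocated whole, never shared between pods by the default plugin. A minimal variant of the manifest above (same image, higher limit, selector dropped):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod-2
spec:
  restartPolicy: Never
  containers:
    - name: cuda-container
      image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2
      resources:
        limits:
          nvidia.com/gpu: 2 # requesting 2 of the 4 GPUs
EOF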

For a full video walkthrough, see 老吴聊技术 on Bilibili.
