【Docker】Speeding up `docker pull` with an HTTP proxy

When Docker cannot pull images even through domestic mirrors, one fix is to route the daemon's traffic through an HTTP proxy: create /etc/systemd/system/docker.service.d/proxy.conf, set the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables in it, then restart the docker service with systemctl so that images are pulled through a local proxy server (127.0.0.1:7890).


Note: a proxy configured at the shell/system level (environment variables in your session) does not affect `docker pull` — the pull is performed by the Docker daemon, so the proxy has to be configured on the dockerd service itself:

>>> sudo mkdir -p /etc/systemd/system/docker.service.d
>>> sudo vim /etc/systemd/system/docker.service.d/proxy.conf
[Service]
Environment="HTTP_PROXY=http://127.0.0.1:7890/"
Environment="HTTPS_PROXY=http://127.0.0.1:7890/"
Environment="NO_PROXY=localhost,127.0.0.1,.example.com"
>>> sudo systemctl daemon-reload
>>> sudo systemctl restart docker
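As a non-interactive alternative to editing the file in vim, the drop-in can be generated with a heredoc and installed afterwards. This is a sketch that assumes the proxy listens on 127.0.0.1:7890; adjust `PROXY_URL` to your setup:

```shell
# Generate the systemd drop-in that gives dockerd its proxy settings.
# PROXY_URL is an assumption: point it at wherever your proxy actually listens.
PROXY_URL="http://127.0.0.1:7890/"
cat > docker-proxy.conf <<EOF
[Service]
Environment="HTTP_PROXY=${PROXY_URL}"
Environment="HTTPS_PROXY=${PROXY_URL}"
Environment="NO_PROXY=localhost,127.0.0.1,.example.com"
EOF
cat docker-proxy.conf
```

Install it with `sudo install -m 644 docker-proxy.conf /etc/systemd/system/docker.service.d/proxy.conf`, then `sudo systemctl daemon-reload && sudo systemctl restart docker`. Afterwards, `sudo systemctl show --property=Environment docker` should echo the three variables back if the drop-in took effect.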

Reference: https://www.cnblogs.com/azureology/p/15708853.html

The same unreachable-registry problem also breaks `kubeadm init`, which fails during the image-pull preflight:

```
[root@k8snode01-49 ~]# sudo kubeadm init --config=kubeadm-config.yaml
[init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
	[WARNING Hostname]: hostname "node" could not be reached
	[WARNING Hostname]: hostname "node": lookup node on 192.168.58.2:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image registry.k8s.io/kube-apiserver:v1.25.0: output: ... failed to do request: Head "https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/kube-apiserver/manifests/v1.25.0": dial tcp 74.125.199.82:443: connect: connection refused
	[... the same ImagePull error repeats for kube-controller-manager, kube-scheduler, kube-proxy, pause:3.8, etcd:3.5.4-0 and coredns/coredns:v1.9.3 ...]
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
```
### Fixing `kubeadm init` image-pull failures

`kubeadm init` fails here because `registry.k8s.io` / `k8s.gcr.io` is unreachable from mainland China. The fix is to pull the required images from a domestic mirror (Aliyun), retag them to the names kubeadm expects, and re-run the initialization.

#### 1. List the required images

```bash
kubeadm config images list --kubernetes-version=v1.25.0
```

Example output:

```
k8s.gcr.io/kube-apiserver:v1.25.0
k8s.gcr.io/kube-controller-manager:v1.25.0
k8s.gcr.io/kube-scheduler:v1.25.0
k8s.gcr.io/kube-proxy:v1.25.0
k8s.gcr.io/pause:3.8
k8s.gcr.io/etcd:3.5.4-0
k8s.gcr.io/coredns/coredns:v1.9.3   # note the two-level path
```

#### 2. Pull from the mirror and retag

```bash
# image list for v1.25.0
images=(
  kube-apiserver:v1.25.0
  kube-controller-manager:v1.25.0
  kube-scheduler:v1.25.0
  kube-proxy:v1.25.0
  pause:3.8
  etcd:3.5.4-0
  coredns:v1.9.3   # the Aliyun mirror uses a single-level path
)

for image in "${images[@]}"; do
  # pull from the Aliyun mirror
  docker pull registry.aliyuncs.com/google_containers/$image
  # special-case the coredns path
  if [[ $image == coredns* ]]; then
    docker tag registry.aliyuncs.com/google_containers/$image k8s.gcr.io/coredns/coredns:v1.9.3
  else
    docker tag registry.aliyuncs.com/google_containers/$image k8s.gcr.io/$image
  fi
done
```

#### 3. Verify the images are in place

```bash
docker images | grep 'k8s.gcr.io'
```

The output should list all of the core component images.

#### 4. Initialize the cluster

```bash
sudo kubeadm init --config /tmp/test-config.yaml
```

(If a previous `kubeadm init` attempt failed, run `sudo kubeadm reset -f` first.)

#### Key notes

1. **Version match.** The `kubeadm` binary version must match `kubernetesVersion` in the config file (`v1.25.0` in this example).
2. **coredns path change.** From Kubernetes 1.25 the coredns image path is `k8s.gcr.io/coredns/coredns`, while the Aliyun mirror still uses the single-level `coredns` path.
3. **containerd runtime.** If containerd is the container runtime (as the `criSocket` in the config suggests), images pulled with docker must be exported (`docker save`) and imported into containerd's `k8s.io` namespace:

   ```bash
   sudo ctr -n k8s.io images import <image.tar>
   ```

   Alternatively, skip docker entirely and pull/tag directly with `ctr -n k8s.io images pull` and `ctr -n k8s.io images tag`.

#### Fallback mirror

If an image is missing from `registry.aliyuncs.com`, try the other Aliyun endpoint:

```bash
for image in "${images[@]}"; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$image
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$image k8s.gcr.io/$image
done
```

> Tip: before initializing, run `kubeadm config images pull` to test image-pull capability. If a specific image still cannot be pulled, download an offline image archive and import it manually.
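The coredns special case in the retag loop is the part that is easiest to get wrong, so the name mapping can be isolated into a small helper. This is a sketch; `target_tag` is a hypothetical helper name, not part of any tool:

```shell
# Map a mirrored image name (as stored under google_containers on the
# Aliyun mirror) to the tag kubeadm expects under k8s.gcr.io.
target_tag() {
  local image="$1"
  case "$image" in
    # coredns lives in a two-level path upstream but a single-level
    # path on the mirror
    coredns:*) echo "k8s.gcr.io/coredns/coredns:${image#coredns:}" ;;
    *)         echo "k8s.gcr.io/${image}" ;;
  esac
}

target_tag coredns:v1.9.3   # -> k8s.gcr.io/coredns/coredns:v1.9.3
target_tag etcd:3.5.4-0     # -> k8s.gcr.io/etcd:3.5.4-0
```

Using a single mapping function keeps the pull loop and any later export/import loop consistent, instead of repeating the `if`/`else` in both places.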