error: "net.ipv4.ip_conntrack_max" is an unknown key

This article explains how to resolve the unknown-key errors for net.ipv4.ip_conntrack_max and net.ipv4.netfilter.ip_conntrack_tcp_timeout_established that appear when running sysctl. The fix is to load the ip_conntrack module and make sure it is loaded automatically at boot.

Reposted from: http://hi.baidu.com/137039277/item/876148ea6e56d214585dd8d7


Problem: running sysctl -p reports:

error: "net.ipv4.ip_conntrack_max" is an unknown key
error: "net.ipv4.netfilter.ip_conntrack_tcp_timeout_established" is an unknown key
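
These errors typically show up when /etc/sysctl.conf carries conntrack tuning entries like the ones below but the conntrack module is not loaded, so the corresponding keys under /proc/sys do not exist. The entries and values here are illustrative, not recommendations:

# hypothetical /etc/sysctl.conf entries that would trigger the errors above
net.ipv4.ip_conntrack_max = 65536
net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 1200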

Solution: load the module now, and make the load persist across reboots:

modprobe ip_conntrack

echo "modprobe ip_conntrack" >> /etc/rc.local
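
To verify the fix, assuming the module on your kernel is indeed named ip_conntrack (on newer kernels it was renamed nf_conntrack and the keys moved under net.netfilter.*), the key should now resolve and sysctl -p should run cleanly:

lsmod | grep conntrack             # the module should now be listed
sysctl net.ipv4.ip_conntrack_max   # the key should resolve without error
sysctl -p                          # reapply /etc/sysctl.conf; no unknown-key errors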

modprobe (module probe)

Purpose: automatically handles loadable kernel modules.

Syntax: modprobe [-acdlrtvV][--help][module file][symbol name = symbol value]

Notes: modprobe can load a single specified module or a set of interdependent modules. It decides which modules to load based on the dependency information generated by depmod. If an error occurs during loading, modprobe unloads the whole set of modules again.
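
For example, loading ip_conntrack with the verbose flag prints each module as modprobe pulls it in; the path below is a sample, and the exact location varies by kernel version:

modprobe -v ip_conntrack
# insmod /lib/modules/`uname -r`/kernel/net/ipv4/netfilter/ip_conntrack.ko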

Parameters:
   -a or --all: load all modules.
   -c or --show-conf: display the configuration of all modules.
   -d or --debug: run in debug mode.
   -l or --list: list the available modules.
   -r or --remove: remove the module; idle, unused modules are unloaded automatically.
   -t or --type: specify the module type.
   -v or --verbose: print detailed information while running.
   -V or --version: display version information.
   --help: display help.
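
A few common invocations tying the options above to this fix (note that -l comes from the older modutils-era modprobe documented here and has been dropped from newer kmod-based modprobe):

modprobe -l | grep conntrack   # list available conntrack-related modules
modprobe -v ip_conntrack       # load the module with verbose output
modprobe -r ip_conntrack       # unload it again when no longer needed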
