iptables: No config file. [WARNING]!

Welcome to my personal blog: http://www.yanmin99.com/

Warning when starting iptables: iptables: No config file. [WARNING]!
  • 1. Check whether /etc/sysconfig/iptables exists

    [root@iZ2ze3nze2sw4quaa8xor9Z sysconfig]# cd /etc/sysconfig/ && ls
    atd         clock      grub              iptables-config  modules     network-scripts  ntpdate        sandbox    sysstat                     udev
    auditd      console    i18n              irqbalance       netconsole  nginx            raid-check     saslauthd  sysstat.ioconf
    authconfig  crond      init              kernel           network     nginx-debug      readonly-root  selinux    system-config-firewall
    cbq         firstboot  ip6tables-config  keyboard         networking  ntpd             rsyslog        sshd       system-config-firewall.old

    If you see the warning above, the /etc/sysconfig/iptables file most likely does not exist. If it is missing, generate it as described in the next steps.

  • 2. Run any iptables command to configure a firewall rule so that the rules file can be generated, e.g. iptables -P OUTPUT ACCEPT

    iptables -P OUTPUT ACCEPT
  • 3. Save the rules with service iptables save

    [root@iZ2ze3nze2sw4quaa8xor9Z sysconfig]# service iptables save
    iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
    
    [root@iZ2ze3nze2sw4quaa8xor9Z sysconfig]# ls
    atd         console    init              kernel      networking       ntpdate        saslauthd       system-config-firewall
    auditd      crond      ip6tables-config  keyboard    network-scripts  raid-check     selinux         system-config-firewall.old
    authconfig  firstboot  iptables          modules     nginx            readonly-root  sshd            udev
    cbq         grub       iptables-config   netconsole  nginx-debug      rsyslog        sysstat
    clock       i18n       irqbalance        network     ntpd             sandbox        sysstat.ioconf
  • 4. Start and manage the firewall

    service iptables start    // start
    service iptables restart  // restart
    service iptables stop     // stop
    service iptables status   // check status
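Putting the steps together, a minimal sketch of the whole fix might look like the following (assuming a CentOS 6-style system that uses the iptables service script and /etc/sysconfig/iptables; the -P OUTPUT ACCEPT rule is only a harmless placeholder so that the save step has something to write):

    # Check whether the rules file already exists
    ls -l /etc/sysconfig/iptables

    # If it is missing, set any rule (default OUTPUT policy ACCEPT changes nothing)
    iptables -P OUTPUT ACCEPT

    # Write the current in-memory rules to /etc/sysconfig/iptables
    service iptables save

    # Start the firewall; the "No config file" warning should be gone
    service iptables start
    service iptables status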