Failed to active swapfile:/swapfile


  Sometimes, after a burst of heavy-handed tinkering on a Linux machine, you find it has become unresponsive, hold down the power button to force it off, and then discover that it no longer boots at all.

 The problem I ran into this time was exactly that: after going through the steps described above, booting the system produced an error similar to the following:

[Failed] Failed to active swapfile: /swapfile

 After looking into this error, I learned that it is caused by the /swapfile becoming corrupted, which leaves the file system mounted read-only. There are documented fixes for this, but in my experience none of them worked particularly well.
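 For reference, the fix most commonly documented (which did not help in my case) is to remount the root file system read-write from a recovery shell and then recreate the swap file. A minimal sketch, assuming a 2 GB swap file at /swapfile, looks roughly like this:

sudo mount -o remount,rw /        # make the root file system writable again
sudo swapoff /swapfile            # disable the corrupted swap file (may fail if it never activated)
sudo rm /swapfile                 # remove it
sudo fallocate -l 2G /swapfile    # recreate it; 2G is only an example size
sudo chmod 600 /swapfile          # restrict permissions as mkswap expects
sudo mkswap /swapfile             # write the swap signature
sudo swapon /swapfile             # activate it again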

 In the end, my solution was to reinstall the system through the package manager. Reinstalling this way does not delete any existing user files.

  1. First press Ctrl + Alt + F1–F5 to switch to a tty. On machines where the Fn key is not locked, you may also need to hold Fn.
  2. Log in to your account; the username is usually the part before the @ in the tty prompt.
  3. Since my Linux distribution is Ubuntu, the reinstall command I entered was:
sudo apt-get install --reinstall ubuntu-desktop
  4. Finally, reboot; this resolved the problem (a quick way to verify the swap file afterwards is shown below).
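
 After the reboot, you can confirm that the swap file is being activated again with standard tools, for example:

swapon --show    # lists active swap areas; /swapfile should appear here
free -h          # the Swap row should show a non-zero total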

 Although the problem was eventually solved, the takeaway is clear: do not casually hold down the power button to force a shutdown, as it can trigger a whole series of headaches.
