git global in c9.io

This article describes how to set up a user on GitLab, create a project, and perform basic Git configuration: setting the username and email, choosing the default push strategy, and defining aliases.
On GitLab, add a user c9 (allowed to create groups).
As c9, add c9.io's default id_rsa.pub as an SSH key.
As c9, create the group "c9.io".
As c9, create the project "hello_app".
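The key GitLab needs is the c9.io workspace's default public key. A minimal check from the c9.io terminal, assuming the key lives at the usual ~/.ssh/id_rsa.pub path:

$ cat ~/.ssh/id_rsa.pub    # print the public key, then paste it into GitLab under SSH Keys
$ ssh -T git@git.+ri.com   # optional: GitLab should print a welcome message if the key is accepted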

$ git config --global user.name "+ri"
$ git config --global user.email "+ri@c9.io"
$ git config --global push.default matching
$ git config --global alias.ch checkout
$ ...
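To confirm the settings took effect, list the resulting global configuration; given the commands above, the output should include at least these entries (a quick sanity check, not part of the original recipe):

$ git config --global --list
user.name=+ri
user.email=+ri@c9.io
push.default=matching
alias.ch=checkout

The alias makes git ch a shorthand for git checkout, e.g. git ch -b feature.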
$ git remote add origin git@git.+ri.com:c9-io/hello_app.git
$ git push origin master
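Note that push.default matching makes a bare git push send every local branch whose name matches a branch on the remote; since Git 2.0 the built-in default is the more conservative simple. Adding -u on the first push records origin/master as the upstream, so later pushes and pulls need no arguments (an optional variant of the command above):

$ git push -u origin master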
08-28
wsl: 检测到 localhost 代理配置,但未镜像到 WSL。NAT 模式下的 WSL 不支持 localhost 代理。 (base) os01@DESKTOP-F3AFPI2:~$ sudo apt update && sudo apt upgrade -y sudo apt install -y build-essential git wget curl vim tmux htop -bash: syntax error near unexpected token `;&&#39; [sudo] password for os01: Reading package lists... Done Building dependency tree... Done Reading state information... Done build-essential is already the newest version (12.10ubuntu1). wget is already the newest version (1.21.4-1ubuntu4.1). wget set to manually installed. curl is already the newest version (8.5.0-2ubuntu10.6). curl set to manually installed. tmux is already the newest version (3.4-1ubuntu0.1). tmux set to manually installed. The following packages were automatically installed and are no longer required: libdrm-nouveau2 libdrm-radeon1 libgl1-amber-dri libglapi-mesa libxcb-dri2-0 Use &#39;sudo apt autoremove&#39; to remove them. The following additional packages will be installed: git-man libnl-genl-3-200 vim-common vim-runtime vim-tiny xxd Suggested packages: git-daemon-run | git-daemon-sysvinit git-doc git-email git-gui gitk gitweb git-cvs git-mediawiki git-svn lm-sensors strace ctags vim-doc vim-scripts indent The following NEW packages will be installed: htop libnl-genl-3-200 The following packages will be upgraded: git git-man vim vim-common vim-runtime vim-tiny xxd 7 upgraded, 2 newly installed, 0 to remove and 168 not upgraded. Need to get 15.2 MB/15.4 MB of archives. After this operation, 496 kB of additional disk space will be used. Get:1 http://archive.ubuntu.com/ubuntu noble-updates/main amd64 vim amd64 2:9.1.0016-1ubuntu7.9 [1881 kB] Get:2 http://archive.ubuntu.com/ubuntu noble-updates/main amd64 vim-common all 2:9.1.0016-1ubuntu7.9 [386 kB] Get:3 http://archive.ubuntu.com/ubuntu noble-updates/main amd64 vim-tiny amd64 2:9.1.0016-1ubuntu7.9 [803 kB] Get:4 http://archive.ubuntu.com/ubuntu noble-updates/main amd64 vim-runtime all 2:9.1.0016-1ubuntu7.9 [7281 kB] Get:5 http://archive.ubuntu.com/ubuntu noble-updates/main amd64 xxd amd64 2:9.1.0016-1ubuntu7.9 [63.8 kB] Get:6 http://archive.ubuntu.com/ubuntu noble-updates/main amd64 git-man all 1:2.43.0-1ubuntu7.3 [1100 kB] Get:7 http://archive.ubuntu.com/ubuntu noble-updates/main amd64 git amd64 1:2.43.0-1ubuntu7.3 [3680 kB] Get:8 http://archive.ubuntu.com/ubuntu noble-updates/main amd64 libnl-genl-3-200 amd64 3.7.0-0.3build1.1 [12.2 kB] Fetched 15.2 MB in 7s (2121 kB/s) (Reading database ... 55810 files and directories currently installed.) Preparing to unpack .../0-vim_2%3a9.1.0016-1ubuntu7.9_amd64.deb ... Unpacking vim (2:9.1.0016-1ubuntu7.9) over (2:9.1.0016-1ubuntu7.5) ... Preparing to unpack .../1-vim-common_2%3a9.1.0016-1ubuntu7.9_all.deb ... Unpacking vim-common (2:9.1.0016-1ubuntu7.9) over (2:9.1.0016-1ubuntu7.5) ... Preparing to unpack .../2-vim-tiny_2%3a9.1.0016-1ubuntu7.9_amd64.deb ... Unpacking vim-tiny (2:9.1.0016-1ubuntu7.9) over (2:9.1.0016-1ubuntu7.5) ... Preparing to unpack .../3-vim-runtime_2%3a9.1.0016-1ubuntu7.9_all.deb ... Unpacking vim-runtime (2:9.1.0016-1ubuntu7.9) over (2:9.1.0016-1ubuntu7.5) ... Preparing to unpack .../4-xxd_2%3a9.1.0016-1ubuntu7.9_amd64.deb ... Unpacking xxd (2:9.1.0016-1ubuntu7.9) over (2:9.1.0016-1ubuntu7.5) ... Preparing to unpack .../5-git-man_1%3a2.43.0-1ubuntu7.3_all.deb ... Unpacking git-man (1:2.43.0-1ubuntu7.3) over (1:2.43.0-1ubuntu7.1) ... Preparing to unpack .../6-git_1%3a2.43.0-1ubuntu7.3_amd64.deb ... Unpacking git (1:2.43.0-1ubuntu7.3) over (1:2.43.0-1ubuntu7.1) ... 
Selecting previously unselected package libnl-genl-3-200:amd64. Preparing to unpack .../7-libnl-genl-3-200_3.7.0-0.3build1.1_amd64.deb ... Unpacking libnl-genl-3-200:amd64 (3.7.0-0.3build1.1) ... Selecting previously unselected package htop. Preparing to unpack .../8-htop_3.3.0-4build1_amd64.deb ... Unpacking htop (3.3.0-4build1) ... Setting up xxd (2:9.1.0016-1ubuntu7.9) ... Setting up vim-common (2:9.1.0016-1ubuntu7.9) ... Setting up libnl-genl-3-200:amd64 (3.7.0-0.3build1.1) ... Setting up git-man (1:2.43.0-1ubuntu7.3) ... Setting up vim-runtime (2:9.1.0016-1ubuntu7.9) ... Setting up vim (2:9.1.0016-1ubuntu7.9) ... Setting up htop (3.3.0-4build1) ... Setting up vim-tiny (2:9.1.0016-1ubuntu7.9) ... Setting up git (1:2.43.0-1ubuntu7.3) ... Processing triggers for libc-bin (2.39-0ubuntu8.6) ... Processing triggers for man-db (2.12.0-4build2) ... Processing triggers for hicolor-icon-theme (0.17-2) ... (base) os01@DESKTOP-F3AFPI2:~$ # 安装NVIDIA容器工具包 distribution=$(. /etc/os-release;echo $ID$VERSION_ID) curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list sudo apt update && sudo apt install -y nvidia-docker2 # 验证CUDA可用性 nvidia-smi # 应显示与Windows主机相同的GPU信息 Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)). gpg: no valid OpenPGP data found. # Unsupported distribution! # Check https://nvidia.github.io/nvidia-docker -bash: syntax error near unexpected token `;&&#39; Sat Oct 11 15:45:43 2025 +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 580.82.10 Driver Version: 581.29 CUDA Version: 13.0 | +-----------------------------------------+------------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+========================+======================| | 0 NVIDIA GeForce RTX 5090 On | 00000000:01:00.0 On | N/A | | 0% 43C P0 73W / 600W | 1244MiB / 32607MiB | 2% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ +-----------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=========================================================================================| | No running processes found | +-----------------------------------------------------------------------------------------+ (base) os01@DESKTOP-F3AFPI2:~$ git clone https://gitcode.com/GitHub_Trending/op/Open-Sora-Plan.git cd Open-Sora-Plan git checkout mindspeed_mmdit # 切换到最新NPU支持分支 fatal: destination path &#39;Open-Sora-Plan&#39; already exists and is not an empty directory. Already on &#39;mindspeed_mmdit&#39; Your branch is up to date with &#39;origin/mindspeed_mmdit&#39;. 
(base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$ # 使用conda管理环境(推荐) wget https://mirrors.tuna.tsinghua.edu.cn/anaconda/miniconda/Miniconda3-py38_23.11.0-2-Linux-x86_64.sh bash Miniconda3-py38_23.11.0-2-Linux-x86_64.sh -b -p $HOME/miniconda3 source $HOME/miniconda3/bin/activate conda env create -f environment.yml conda activate open-sora # 验证Python版本和CUDA支持 python -c "import torch; print(&#39;CUDA可用:&#39;, torch.cuda.is_available())" # 应返回True --2025-10-11 15:46:07-- https://mirrors.tuna.tsinghua.edu.cn/anaconda/miniconda/Miniconda3-py38_23.11.0-2-Linux-x86_64.sh Resolving mirrors.tuna.tsinghua.edu.cn (mirrors.tuna.tsinghua.edu.cn)... 101.6.15.130, 2402:f000:1:400::2 Connecting to mirrors.tuna.tsinghua.edu.cn (mirrors.tuna.tsinghua.edu.cn)|101.6.15.130|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 131844786 (126M) [application/octet-stream] Saving to: ‘Miniconda3-py38_23.11.0-2-Linux-x86_64.sh.1’ Miniconda3-py38_23.11.0-2-Lin 100%[=================================================>] 125.74M 39.6MB/s in 3.3s 2025-10-11 15:46:11 (37.8 MB/s) - ‘Miniconda3-py38_23.11.0-2-Linux-x86_64.sh.1’ saved [131844786/131844786] ERROR: File or directory already exists: &#39;/home/os01/miniconda3&#39; If you want to update an existing installation, use the -u option. EnvironmentFileNotFound: &#39;/home/os01/Open-Sora-Plan/environment.yml&#39; file not found EnvironmentNameNotFound: Could not find conda environment: open-sora You can list all discoverable environments with `conda info --envs`. CUDA可用: True (base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$ # 配置pip国内源 pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple # 手动安装特定版本依赖 pip install torch==2.1.0+cu118 torchvision==0.16.0+cu118 --extra-index-url https://download.pytorch.org/whl/cu118 pip install xformers==0.0.22.post7 accelerate==0.34.0 deepspeed==0.12.6 Writing to /home/os01/.config/pip/pip.conf Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple, https://download.pytorch.org/whl/cu118 Requirement already satisfied: torch==2.1.0+cu118 in /home/os01/miniconda3/lib/python3.8/site-packages (2.1.0+cu118) Requirement already satisfied: torchvision==0.16.0+cu118 in /home/os01/miniconda3/lib/python3.8/site-packages (0.16.0+cu118) Requirement already satisfied: filelock in /home/os01/miniconda3/lib/python3.8/site-packages (from torch==2.1.0+cu118) (3.16.1) Requirement already satisfied: typing-extensions in /home/os01/miniconda3/lib/python3.8/site-packages (from torch==2.1.0+cu118) (4.13.2) Requirement already satisfied: sympy in /home/os01/miniconda3/lib/python3.8/site-packages (from torch==2.1.0+cu118) (1.13.3) Requirement already satisfied: networkx in /home/os01/miniconda3/lib/python3.8/site-packages (from torch==2.1.0+cu118) (3.1) Requirement already satisfied: jinja2 in /home/os01/miniconda3/lib/python3.8/site-packages (from torch==2.1.0+cu118) (3.1.6) Requirement already satisfied: fsspec in /home/os01/miniconda3/lib/python3.8/site-packages (from torch==2.1.0+cu118) (2025.3.0) Requirement already satisfied: triton==2.1.0 in /home/os01/miniconda3/lib/python3.8/site-packages (from torch==2.1.0+cu118) (2.1.0) Requirement already satisfied: numpy in /home/os01/miniconda3/lib/python3.8/site-packages (from torchvision==0.16.0+cu118) (1.24.4) Requirement already satisfied: requests in /home/os01/miniconda3/lib/python3.8/site-packages (from torchvision==0.16.0+cu118) (2.31.0) Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /home/os01/miniconda3/lib/python3.8/site-packages (from 
torchvision==0.16.0+cu118) (10.4.0) Requirement already satisfied: MarkupSafe>=2.0 in /home/os01/miniconda3/lib/python3.8/site-packages (from jinja2->torch==2.1.0+cu118) (2.1.5) Requirement already satisfied: charset-normalizer<4,>=2 in /home/os01/miniconda3/lib/python3.8/site-packages (from requests->torchvision==0.16.0+cu118) (2.0.4) Requirement already satisfied: idna<4,>=2.5 in /home/os01/miniconda3/lib/python3.8/site-packages (from requests->torchvision==0.16.0+cu118) (3.4) Requirement already satisfied: urllib3<3,>=1.21.1 in /home/os01/miniconda3/lib/python3.8/site-packages (from requests->torchvision==0.16.0+cu118) (1.26.18) Requirement already satisfied: certifi>=2017.4.17 in /home/os01/miniconda3/lib/python3.8/site-packages (from requests->torchvision==0.16.0+cu118) (2023.11.17) Requirement already satisfied: mpmath<1.4,>=1.1.0 in /home/os01/miniconda3/lib/python3.8/site-packages (from sympy->torch==2.1.0+cu118) (1.3.0) Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Collecting xformers==0.0.22.post7 Using cached https://pypi.tuna.tsinghua.edu.cn/packages/56/5f/20481c8ccfbd2ac0f936908c9b0ff3e31380d8d186d7dabf34a941b3127f/xformers-0.0.22.post7-cp38-cp38-manylinux2014_x86_64.whl (211.8 MB) Collecting accelerate==0.34.0 Using cached https://pypi.tuna.tsinghua.edu.cn/packages/02/0e/626f2dd4325f4545fbaaf9c590390d2d4ab8e7551579346fe1e319bd93af/accelerate-0.34.0-py3-none-any.whl (324 kB) Collecting deepspeed==0.12.6 Using cached https://pypi.tuna.tsinghua.edu.cn/packages/f1/ff/0fba0fec90e7de1c7148b0527e8ac9cdf2280d274ed135bcb2187f7497a7/deepspeed-0.12.6.tar.gz (1.2 MB) Preparing metadata (setup.py) ... error error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─> [16 lines of output] Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "/tmp/pip-install-3za_tc1r/deepspeed_40a9c88a17284f49b38ee0bcb816fee1/setup.py", line 100, in <module> cuda_major_ver, cuda_minor_ver = installed_cuda_version() File "/tmp/pip-install-3za_tc1r/deepspeed_40a9c88a17284f49b38ee0bcb816fee1/op_builder/builder.py", line 52, in installed_cuda_version output = subprocess.check_output([cuda_home + "/bin/nvcc", "-V"], universal_newlines=True) File "/home/os01/miniconda3/lib/python3.8/subprocess.py", line 415, in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, File "/home/os01/miniconda3/lib/python3.8/subprocess.py", line 493, in run with Popen(*popenargs, **kwargs) as process: File "/home/os01/miniconda3/lib/python3.8/subprocess.py", line 858, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "/home/os01/miniconda3/lib/python3.8/subprocess.py", line 1720, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) FileNotFoundError: [Errno 2] No such file or directory: &#39;/usr/local/cuda/bin/nvcc&#39; [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. 
(base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$ ls /usr/local/cuda*/bin/nvcc # 检查常见路径 ls: cannot access &#39;/usr/local/cuda*/bin/nvcc&#39;: No such file or directory (base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$ # 查看已安装驱动 nvidia-smi # 确认最高支持的CUDA版本 # 从官网下载对应版本安装 wget https://developer.download.nvidia.com/compute/cuda/12.2.2/local_installers/cuda_12.2.2_535.104.05_linux.run sudo sh cuda_12.2.2_535.104.05_linux.run Sat Oct 11 15:47:57 2025 +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 580.82.10 Driver Version: 581.29 CUDA Version: 13.0 | +-----------------------------------------+------------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+========================+======================| | 0 NVIDIA GeForce RTX 5090 On | 00000000:01:00.0 On | N/A | | 0% 42C P5 33W / 600W | 1290MiB / 32607MiB | 1% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ +-----------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=========================================================================================| | No running processes found | +-----------------------------------------------------------------------------------------+ --2025-10-11 15:47:57-- https://developer.download.nvidia.com/compute/cuda/12.2.2/local_installers/cuda_12.2.2_535.104.05_linux.run Resolving developer.download.nvidia.com (developer.download.nvidia.com)... 23.200.143.149, 23.200.143.133 Connecting to developer.download.nvidia.com (developer.download.nvidia.com)|23.200.143.149|:443... connected. HTTP request sent, awaiting response... 301 Moved Permanently Location: https://developer.download.nvidia.cn/compute/cuda/12.2.2/local_installers/cuda_12.2.2_535.104.05_linux.run [following] --2025-10-11 15:47:57-- https://developer.download.nvidia.cn/compute/cuda/12.2.2/local_installers/cuda_12.2.2_535.104.05_linux.run Resolving developer.download.nvidia.cn (developer.download.nvidia.cn)... 39.173.184.184, 39.173.184.185, 39.173.184.186, ... Connecting to developer.download.nvidia.cn (developer.download.nvidia.cn)|39.173.184.184|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 4344134690 (4.0G) [application/octet-stream] Saving to: ‘cuda_12.2.2_535.104.05_linux.run’ cuda_12.2.2_535.104.05_linux. 100%[=================================================>] 4.04G 43.0MB/s in 97s 2025-10-11 15:49:35 (42.5 MB/s) - ‘cuda_12.2.2_535.104.05_linux.run’ saved [4344134690/4344134690] sh: 1: dkms: not found Installation failed. See log at /var/log/cuda-installer.log for details. 
(base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$ nano ~/.bashrc (base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$ source ~/.bashrc (base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$ echo $CUDA_HOME # 应显示正确路径 nvcc --version # 应显示版本信息 /usr/local/cuda-11.8 nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2023 NVIDIA Corporation Built on Fri_Jan__6_16:45:21_PST_2023 Cuda compilation tools, release 12.0, V12.0.140 Build cuda_12.0.r12.0/compiler.32267302_0 (base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$ CUDA_HOME=/usr/local/cuda-11.8 pip install deepspeed==0.12.6 Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Collecting deepspeed==0.12.6 Using cached https://pypi.tuna.tsinghua.edu.cn/packages/f1/ff/0fba0fec90e7de1c7148b0527e8ac9cdf2280d274ed135bcb2187f7497a7/deepspeed-0.12.6.tar.gz (1.2 MB) Preparing metadata (setup.py) ... error error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─> [16 lines of output] Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "/tmp/pip-install-lu7o1st4/deepspeed_fb52507ecf3c4a568c73c1bf64c6c15a/setup.py", line 100, in <module> cuda_major_ver, cuda_minor_ver = installed_cuda_version() File "/tmp/pip-install-lu7o1st4/deepspeed_fb52507ecf3c4a568c73c1bf64c6c15a/op_builder/builder.py", line 52, in installed_cuda_version output = subprocess.check_output([cuda_home + "/bin/nvcc", "-V"], universal_newlines=True) File "/home/os01/miniconda3/lib/python3.8/subprocess.py", line 415, in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, File "/home/os01/miniconda3/lib/python3.8/subprocess.py", line 493, in run with Popen(*popenargs, **kwargs) as process: File "/home/os01/miniconda3/lib/python3.8/subprocess.py", line 858, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "/home/os01/miniconda3/lib/python3.8/subprocess.py", line 1720, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) FileNotFoundError: [Errno 2] No such file or directory: &#39;/usr/local/cuda-11.8/bin/nvcc&#39; [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. (base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$ sudo ln -s /usr/local/cuda-11.8 /usr/local/cuda (base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$ CUDA_HOME=/usr/local/cuda-11.8 pip install deepspeed==0.12.6 Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Collecting deepspeed==0.12.6 Using cached https://pypi.tuna.tsinghua.edu.cn/packages/f1/ff/0fba0fec90e7de1c7148b0527e8ac9cdf2280d274ed135bcb2187f7497a7/deepspeed-0.12.6.tar.gz (1.2 MB) Preparing metadata (setup.py) ... error error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. 
(base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$ sudo chmod -R 755 /usr/local/cuda-11.8/bin
chmod: cannot access '/usr/local/cuda-11.8/bin': No such file or directory
(base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$ python -c "import deepspeed; print(deepspeed.__version__)"  # should print 0.12.6
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'deepspeed'
(base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$ sudo apt install build-essential  # make sure the build toolchain is complete
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
build-essential is already the newest version (12.10ubuntu1).
The following packages were automatically installed and are no longer required:
  libdrm-nouveau2 libdrm-radeon1 libgl1-amber-dri libglapi-mesa libxcb-dri2-0
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 168 not upgraded.
(base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$ source ~/.bashrc
(base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$ echo $CUDA_HOME  # should print the correct path
nvcc --version  # should print version info
/usr/local/cuda-11.8
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Jan__6_16:45:21_PST_2023
Cuda compilation tools, release 12.0, V12.0.140
Build cuda_12.0.r12.0/compiler.32267302_0
(base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$ sudo ln -s /usr/local/cuda-11.8 /usr/local/cuda
ln: failed to create symbolic link '/usr/local/cuda': File exists
(base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$ CUDA_HOME=/usr/local/cuda-11.8 pip install deepspeed==0.12.6
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting deepspeed==0.12.6
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/f1/ff/0fba0fec90e7de1c7148b0527e8ac9cdf2280d274ed135bcb2187f7497a7/deepspeed-0.12.6.tar.gz (1.2 MB)
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error
  (same FileNotFoundError: '/usr/local/cuda-11.8/bin/nvcc' traceback as above)
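The chmod failure is the telling clue: /usr/local/cuda-11.8/bin does not exist at all, so no amount of permissions work, symlinking, or re-sourcing ~/.bashrc can surface an nvcc there. Before pinning CUDA_HOME again it is worth asking PyTorch which toolkit it was built against, since DeepSpeed's CUDA ops must compile against a compatible one. A hedged check, assuming torch is already installed in this conda env:

$ python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"

If this prints, say, 11.8 while only a 12.x toolkit is on disk, the cleaner options are installing the 11.8 toolkit alongside, or reinstalling a torch build that matches the toolkit that actually exists.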
(base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$ ls -ld /usr/local/cuda
lrwxrwxrwx 1 root root 20 Oct 11 15:54 /usr/local/cuda -> /usr/local/cuda-11.8
(base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$ # remove the old link
sudo rm /usr/local/cuda
# create the new link (substitute your actual path)
sudo ln -s /usr/local/cuda-11.8 /usr/local/cuda
(base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$ ls -l /usr/local | grep cuda
lrwxrwxrwx 1 root root 20 Oct 11 15:58 cuda -> /usr/local/cuda-11.8
drwxr-xr-x 6 root root 4096 Oct 11 15:51 cuda-12.2
(base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$ sudo ldconfig  # refresh the dynamic linker cache
source ~/.bashrc  # reload environment variables
(base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$ CUDA_HOME=/usr/local/cuda-11.8 pip install deepspeed==0.12.6
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting deepspeed==0.12.6
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/f1/ff/0fba0fec90e7de1c7148b0527e8ac9cdf2280d274ed135bcb2187f7497a7/deepspeed-0.12.6.tar.gz (1.2 MB)
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error
  (same FileNotFoundError: '/usr/local/cuda-11.8/bin/nvcc' traceback as above)
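This listing is the whole story in two lines: the only toolkit directory on disk is /usr/local/cuda-12.2, while the cuda symlink, and CUDA_HOME with it, point at a cuda-11.8 that was never installed. Relinking against the directory that exists is the direct fix; a sketch, with the caveat that the dkms failure above may have aborted the 12.2 install partway, so verify nvcc is actually in it first:

$ sudo rm /usr/local/cuda
$ sudo ln -s /usr/local/cuda-12.2 /usr/local/cuda
$ readlink -f /usr/local/cuda && ls /usr/local/cuda/bin/nvcc   # both must succeed before retrying pip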
(base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$ sudo chmod -R 755 /usr/local/cuda-11.8/bin
chmod: cannot access '/usr/local/cuda-11.8/bin': No such file or directory
(base) os01@DESKTOP-F3AFPI2:~/Open-Sora-Plan$
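Taken together, every failure in this session traces back to one stale setting: CUDA_HOME=/usr/local/cuda-11.8 in ~/.bashrc. A minimal recovery sequence, assuming the 12.2 toolkit on disk is complete, ~/.bashrc spells the path literally as cuda-11.8, and the installed torch is compatible with 12.x:

$ sudo rm -f /usr/local/cuda && sudo ln -s /usr/local/cuda-12.2 /usr/local/cuda
$ sed -i 's|cuda-11.8|cuda-12.2|g' ~/.bashrc   # hypothetical edit: rewrite the stale paths in place
$ source ~/.bashrc
$ echo $CUDA_HOME && nvcc -V                   # both should now agree on 12.2
$ pip install deepspeed==0.12.6                # retry only after the check above passes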