Installing PyTorch with Pixi


I have been setting up a deep-learning environment with pixi; it took a whole evening of trial and error, and these are my notes.

```toml
[workspace]
channels = ["nvidia", "pytorch", "conda-forge"]
name = "test1"
platforms = ["win-64"]
version = "0.1.0"
channel-priority = "disabled"

[tasks]

[feature.test11.dependencies]
python = "3.10"
pytorch-cuda = { version = "12.4" }
pytorch = { version = "2.5.1", channel = "pytorch" }
torchvision = "0.20.1"
matplotlib = ">=3.10.6,<4"

[feature.test12.dependencies]
python = "3.10"

[feature.test12.pypi-dependencies]
torch = { version = "==2.7.1", index = "https://mirror.nju.edu.cn/pytorch/whl/cu126" }
torchvision = { version = "==0.22.1" }
matplotlib = ">=3.10.6,<4"

[environments]
env12 = { features = ["test12"], solve-group = "group12" }
env13 = { features = ["test11"], solve-group = "group13" }
```

The file defines two environments: one demonstrates installing via PyPI, the other via conda channels.
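To see which build actually landed in a given environment, a quick check like the following can be run inside it (a sketch; e.g. `pixi run -e env12 python check_torch.py`, where the script name is made up for illustration):

```python
import importlib.util

# Guard the import so the script also runs in an interpreter without torch.
if importlib.util.find_spec("torch") is None:
    print("torch is not installed in this interpreter")
else:
    import torch
    # torch.version.cuda is None for CPU-only builds.
    print("torch", torch.__version__,
          "| cuda:", torch.version.cuda,
          "| cuda available:", torch.cuda.is_available())
```

If `cuda` prints as `None`, you got a CPU-only build regardless of what the driver on the machine supports.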

One point worth noting is this line:

```toml
pytorch = { version = "2.5.1", channel = "pytorch" }
```

At first I did not specify the channel, and the solver kept downloading pytorch from conda-forge; that build is CPU-only and cannot use the GPU.
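The difference is visible in the conda build string shown by `pixi list` or in the solve output: CUDA builds from the pytorch channel carry a `cuda` tag, while CPU-only builds are tagged `cpu`. A small heuristic check (the build strings below are illustrative, not exact; real ones include extra hash suffixes):

```python
def is_cuda_build(build_string: str) -> bool:
    """Heuristic: conda build strings encode the accelerator variant."""
    return "cuda" in build_string.lower()

# Illustrative build strings, shaped like those in `pixi list` output:
print(is_cuda_build("py3.10_cuda12.4_cudnn9_0"))  # CUDA build -> True
print(is_cuda_build("cpu_generic_py310_0"))       # CPU-only build -> False
```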


A related question:

I installed mamba-ssm 2.2.5; the system CUDA driver is 12.8, and cudatoolkit version 12.9 is installed. The Mamba smoke-test code is:

```python
import torch
from mamba_ssm import Mamba

def test_mamba_basic():
    # Check whether CUDA is available
    device = "cuda" if torch.cuda.is_available() else "cpu"
    print(f"Using device: {device}")
    if device == "cpu":
        print("Warning: CUDA not detected, running on CPU (will be very slow)")

    # Model parameters (minimal configuration)
    batch_size = 1
    seq_len = 10   # sequence length
    d_model = 64   # model dimension
    n_layers = 1   # a single layer

    # Create the Mamba model
    model = Mamba(
        d_model=d_model,
        n_layers=n_layers,
        vocab_size=None,  # no token embedding
    ).to(device)

    # Random input of shape (batch_size, seq_len, d_model)
    x = torch.randn(batch_size, seq_len, d_model).to(device)

    # Forward pass
    try:
        with torch.no_grad():  # disable gradients to speed up the test
            output = model(x)
        # Verify the output shape
        assert output.shape == (batch_size, seq_len, d_model), "unexpected output shape"
        print("✅ Mamba forward pass succeeded!")
        print(f"Input shape: {x.shape}")
        print(f"Output shape: {output.shape}")
    except Exception as e:
        print(f"❌ Run failed: {str(e)}")

if __name__ == "__main__":
    test_mamba_basic()
```

Running it fails with:

```
/tmp/tmp6bkt1lgp/main.c:1:10: fatal error: cuda.h: No such file or directory
    1 | #include "cuda.h"
      |          ^~~~~~~~
compilation terminated.
Traceback (most recent call last):
  File "/home/xzj_24/MGCN/test_mamba.py", line 2, in <module>
    from mamba_ssm import Mamba
  File "/home/xzj_24/MGCN/.pixi/envs/default/lib/python3.10/site-packages/mamba_ssm/__init__.py", line 3, in <module>
    from mamba_ssm.ops.selective_scan_interface import selective_scan_fn, mamba_inner_fn
  File "/home/xzj_24/MGCN/.pixi/envs/default/lib/python3.10/site-packages/mamba_ssm/ops/selective_scan_interface.py", line 18, in <module>
    from mamba_ssm.ops.triton.layer_norm import _layer_norm_fwd
  File "/home/xzj_24/MGCN/.pixi/envs/default/lib/python3.10/site-packages/mamba_ssm/ops/triton/layer_norm.py", line 179, in <module>
    def _layer_norm_fwd_1pass_kernel(
  File "/home/xzj_24/MGCN/.pixi/envs/default/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 378, in decorator
    return Autotuner(fn, fn.arg_names, configs, key, reset_to_zero, restore_value, pre_hook=pre_hook,
  File "/home/xzj_24/MGCN/.pixi/envs/default/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 130, in __init__
    self.do_bench = driver.active.get_benchmarker()
  File "/home/xzj_24/MGCN/.pixi/envs/default/lib/python3.10/site-packages/triton/runtime/driver.py", line 23, in __getattr__
    self._initialize_obj()
  File "/home/xzj_24/MGCN/.pixi/envs/default/lib/python3.10/site-packages/triton/runtime/driver.py", line 20, in _initialize_obj
    self._obj = self._init_fn()
  File "/home/xzj_24/MGCN/.pixi/envs/default/lib/python3.10/site-packages/triton/runtime/driver.py", line 9, in _create_driver
    return actives[0]()
  File "/home/xzj_24/MGCN/.pixi/envs/default/lib/python3.10/site-packages/triton/backends/nvidia/driver.py", line 536, in __init__
    self.utils = CudaUtils()  # TODO: make static
  File "/home/xzj_24/MGCN/.pixi/envs/default/lib/python3.10/site-packages/triton/backends/nvidia/driver.py", line 89, in __init__
    mod = compile_module_from_src(Path(os.path.join(dirname, "driver.c")).read_text(), "cuda_utils")
  File "/home/xzj_24/MGCN/.pixi/envs/default/lib/python3.10/site-packages/triton/backends/nvidia/driver.py", line 66, in compile_module_from_src
    so = _build(name, src_path, tmpdir, library_dirs(), include_dir, libraries)
  File "/home/xzj_24/MGCN/.pixi/envs/default/lib/python3.10/site-packages/triton/runtime/build.py", line 58, in _build
    subprocess.check_call(cc_cmd, stdout=subprocess.DEVNULL)
  File "/home/xzj_24/MGCN/.pixi/envs/default/lib/python3.10/subprocess.py", line 369, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmp6bkt1lgp/main.c', '-O3', '-shared', '-fPIC', '-Wno-psabi', '-o', '/tmp/tmp6bkt1lgp/cuda_utils.cpython-310-x86_64-linux-gnu.so', '-lcuda', '-L/home/xzj_24/MGCN/.pixi/envs/default/lib/python3.10/site-packages/triton/backends/nvidia/lib', '-L/lib/x86_64-linux-gnu', '-I/home/xzj_24/MGCN/.pixi/envs/default/lib/python3.10/site-packages/triton/backends/nvidia/include', '-I/tmp/tmp6bkt1lgp', '-I/home/xzj_24/MGCN/.pixi/envs/default/include/python3.10', '-I/home/xzj_24/MGCN/.pixi/envs/default/targets/x86_64-linux/include']' returned non-zero exit status 1.
```

It is confirmed that `.zshrc` sets environment variables including (but not limited to) CUDA, CUDA_HOME, and LD_LIBRARY_PATH, yet the header still cannot be found. The system gcc version is 11. Please suggest troubleshooting steps and a solution.