1. Environment Setup
(1) Install the GPU driver
Disable the default open-source nouveau driver:
sudo gedit /etc/modprobe.d/blacklist.conf
Append these two lines to the end of the file and save:
blacklist nouveau
options nouveau modeset=0
Update the initramfs so the change takes effect:
sudo update-initramfs -u
Reboot, then open a terminal and run:
lsmod | grep nouveau
No output means nouveau was disabled successfully.
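The two blacklist lines above are easy to mistype; a minimal sketch that checks both are present in a modprobe config file (it runs against a temporary copy so it is safe anywhere; point CONF at /etc/modprobe.d/blacklist.conf to check the real file):

```shell
# Verify that both nouveau blacklist lines exist in a modprobe config.
# CONF is a temp file here; use /etc/modprobe.d/blacklist.conf on a real system.
CONF="$(mktemp)"
printf 'blacklist nouveau\noptions nouveau modeset=0\n' > "$CONF"

for line in "blacklist nouveau" "options nouveau modeset=0"; do
    if grep -qxF "$line" "$CONF"; then
        echo "OK: $line"
    else
        echo "MISSING: $line"
    fi
done
rm -f "$CONF"
```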
.................. (one day later) ..................
Verify the driver installation with the following command, which prints detailed information about the GPU:
nvidia-smi
(2) Install CUDA
.................. (one day later) ..................
Then install cuDNN.
(3) Update the system and install dependencies:
sudo apt update && sudo apt upgrade -y
sudo apt install docker.io docker-compose python3-pip git -y
(4) Tune kernel parameters (to avoid running out of memory-mapped areas):
sudo sysctl -w vm.max_map_count=262144
# Make it permanent (applies at next boot)
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
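Appending with tee -a adds a new line every time the setup is re-run; a hedged sketch of making the setting persistent idempotently (demonstrated on a temp file; on a real system use SYSCTL_CONF=/etc/sysctl.conf with sudo):

```shell
# Persist vm.max_map_count without duplicating the line on repeated runs.
SYSCTL_CONF="$(mktemp)"   # stand-in for /etc/sysctl.conf
KEY="vm.max_map_count"
VAL="262144"

for i in 1 2; do   # run twice to show idempotence
    grep -q "^${KEY}[[:space:]]*=" "$SYSCTL_CONF" || echo "${KEY}=${VAL}" >> "$SYSCTL_CONF"
done
grep -c "^${KEY}" "$SYSCTL_CONF"   # prints 1, not 2
rm -f "$SYSCTL_CONF"
```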
2. Downloads via the official installer kept failing
(1) Follow the official manual-install documentation: https://github.com/ollama/ollama/blob/main/docs/linux.md
Choose the build that matches your system's CPU and GPU:
x86_64 CPU: download ollama-linux-amd64
x86_64 with AMD GPU (ROCm): download ollama-linux-amd64-rocm
aarch64/arm64 CPU: download ollama-linux-arm64
Our system is x86_64 with a GPU, so we downloaded the ROCm build (note: the ROCm package targets AMD GPUs; for an NVIDIA card, the base ollama-linux-amd64 package already includes CUDA support):
curl -L https://ollama.com/download/ollama-linux-amd64-rocm.tgz -o ollama-linux-amd64-rocm.tgz
# Extract:
sudo tar -C /usr -xzf ollama-linux-amd64-rocm.tgz
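The extraction step can be dry-run to confirm the tarball lays out bin/ollama under the target prefix. A sketch using a mock tarball in a temp dir as a stand-in for the real download (for a real install, substitute the downloaded .tgz and PREFIX=/usr with sudo):

```shell
# Build a mock tarball with the same bin/ollama layout, extract it the same
# way as the documented command, and confirm the binary lands where expected.
WORK="$(mktemp -d)"
PREFIX="$WORK/usr"          # stand-in for /usr
mkdir -p "$WORK/stage/bin" "$PREFIX"
printf '#!/bin/sh\n' > "$WORK/stage/bin/ollama"   # placeholder for the real binary
chmod +x "$WORK/stage/bin/ollama"
tar -C "$WORK/stage" -czf "$WORK/mock.tgz" bin

# Same shape as: sudo tar -C /usr -xzf ollama-linux-amd64-rocm.tgz
tar -C "$PREFIX" -xzf "$WORK/mock.tgz"
[ -x "$PREFIX/bin/ollama" ] && echo "ollama unpacked to $PREFIX/bin/ollama"
rm -rf "$WORK"
```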
==== Outcome: the download kept failing, the manual install did not work, and the service would not start ====
(2) Installing through snap instead is much faster, and it automatically selects a stable build appropriate for whether the system has a GPU:
sudo snap install ollama
Check the newly created folders:
~/snap
/snap
/snap/bin
Run: ollama serve
Open http://127.0.0.1:11434 in a browser.
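The browser check can be scripted; a minimal sketch (assuming the default port 11434 and curl installed) that degrades to a message instead of failing, so it is safe to run before the service is up:

```shell
# Probe the local Ollama endpoint; the server replies "Ollama is running" on /.
URL="http://127.0.0.1:11434"
if resp="$(curl -fsS --max-time 2 "$URL" 2>/dev/null)"; then
    echo "Ollama reachable: $resp"
else
    echo "Ollama not reachable at $URL"
fi
```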
Once the server responds, configure it to run as a managed systemd service:
sudo vi /etc/systemd/system/ollama.service
The lines changed from the stock unit file are ExecStart (pointing at the snap binary) and the Environment entries:
[Unit]
Description=Ollama Service
After=network-online.target
[Service]
ExecStart=/snap/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_ORIGINS=*"
[Install]
WantedBy=default.target
After editing, double-check these two lines in the unit file:
ExecStart=/snap/bin/ollama serve
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
Download, install, and run a model (models can be browsed at https://ollama.com/library/qwen3):
ollama run deepseek-r1:1.5b
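Under the hood, `ollama run` talks to the server's REST API (POST /api/generate with a model name and prompt). A sketch of the equivalent request; the prompt text is just an example, and the JSON body is only built and printed here (uncomment the curl line to send it to a running server):

```shell
# Build the JSON body for Ollama's /api/generate endpoint.
MODEL="deepseek-r1:1.5b"
PROMPT="Why is the sky blue?"
BODY="$(printf '{"model":"%s","prompt":"%s","stream":false}' "$MODEL" "$PROMPT")"
echo "$BODY"
# curl -s http://127.0.0.1:11434/api/generate -d "$BODY"
```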
====== Verification ======
# Reload the systemd configuration
sudo systemctl daemon-reload
# Start the service
sudo systemctl start ollama.service
# Check the service status
sudo systemctl status ollama.service
# Enable the service at boot
sudo systemctl enable ollama.service
Ask the model a complex question so it starts generating, then run nvidia-smi to check GPU usage (VRAM, utilization, and so on):
Wed Jul 23 11:57:08 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.169                Driver Version: 570.169        CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4060        Off |   00000000:01:00.0  On |                  N/A |
| 34%   48C    P8             N/A /  115W |    2271MiB /   8188MiB |      6%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A            3939      G   /usr/lib/xorg/Xorg                      255MiB |
|    0   N/A  N/A            4108      G   /usr/bin/gnome-shell                     33MiB |
|    0   N/A  N/A            4604      G   fcitx-qimpanel                           25MiB |
|    0   N/A  N/A           13060      G   .../6495/usr/lib/firefox/firefox        144MiB |
|    0   N/A  N/A          123747      C   /snap/ollama/55/bin/ollama             1742MiB |
+-----------------------------------------------------------------------------------------+
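To watch only ollama's VRAM usage rather than the whole table, the Processes section can be filtered. A sketch that parses the sample row captured above, so it runs without a GPU (on a real machine, pipe `nvidia-smi` in instead of the here-doc):

```shell
# Extract the ollama process's GPU memory from nvidia-smi's Processes table.
awk '/bin\/ollama/ { print "ollama GPU memory: " $(NF-1) }' <<'EOF'
|    0   N/A  N/A          123747      C   /snap/ollama/55/bin/ollama             1742MiB |
EOF
```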
===================================
Running Ollama in the background:
ollama serve &
Open http://localhost:11434/ to confirm it is up.
==========================