The server already had Ollama installed. I wanted to reinstall it and ran into a series of problems along the way.
Install guide: https://github.com/ollama/ollama/blob/main/docs/linux.md
Model download link: https://ollama.com/library/deepseek-r1:1.5b
I. Installing the new Ollama
All steps below are run as the root user.
1. Uninstall the existing Ollama
# Remove the ollama service:
sudo systemctl stop ollama
sudo systemctl disable ollama
sudo rm /etc/systemd/system/ollama.service
# Remove the ollama binary from your bin directory (either /usr/local/bin, /usr/bin, or /bin):
sudo rm $(which ollama)
# Remove the downloaded models and Ollama service user and group:
sudo rm -r /usr/share/ollama
sudo userdel ollama
sudo groupdel ollama
# Remove installed libraries:
sudo rm -rf /usr/local/lib/ollama
# Remove the old libraries:
sudo rm -rf /usr/lib/ollama
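Before reinstalling, it is worth confirming the old installation is fully gone. A quick check against the same paths removed above:
# Each of these should now come back empty or "not found"
which ollama
ls /usr/share/ollama /usr/local/lib/ollama /usr/lib/ollama 2>/dev/null
systemctl status ollama --no-pager   # should report the unit is not found
id ollama                            # should report "no such user"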
2. Install
curl -fsSL https://ollama.com/install.sh | sh
This command downloads very slowly from within China, so I installed manually instead.
Download and extract the package:
curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama-linux-amd64.tgz
# If the server cannot download it directly, open https://ollama.com/download/ollama-linux-amd64.tgz in a browser on your local machine, then copy the file to the server
sudo tar -C /usr -xzf ollama-linux-amd64.tgz
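Optionally verify the extraction: the tarball unpacks bin/ollama and lib/ollama under /usr, matching the paths removed during uninstall:
ls -l /usr/bin/ollama   # the ollama binary
ls /usr/lib/ollama      # bundled runtime libraries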
# Start Ollama:
ollama serve
# It starts fine, but the logs occupy the foreground, so run it in the background instead:
# ollama serve &
# In another terminal, verify that Ollama is running:
ollama -v
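If you prefer to keep running it by hand instead of as a service, a minimal sketch of a background variant that survives closing the terminal and writes logs to a file (the log path here is just an example):
nohup ollama serve > /var/log/ollama.log 2>&1 &
tail -f /var/log/ollama.log   # follow the logs when needed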
Set up Ollama as a startup service
# Adding Ollama as a startup service (recommended)
# Create a user and group for Ollama:
sudo useradd -r -s /bin/false -U -m -d /usr/share/ollama ollama
sudo usermod -a -G ollama $(whoami)
# Create a service file in /etc/systemd/system/ollama.service:
[Unit]
Description=Ollama Service
After=network-online.target
[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=$PATH"
[Install]
WantedBy=multi-user.target
# Then start the service:
sudo systemctl daemon-reload
sudo systemctl enable ollama
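After enabling, start the service and confirm it is up with the standard systemd checks:
sudo systemctl start ollama
systemctl status ollama --no-pager
ollama -v   # should now reach the server on 127.0.0.1:11434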
II. Problems encountered
1. Error: listen tcp 127.0.0.1:11434: bind: address already in use
# Command
ollama serve
# Error
Error: listen tcp 127.0.0.1:11434: bind: address already in use
Find the process occupying port 11434 and terminate it:
# Check which process holds the port
sudo lsof -i :11434
# Kill it
kill -9 <PID>
# Restart
ollama serve
If the problem persists (a new ollama process with a different PID appears as soon as the old one is killed):
root@user-NF5280M6:/home/lzm/Downloads# sudo lsof -i :11434
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ollama 3558609 root 3u IPv4 274643309 0t0 TCP localhost:11434 (LISTEN)
root@user-NF5280M6:/home/lzm/Downloads# kill -9 3558609
root@user-NF5280M6:/home/lzm/Downloads# sudo lsof -i :11434
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ollama 3563660 root 3u IPv4 274711630 0t0 TCP localhost:11434 (LISTEN)
ps aux | grep ollama
# Force-kill all Ollama processes
pkill ollama
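Note that the unit file above sets Restart=always, so systemd respawns any ollama process that gets killed, which is exactly why a new PID kept appearing after kill -9. The cleaner fix is to stop the systemd-managed instance first; a sketch:
sudo systemctl stop ollama      # stop the service so it is not restarted automatically
# sudo systemctl disable ollama # optionally keep it off at boot while debugging
sudo lsof -i :11434             # should now return nothing
ollama serve                    # start a foreground instance if you want one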
2. Model pull times out and the download fails
ollama run deepseek-r1:1.5b
[GIN] 2025/06/27 - 10:42:02 | 200 | 42.146µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/27 - 10:42:02 | 404 | 209.992µs | 127.0.0.1 | POST "/api/show"
pulling manifest ⠏ time=2025-06-27T10:42:12.655+08:00 level=INFO source=images.go:713 msg="request failed: Get \"https://registry.ollama.ai/v2/library/deepseek-r1/manifests/1.5b\": dial tcp: lookup registry.ollama.ai on 127.0.0.53:53: read udp 127.0.0.1:36213->127.0.0.53:53: i/o timeout"
[GIN] 2025/06/27 - 10:42:12 | 200 | 10.00749667s | 127.0.0.1 | POST "/api/pull"
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/deepseek-r1/manifests/1.5b": dial tcp: lookup registry.ollama.ai on 127.0.0.53:53: read udp 127.0.0.1:36213->127.0.0.53:53: i/o timeout
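The failure here is not the download itself but name resolution: the lookup of registry.ollama.ai against the local systemd-resolved stub (127.0.0.53) times out. A quick check to confirm that DNS is the problem (assumes systemd-resolved and the usual DNS tools are installed):
resolvectl status            # show the upstream DNS servers in use
nslookup registry.ollama.ai  # a timeout here confirms the DNS issue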
Switch to a domestic (China) mirror. Candidates mentioned in various guides (note that registry.ollama.ai itself is the official default registry, the same endpoint that timed out above, not an Alibaba Cloud mirror):
DeepSeek official mirror: https://ollama.deepseek.com
Zhejiang University mirror: https://ollama.zju.edu.cn
ModelScope community: https://ollama.modelscope.cn
mkdir -p ~/.ollama
cat << EOF > ~/.ollama/config.json
{
  "registry": {
    "mirrors": {
      "registry.ollama.ai": "https://ollama.deepseek.com"
    }
  }
}
EOF
sudo systemctl restart ollama
sudo systemctl status ollama
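With the service restarted, retry the pull to see whether the mirror configuration took effect; if the mirror is reachable, the manifest request should no longer time out:
ollama pull deepseek-r1:1.5b
ollama list   # the model shows up here once the pull completes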
Reference: "Deploying Ollama models locally via domestic mirrors" (CSDN blog)