ollama + deepseek + dify (continuously updated)

First, confirm that DNS is configured correctly in the production environment.
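As a rough sketch, /etc/resolv.conf might look like the following (the nameserver addresses here are placeholders, substitute your own resolvers):

# cat /etc/resolv.conf
nameserver 223.5.5.5
nameserver 8.8.8.8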

# vi /etc/resolv.conf
# wget https://ollama.com/install.sh
# sh install.sh
>>> Installing ollama to /usr/local
>>> Downloading Linux amd64 bundle
curl: (7) Failed to connect to github.com port 443: Connection refused

The error shows that the install script uses curl to download the files, so the production environment needs to go through a proxy.

# vi install.sh
# export

# export http_proxy=http://172.17.68.68:7890/
# export https_proxy=http://172.17.68.68:7890/
# !sh
sh install.sh
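The proxy can also be spot-checked directly with a curl HEAD request through it (same proxy address as the exports above, adjust for your environment):

# curl -x http://172.17.68.68:7890 -I https://ollama.com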

The installation completes:

>>> Installing ollama to /usr/local
>>> Downloading Linux amd64 bundle
##################################################################################################################################################################################################### 100.0%
>>> Creating ollama user...
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
Created symlink /etc/systemd/system/default.target.wants/ollama.service → /etc/systemd/system/ollama.service.
>>> NVIDIA GPU installed.

Pull the model:

# ollama run deepseek-r1:7b

At this point the model pull turns out to be very slow, so the ollama service itself also needs to go through the proxy.

# systemctl edit ollama.service
and add the following:

[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
Environment="HTTPS_PROXY=http://172.17.68.68:7890"


# systemctl daemon-reload

# systemctl restart ollama
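To confirm the override took effect and that ollama is now listening on all interfaces (11434 is ollama's default port), a quick check:

# systemctl show ollama --property=Environment
# ss -tlnp | grep 11434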

Pull again:

# ollama run deepseek-r1:7b
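Once the pull finishes, a simple sanity check against ollama's HTTP API (default port 11434):

# curl http://localhost:11434/api/generate -d '{"model": "deepseek-r1:7b", "prompt": "hello", "stream": false}'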

Install dify

# docker -v
Docker version 24.0.7, build afdd53b

# git clone https://github.com/langgenius/dify.git
Cloning into 'dify'...
remote: Enumerating objects: 137021, done.
remote: Counting objects: 100% (22075/22075), done.
remote: Compressing objects: 100% (1139/1139), done.
remote: Total 137021 (delta 21476), reused 20954 (delta 20935), pack-reused 114946 (from 2)
Receiving objects: 100% (137021/137021), 68.87 MiB | 8.77 MiB/s, done.
Resolving deltas: 100% (101593/101593), done.
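Note that in the dify repo the compose file lives under the docker/ directory, and the project's README has you copy the example env file before starting the stack, roughly:

# cd dify/docker
# cp .env.example .env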
# docker compose up -d
[+] Running 0/9
 ⠸ sandbox Pulling                                                                                                                                                                                       5.4s
 ⠸ weaviate Pulling                                                                                                                                                                                      5.4s
 ⠸ ssrf_proxy Pulling                                                                                                                                                                                    5.4s
 ⠸ web Pulling                                                                                                                                                                                           5.4s
 ⠸ nginx Pulling                                                                                                                                                                                         5.4s
 ⠸ db Pulling                                                                                                                                                                                            5.4s
 ⠸ api Pulling                                                                                                                                                                                           5.4s
 ⠸ worker Pulling                                                                                                                                                                                        5.4s
 ⠸ redis Pulling                           
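Once the images finish pulling, the containers can be checked with:

# docker compose ps

The web UI is served by the bundled nginx (port 80 by default, adjustable in .env). The initial setup page is typically at http://<server-ip>/install, after which ollama can be added as a model provider pointing at http://<server-ip>:11434.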
