Installing and Using the Latest Llama 3 Medical LLM

Why cover medical models? Because our work keeps us busy, we tend to put off seeing a doctor until a problem can no longer be ignored; older people in particular dislike the hassle and delay even longer. With models like these, we can ask questions to get a preliminary understanding of a condition, and also pick up some general health-care knowledge along the way. In that sense, these models are a genuine benefit to people. That said, for any personal medical need, be sure to consult a qualified healthcare provider.

1. Introduction to the Medical LLM

OpenBioLLM-Llama3 is an open-source LLM for the medical domain that outperforms GPT-4, Gemini, Meditron-70B, Med-PaLM-1, and Med-PaLM-2 on biomedical benchmarks. It comes in two sizes: 70B and 8B.

OpenBioLLM-70B delivers SOTA performance, setting a new state of the art for models of its size.

The OpenBioLLM-8B model even surpasses GPT-3.5, Gemini, and Meditron-70B.

  • Medical-LLM leaderboard: https://huggingface.co/spaces/openlifescienceai/open_medical_llm_leaderboard

  • 70B:https://huggingface.co/aaditya/Llama3-OpenBioLLM-70B

  • 8B:https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B

2. Installation Guide

2.1 Install the llama-cpp-python Dependency

pip install llama-cpp-python   

Installation output:

```
Collecting llama-cpp-python
  Downloading llama_cpp_python-0.2.65.tar.gz (38.0 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 38.0/38.0 MB 42.3 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... done
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in /usr/local/lib/python3.10/dist-packages (from llama-cpp-python) (4.11.0)
Requirement already satisfied: numpy>=1.20.0 in /usr/local/lib/python3.10/dist-packages (from llama-cpp-python) (1.25.2)
Collecting diskcache>=5.6.1 (from llama-cpp-python)
  Downloading diskcache-5.6.3-py3-none-any.whl (45 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.5/45.5 kB 6.7 MB/s eta 0:00:00
Requirement already satisfied: jinja2>=2.11.3 in /usr/local/lib/python3.10/dist-packages (from llama-cpp-python) (3.1.3)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2>=2.11.3->llama-cpp-python) (2.1.5)
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... done
  Created wheel for llama-cpp-python: filename=llama_cpp_python-0.2.65-cp310-cp310-linux_x86_64.whl size=39397391 sha256=6f91e47e67bea9fd5cae38ebcc05ea19b6c344a1a609a9d497e4e92e026b611a
  Stored in directory: /root/.cache/pip/wheels/46/37/bf/f7c65dbafa5b3845795c23b6634863c1fdf0a9f40678de225e
Successfully built llama-cpp-python
Installing collected packages: diskcache, llama-cpp-python
Successfully installed diskcache-5.6.3 llama-cpp-python-0.2.65
```
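By default, pip builds llama-cpp-python for CPU only. To offload layers to an NVIDIA GPU (which the `n_gpu_layers` option used below relies on), the wheel can be rebuilt with CUDA enabled. The exact CMake flag is version-dependent, so treat this as a sketch: newer llama-cpp-python releases use `GGML_CUDA`, while older releases (such as the 0.2.65 shown above) used `LLAMA_CUBLAS`.

```shell
# Rebuild llama-cpp-python with CUDA support (requires the CUDA toolkit installed).
# Newer llama-cpp-python releases:
CMAKE_ARGS="-DGGML_CUDA=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
# Older releases used a different flag:
#   CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
```

If you skip this step, the model still runs, just entirely on CPU.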

2.2 Download the Model

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_name = "aaditya/OpenBioLLM-Llama3-8B-GGUF"
model_file = "openbiollm-llama3-8b.Q5_K_M.gguf"

model_path = hf_hub_download(model_name,
                             filename=model_file,
                             local_dir='/content')
print("My model path: ", model_path)
llm = Llama(model_path=model_path,
            n_gpu_layers=-1)
```

Download and loading output:

```
openbiollm-llama3-8b.Q5_K_M.gguf: 100%    5.73G/5.73G [00:15<00:00, 347MB/s]
llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors from /content/openbiollm-llama3-8b.Q5_K_M.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = .
llama_model_loader: - kv   2:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
llama_model_loader: - kv   4:                     llama.embedding_length u32
```
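With the model loaded, you can query it. OpenBioLLM-Llama3 is a Llama 3 fine-tune, so a reasonable assumption is that it expects the standard Llama 3 instruct prompt format. The sketch below is illustrative, not part of the original guide: `build_prompt` is a hypothetical helper we introduce here, and the model path mirrors the `/content` download location from the code above. Inference only runs if the GGUF file is actually present.

```python
import os


def build_prompt(system: str, user: str) -> str:
    """Assemble a prompt in the standard Llama 3 instruct format,
    which OpenBioLLM-Llama3 is assumed to follow as a Llama 3 fine-tune."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )


prompt = build_prompt(
    "You are a helpful medical assistant. Your answers are general "
    "information, not a substitute for professional medical advice.",
    "What are common early symptoms of type 2 diabetes?",
)

model_path = "/content/openbiollm-llama3-8b.Q5_K_M.gguf"
if os.path.exists(model_path):  # only run inference when the model file is present
    from llama_cpp import Llama

    llm = Llama(model_path=model_path, n_gpu_layers=-1)
    out = llm(prompt, max_tokens=256, stop=["<|eot_id|>"])
    print(out["choices"][0]["text"])
```

The `stop=["<|eot_id|>"]` argument keeps the model from running past the end of its turn; alternatively, llama-cpp-python's `create_chat_completion` can apply the chat template for you.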