VS Code error: "application not found" when opening http://127.0.0.1:5500/11.html

This post describes a problem the author hit while trying to change the system default browser. The cause turned out to be the "browser protection" feature of a PC security suite (PC Manager, 电脑管家), which had locked Edge as the default browser. The post shows where that setting lives and how to unlock it, for anyone facing the same issue.


Solution: change the default browser. However, the settings dialog crashed (flashed open and closed) on every attempt, so the change never took effect. Following the method in this post (https://blog.youkuaiyun.com/qq_44214671/article/details/107036038), open PC Manager → Toolbox → Browser Protection. It turned out Edge had been locked as the default browser there; removing the lock allowed the default browser to be changed, and the error went away.
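If changing the system default browser is not an option, Live Server can also be pointed at a specific browser directly. A minimal sketch of the relevant VS Code setting, assuming the Live Server extension (ritwickdey.liveserver) is installed; the browser value is an example and must match a browser actually present on your machine:

```json
// .vscode/settings.json (VS Code accepts comments in settings files)
{
  // Tell Live Server which browser to launch instead of the system default.
  // Other accepted values include "firefox" and "microsoft-edge".
  "liveServer.settings.CustomBrowser": "chrome"
}
```

With this set, Live Server opens pages such as http://127.0.0.1:5500/11.html in the configured browser, sidestepping a broken default-browser association entirely.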
