Accessing a URL from the Linux command line: curl http://www.baidu.com/index.html

Browsing and downloading web pages from the command line
This post covers command-line tools such as elinks, wget, curl, and lynx for browsing and downloading web content. They are useful on servers and anywhere else a lightweight browser is needed.

1. elinks - a lynx-like alternative character-mode WWW browser

For example:

 elinks --dump http://www.baidu.com
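
To keep a plain-text copy of the rendered page, the dump can simply be redirected to a file (a minimal sketch; the output filename is arbitrary):

 # render the page as plain text and save it locally
 elinks --dump http://www.baidu.com > baidu.txt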

2. wget - downloads the requested page to a local file

[root@el5-mq2 ~]# wget http://www.baidu.com
--2011-10-17 16:30:10--  http://www.baidu.com/
Resolving www.baidu.com... 119.75.218.45, 119.75.217.56
Connecting to www.baidu.com|119.75.218.45|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 8403 (8.2K) [text/html]
Saving to: `index.html'

100%[==========================================================================================>] 8,403       --.-K/s   in 0.01s  

2011-10-17 16:30:10 (648 KB/s) - `index.html' saved [8403/8403]
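
By default wget names the saved file after the remote page (index.html here). Its standard -O and -q flags choose a different filename and silence the progress output (a minimal sketch):

 # save the page under a chosen name, without progress output
 wget -q -O baidu.html http://www.baidu.com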

3. curl - prints the page source to standard output

curl http://www.baidu.com/index.html
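
Because curl prints to standard output, saving the page or checking only the response headers each take one extra flag (a minimal sketch using curl's standard -o and -I options):

 # save the page to a file instead of printing it
 curl -o index.html http://www.baidu.com/index.html

 # fetch only the HTTP response headers
 curl -I http://www.baidu.com/index.html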

4. lynx (I have seen this discussed in a group before but have not tried it myself; it has to be installed separately)

lynx http://www.baidu.com
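
Like elinks, lynx can render a page to plain text non-interactively with its -dump option (a minimal sketch):

 # print the rendered page as plain text and exit
 lynx -dump http://www.baidu.com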

 

# Kunlunxin XPU

## Requirements

- OS: Linux
- Python: 3.10
- XPU Model: P800
- XPU Driver Version: ≥ 5.0.21.10
- XPU Firmware Version: ≥ 1.31

Verified platform:

- CPU: INTEL(R) XEON(R) PLATINUM 8563C / Hygon C86-4G 7490 64-core Processor
- Memory: 2T
- Disk: 4T
- OS: CentOS release 7.6 (Final)
- Python: 3.10
- XPU Model: P800 (OAM Edition)
- XPU Driver Version: 5.0.21.10
- XPU Firmware Version: 1.31

**Note:** Currently, only INTEL or Hygon CPU-based P800 (OAM Edition) servers have been verified. Other CPU types and P800 (PCIe Edition) servers have not been tested yet.

## 1. Set up using Docker (Recommended)

```bash
mkdir Work
cd Work
docker pull ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlepaddle/fastdeploy-xpu:2.0.0
docker run --name fastdeploy-xpu --net=host -itd --privileged -v $PWD:/Work -w /Work \
    ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlepaddle/fastdeploy-xpu:2.0.0 \
    /bin/bash
docker exec -it fastdeploy-xpu /bin/bash
```

## 2. Set up using pre-built wheels

### Install PaddlePaddle

```bash
python -m pip install paddlepaddle-xpu==3.1.0 -i https://www.paddlepaddle.org.cn/packages/stable/xpu-p800/
```

Alternatively, you can install the latest version of PaddlePaddle (Not recommended)

```bash
python -m pip install --pre paddlepaddle-xpu -i https://www.paddlepaddle.org.cn/packages/nightly/xpu-p800/
```

### Install FastDeploy (**Do NOT install via PyPI source**)

```bash
python -m pip install fastdeploy-xpu==2.0.0 -i https://www.paddlepaddle.org.cn/packages/stable/fastdeploy-xpu-p800/ --extra-index-url https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
```

Alternatively, you can install the latest version of FastDeploy (Not recommended)

```bash
python -m pip install --pre fastdeploy-xpu -i https://www.paddlepaddle.org.cn/packages/stable/fastdeploy-xpu-p800/ --extra-index-url https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
```

## 3. Build wheel from source

### Install PaddlePaddle

```bash
python -m pip install paddlepaddle-xpu==3.1.0 -i https://www.paddlepaddle.org.cn/packages/stable/xpu-p800/
```

Alternatively, you can install the latest version of PaddlePaddle (Not recommended)

```bash
python -m pip install --pre paddlepaddle-xpu -i https://www.paddlepaddle.org.cn/packages/nightly/xpu-p800/
```

### Download Kunlunxin Toolkit (XTDK) and XVLLM library, then set their paths

```bash
# XTDK
wget https://klx-sdk-release-public.su.bcebos.com/xtdk_15fusion/dev/3.2.40.1/xtdk-llvm15-ubuntu2004_x86_64.tar.gz
tar -xvf xtdk-llvm15-ubuntu2004_x86_64.tar.gz && mv xtdk-llvm15-ubuntu2004_x86_64 xtdk
export CLANG_PATH=$(pwd)/xtdk

# XVLLM
wget https://klx-sdk-release-public.su.bcebos.com/xinfer/daily/eb/20250624/output.tar.gz
tar -xvf output.tar.gz && mv output xvllm
export XVLLM_PATH=$(pwd)/xvllm
```

Alternatively, you can download the latest versions of XTDK and XVLLM (Not recommended)

```bash
XTDK: https://klx-sdk-release-public.su.bcebos.com/xtdk_15fusion/dev/latest/xtdk-llvm15-ubuntu2004_x86_64.tar.gz
XVLLM: https://klx-sdk-release-public.su.bcebos.com/xinfer/daily/eb/latest/output.tar.gz
```

### Download FastDeploy source code, checkout the stable branch/TAG, then compile and install

```bash
git clone https://github.com/PaddlePaddle/FastDeploy
cd FastDeploy
bash build.sh
```

The compiled outputs will be located in the `FastDeploy/dist` directory.
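
Once the build finishes, the wheel under `FastDeploy/dist` can be installed directly with pip (a minimal sketch; the exact wheel filename depends on the version you built, so the wildcard pattern is an assumption):

```bash
# install the locally built wheel (filename pattern assumed)
python -m pip install FastDeploy/dist/fastdeploy_xpu-*.whl
```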
## Installation verification

```bash
python -c "import paddle; paddle.version.show()"
python -c "import paddle; paddle.utils.run_check()"
python -c "from paddle.jit.marker import unified"
python -c "from fastdeploy.model_executor.ops.xpu import block_attn"
```

If all the above steps execute successfully, FastDeploy is installed correctly.

## Quick start

The P800 supports the deployment of the `ERNIE-4.5-300B-A47B-Paddle` model using the following configurations (Note: Different configurations may result in variations in performance).

- 32K WINT4 with 8 XPUs (Recommended)
- 128K WINT4 with 8 XPUs
- 32K WINT4 with 4 XPUs

### Online serving (OpenAI API-Compatible server)

Deploy an OpenAI API-compatible server using FastDeploy with the following commands:

#### Start service

**Deploy the ERNIE-4.5-300B-A47B-Paddle model with WINT4 precision and 32K context length on 8 XPUs (Recommended)**

```bash
python -m fastdeploy.entrypoints.openai.api_server \
    --model baidu/ERNIE-4.5-300B-A47B-Paddle \
    --port 8188 \
    --tensor-parallel-size 8 \
    --max-model-len 32768 \
    --max-num-seqs 64 \
    --quantization "wint4" \
    --gpu-memory-utilization 0.9
```

**Deploy the ERNIE-4.5-300B-A47B-Paddle model with WINT4 precision and 128K context length on 8 XPUs**

```bash
python -m fastdeploy.entrypoints.openai.api_server \
    --model baidu/ERNIE-4.5-300B-A47B-Paddle \
    --port 8188 \
    --tensor-parallel-size 8 \
    --max-model-len 131072 \
    --max-num-seqs 64 \
    --quantization "wint4" \
    --gpu-memory-utilization 0.9
```

**Deploy the ERNIE-4.5-300B-A47B-Paddle model with WINT4 precision and 32K context length on 4 XPUs**

```bash
export XPU_VISIBLE_DEVICES="0,1,2,3"
python -m fastdeploy.entrypoints.openai.api_server \
    --model baidu/ERNIE-4.5-300B-A47B-Paddle \
    --port 8188 \
    --tensor-parallel-size 4 \
    --max-model-len 32768 \
    --max-num-seqs 64 \
    --quantization "wint4" \
    --gpu-memory-utilization 0.9
```

Refer to [Parameters](../../parameters.md) for more options.

#### Send requests

Send requests using either curl or Python:

```bash
curl -X POST "http://0.0.0.0:8188/v1/chat/completions" \
    -H "Content-Type: application/json" \
    -d '{
        "messages": [
            {"role": "user", "content": "Where is the capital of China?"}
        ]
    }'
```

```python
import openai

host = "0.0.0.0"
port = "8188"
client = openai.Client(base_url=f"http://{host}:{port}/v1", api_key="null")

response = client.completions.create(
    model="null",
    prompt="Where is the capital of China?",
    stream=True,
)
for chunk in response:
    print(chunk.choices[0].text, end='')
print('\n')

response = client.chat.completions.create(
    model="null",
    messages=[
        {"role": "user", "content": "Where is the capital of China?"},
    ],
    stream=True,
)
for chunk in response:
    if chunk.choices[0].delta:
        print(chunk.choices[0].delta.content, end='')
print('\n')
```

For detailed OpenAI protocol specifications, see [OpenAI Chat Completion API](https://platform.openai.com/docs/api-reference/chat/create). Differences from the standard OpenAI protocol are documented in [OpenAI Protocol-Compatible API Server](../../online_serving/README.md).
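
Since the server speaks the OpenAI chat protocol, a streamed response can also be requested over plain curl by adding the standard "stream" field to the request body (a minimal sketch, assuming the server above is already running on port 8188):

```bash
# request a streamed chat completion; chunks arrive as server-sent events
curl -X POST "http://0.0.0.0:8188/v1/chat/completions" \
    -H "Content-Type: application/json" \
    -d '{
        "messages": [
            {"role": "user", "content": "Where is the capital of China?"}
        ],
        "stream": true
    }'
```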