Oracle data import fails with "PLS-00302: component 'SET_NO_OUTLINES' must be declared"

This post describes a problem hit when moving data between different Oracle releases: a dump exported with a higher-version tool fails to import into a lower-version database with an "unrecognized component" error. Re-exporting with an export tool matching the target database's version resolved the problem.

Today, while importing a dump exported on my local machine into another Oracle database, I got the following error:

```
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 41:
PLS-00302: component 'SET_NO_OUTLINES' must be declared
ORA-06550: line 1, column 15:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully
```

I assumed I had mistyped the command, so I retyped it by hand; it still failed.
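When the command itself looks right, the next thing worth checking is whether the client and server releases match. A minimal sketch; the connect string `scott/tiger@orcl` is a placeholder for your own credentials and TNS alias:

```shell
# Print the exp client's release banner (the first lines of its help output):
exp help=y 2>&1 | head -n 3

# Ask the server for its own version string:
sqlplus -s scott/tiger@orcl <<'EOF'
select banner from v$version;
exit
EOF
```

If the client banner shows a higher release than the server's `v$version` banner, that mismatch is the likely culprit.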

A web search turned up the cause: a version mismatch. The dmp file had been exported with an Oracle 10.0.2 client, but the target database was 10.0.1, and even that one-step difference is enough to break the import. After re-exporting with a 10.0.1 client and then importing again, everything worked.
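The fix, sketched as commands. User, password, TNS aliases, and file names below are placeholders; the essential point is that the `exp` binary must come from a client install matching the 10.0.1 target:

```shell
# Re-export using the exp binary from the 10.0.1 client install:
exp scott/tiger@source_db file=mydb.dmp full=y log=exp_mydb.log

# Then import the resulting dump into the 10.0.1 target:
imp scott/tiger@target_db file=mydb.dmp full=y log=imp_mydb.log
```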


In short: a dump exported by a higher-version tool cannot be used on a lower-version database, and there is no fix on the import side. The export has to be redone with a client that matches (or is older than) the target.
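One caveat: that is true of the classic exp/imp tools, but when both sides are 10g or later, Data Pump can write a dump readable by an older release via its VERSION parameter. A hedged sketch; credentials, TNS aliases, and the directory object are placeholders:

```shell
# Export from the newer database in a format the older release can read
# (set version= to the older side's release):
expdp scott/tiger@source_db directory=DATA_PUMP_DIR \
      dumpfile=mydb.dmp logfile=exp_mydb.log full=y version=10.1

# Import on the older side with its own impdp:
impdp scott/tiger@target_db directory=DATA_PUMP_DIR \
      dumpfile=mydb.dmp logfile=imp_mydb.log full=y
```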


opentelemetry-sdk<1.27.0,>=1.26.0 (from vllm->-r requirements_core.txt (line 10)) Using cached opentelemetry_sdk-1.26.0-py3-none-any.whl.metadata (1.5 kB) Collecting opentelemetry-api<1.27.0,>=1.26.0 (from vllm->-r requirements_core.txt (line 10)) Using cached opentelemetry_api-1.26.0-py3-none-any.whl.metadata (1.4 kB) Collecting opentelemetry-exporter-otlp<1.27.0,>=1.26.0 (from vllm->-r requirements_core.txt (line 10)) Using cached opentelemetry_exporter_otlp-1.26.0-py3-none-any.whl.metadata (2.3 kB) Collecting vllm (from -r requirements_core.txt (line 10)) Using cached vllm-0.8.5-cp38-abi3-manylinux1_x86_64.whl.metadata (14 kB) INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C. Using cached vllm-0.8.4-cp38-abi3-manylinux1_x86_64.whl.metadata (27 kB) Using cached vllm-0.8.3-cp38-abi3-manylinux1_x86_64.whl.metadata (27 kB) Collecting xgrammar==0.1.17 (from vllm->-r requirements_core.txt (line 10)) Using cached xgrammar-0.1.17-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.5 kB) Collecting gguf==0.10.0 (from vllm->-r requirements_core.txt (line 10)) Using cached gguf-0.10.0-py3-none-any.whl.metadata (3.5 kB) Collecting compressed-tensors==0.9.2 (from vllm->-r requirements_core.txt (line 10)) Using cached compressed_tensors-0.9.2-py3-none-any.whl.metadata (7.0 kB) Collecting numba==0.61 (from vllm->-r requirements_core.txt (line 10)) Using cached numba-0.61.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (2.8 kB) Collecting vllm (from -r requirements_core.txt (line 10)) Using cached vllm-0.8.2-cp38-abi3-manylinux1_x86_64.whl.metadata (27 kB) Collecting numpy (from -r requirements_core.txt (line 15)) Using cached numpy-1.26.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (61 kB) Collecting xgrammar==0.1.16 (from 
vllm->-r requirements_core.txt (line 10)) Using cached xgrammar-0.1.16-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.3 kB) Collecting numba==0.60.0 (from vllm->-r requirements_core.txt (line 10)) Using cached numba-0.60.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (2.7 kB) Collecting vllm (from -r requirements_core.txt (line 10)) Using cached vllm-0.8.1-cp38-abi3-manylinux1_x86_64.whl.metadata (26 kB) Using cached vllm-0.8.0-cp38-abi3-manylinux1_x86_64.whl.metadata (26 kB) Using cached vllm-0.7.3-cp38-abi3-manylinux1_x86_64.whl.metadata (25 kB) Collecting xgrammar==0.1.11 (from vllm->-r requirements_core.txt (line 10)) Using cached xgrammar-0.1.11-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.metadata (2.0 kB) Collecting compressed-tensors==0.9.1 (from vllm->-r requirements_core.txt (line 10)) Using cached compressed_tensors-0.9.1-py3-none-any.whl.metadata (6.8 kB) Collecting ray==2.40.0 (from ray[adag]==2.40.0->vllm->-r requirements_core.txt (line 10)) Using cached ray-2.40.0-cp312-cp312-manylinux2014_x86_64.whl.metadata (17 kB) Collecting vllm (from -r requirements_core.txt (line 10)) Using cached vllm-0.7.2-cp38-abi3-manylinux1_x86_64.whl.metadata (12 kB) Collecting uvicorn[standard] (from vllm->-r requirements_core.txt (line 10)) Using cached uvicorn-0.38.0-py3-none-any.whl.metadata (6.8 kB) Collecting xgrammar>=0.1.6 (from vllm->-r requirements_core.txt (line 10)) Using cached xgrammar-0.1.27-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.8 kB) Collecting nvidia-ml-py>=12.560.30 (from vllm->-r requirements_core.txt (line 10)) Using cached nvidia_ml_py-13.580.82-py3-none-any.whl.metadata (9.6 kB) Collecting vllm (from -r requirements_core.txt (line 10)) Using cached vllm-0.7.1-cp38-abi3-manylinux1_x86_64.whl.metadata (12 kB) Collecting compressed-tensors==0.9.0 (from vllm->-r requirements_core.txt (line 10)) Using cached compressed_tensors-0.9.0-py3-none-any.whl.metadata 
(6.8 kB) Collecting vllm (from -r requirements_core.txt (line 10)) Using cached vllm-0.7.0-cp38-abi3-manylinux1_x86_64.whl.metadata (12 kB) Using cached vllm-0.6.6.post1-cp38-abi3-manylinux1_x86_64.whl.metadata (11 kB) Collecting compressed-tensors==0.8.1 (from vllm->-r requirements_core.txt (line 10)) Using cached compressed_tensors-0.8.1-py3-none-any.whl.metadata (6.8 kB) Collecting vllm (from -r requirements_core.txt (line 10)) Using cached vllm-0.6.6-cp38-abi3-manylinux1_x86_64.whl.metadata (11 kB) Using cached vllm-0.6.5-cp38-abi3-manylinux1_x86_64.whl.metadata (11 kB) Using cached vllm-0.6.4.post1-cp38-abi3-manylinux1_x86_64.whl.metadata (10 kB) Collecting outlines<0.1,>=0.0.43 (from vllm->-r requirements_core.txt (line 10)) Using cached outlines-0.0.46-py3-none-any.whl.metadata (15 kB) Collecting compressed-tensors==0.8.0 (from vllm->-r requirements_core.txt (line 10)) Using cached compressed_tensors-0.8.0-py3-none-any.whl.metadata (6.8 kB) Collecting vllm (from -r requirements_core.txt (line 10)) Using cached vllm-0.6.4-cp38-abi3-manylinux1_x86_64.whl.metadata (10 kB) Collecting lm-format-enforcer==0.10.6 (from vllm->-r requirements_core.txt (line 10)) Using cached lm_format_enforcer-0.10.6-py3-none-any.whl.metadata (16 kB) Collecting vllm (from -r requirements_core.txt (line 10)) Using cached vllm-0.6.3.post1-cp38-abi3-manylinux1_x86_64.whl.metadata (10 kB) Collecting compressed-tensors==0.6.0 (from vllm->-r requirements_core.txt (line 10)) Using cached compressed_tensors-0.6.0-py3-none-any.whl.metadata (6.8 kB) Collecting vllm (from -r requirements_core.txt (line 10)) Using cached vllm-0.6.3-cp38-abi3-manylinux1_x86_64.whl.metadata (10 kB) Using cached vllm-0.6.2-cp38-abi3-manylinux1_x86_64.whl.metadata (2.4 kB) Using cached vllm-0.6.1.post2-cp38-abi3-manylinux1_x86_64.whl.metadata (2.4 kB) Collecting gguf==0.9.1 (from vllm->-r requirements_core.txt (line 10)) Using cached gguf-0.9.1-py3-none-any.whl.metadata (3.3 kB) Collecting vllm (from -r 
requirements_core.txt (line 10)) Using cached vllm-0.6.1.post1-cp38-abi3-manylinux1_x86_64.whl.metadata (2.3 kB) Using cached vllm-0.6.1-cp38-abi3-manylinux1_x86_64.whl.metadata (2.3 kB) Using cached vllm-0.6.0-cp38-abi3-manylinux1_x86_64.whl.metadata (2.2 kB) Using cached vllm-0.5.5-cp38-abi3-manylinux1_x86_64.whl.metadata (2.0 kB) Collecting librosa (from vllm->-r requirements_core.txt (line 10)) Using cached librosa-0.11.0-py3-none-any.whl.metadata (8.7 kB) Collecting soundfile (from vllm->-r requirements_core.txt (line 10)) Using cached soundfile-0.13.1-py2.py3-none-manylinux_2_28_x86_64.whl.metadata (16 kB) Collecting vllm (from -r requirements_core.txt (line 10)) Using cached vllm-0.5.4-cp38-abi3-manylinux1_x86_64.whl.metadata (1.8 kB) Collecting cmake>=3.21 (from vllm->-r requirements_core.txt (line 10)) Using cached cmake-4.2.0-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (6.5 kB) Collecting lm-format-enforcer==0.10.3 (from vllm->-r requirements_core.txt (line 10)) Using cached lm_format_enforcer-0.10.3-py3-none-any.whl.metadata (16 kB) Collecting vllm (from -r requirements_core.txt (line 10)) Using cached vllm-0.5.3.post1-cp38-abi3-manylinux1_x86_64.whl.metadata (1.8 kB) Using cached vllm-0.5.3-cp38-abi3-manylinux1_x86_64.whl.metadata (1.8 kB) Using cached vllm-0.5.2-cp38-abi3-manylinux1_x86_64.whl.metadata (1.8 kB) Using cached vllm-0.5.1.tar.gz (790 kB) Installing build dependencies ... |
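A quick way to confirm this kind of version mismatch before attempting the import: classic `exp` dump files begin with an ASCII header that names the exporting client version (e.g. `EXPORT:V10.02.01`). The sketch below reads that header; the exact layout is an assumption and may differ across releases, and Data Pump (`expdp`) files use a different format entirely.

```python
# Read the leading header of a classic exp .dmp file to see which
# Oracle client version produced it.
# Assumption: traditional exp dumps start with an ASCII line such as
# "EXPORT:V10.02.01" -- layout may vary by release; expdp files differ.

def dmp_export_version(path):
    """Return the version string from an exp dump header, or None."""
    with open(path, "rb") as f:
        header = f.read(64)
    if not header.startswith(b"EXPORT:V"):
        return None  # not a classic exp dump (or a Data Pump file)
    # The version follows "EXPORT:V" and runs to the end of the first line.
    first_line = header.split(b"\n", 1)[0]
    return first_line[len(b"EXPORT:V"):].decode("ascii", "replace")

# Usage (hypothetical path): dmp_export_version("mydb.dmp") -> "10.02.01"
```

If the reported version is newer than the target database, re-export with a client that matches the target, as described above.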