Android Multi-Core Enable/Disable

This article explains how hot-pluggable CPU support is enabled on Android devices through a kernel configuration option, and shows how to use root privileges to run commands that enable or disable additional processor cores and adjust the CPU frequency governor.
Support for hot-pluggable CPUs is enabled through a kernel configuration option.


The relevant config option is:
CONFIG_HOTPLUG_CPU=y
or, when the feature is disabled:
# CONFIG_HOTPLUG_CPU is not set
Check your device's kernel config to confirm which one is set.
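
If the kernel was built with CONFIG_IKCONFIG_PROC, the running config is exposed at /proc/config.gz, so you can check directly on the device (a quick sketch; not every kernel exports this file):

adb shell "zcat /proc/config.gz | grep HOTPLUG_CPU"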

If your device is rooted, you can run the commands below to bring a second core online and switch both cores to the performance governor.
# stop Qualcomm's mpdecision daemon so it doesn't take the core offline again
adb shell stop mpdecision
# bring cpu1 online
adb shell "echo 1 > /sys/devices/system/cpu/cpu1/online"
# pin both cores to their maximum frequency via the performance governor
adb shell "echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"
adb shell "echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor"