The docker run command is as follows:
docker run -itd --shm-size=200g --name MiniCPM-V-4_5-int4 --gpus='"device=0"' \
  -p 30293:8000 -v /data/model:/data/model \
  vllm/vllm-openai:v0.10.1.1 \
  --model /data/model --dtype=half --trust_remote_code \
  --tensor-parallel-size 1 --served-model-name minicpm-int4
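Once the container is up, the service answers on host port 30293 (mapped to the container's port 8000) through vLLM's OpenAI-compatible API. The smoke test below is a sketch: the model name and port come from the command above, while the prompt text and image URL are placeholder assumptions.

# list the served models
curl http://localhost:30293/v1/models

# send a multimodal chat request
curl http://localhost:30293/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "minicpm-int4",
        "messages": [{
          "role": "user",
          "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": "https://example.com/test.jpg"}}
          ]
        }]
      }'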
Model weights:
MiniCPM-V-4_5-int4 · Model Library (ModelScope)
https://modelscope.cn/models/OpenBMB/MiniCPM-V-4_5-int4
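The weights can be downloaded into the /data/model directory that the container mounts. One way is the ModelScope command-line tool; this is a sketch rather than the author's exact procedure, and the flags assume a recent modelscope release:

pip install modelscope
# download the int4 weights into the directory mounted as /data/model above
modelscope download --model OpenBMB/MiniCPM-V-4_5-int4 --local_dir /data/model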
vLLM image version:
vllm/vllm-openai - Docker Image | Docker Hub
https://hub.docker.com/r/vllm/vllm-openai/tags
docker pull vllm/vllm-openai:v0.10.1.1
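Before launching the server, it can help to confirm that the container runtime actually exposes GPU 0 to this image. The image's entrypoint is the vLLM API server, so override it for a one-off check; this assumes the NVIDIA Container Toolkit is installed on the host:

# run nvidia-smi inside the same image, with the same GPU selection used above
docker run --rm --gpus='"device=0"' --entrypoint nvidia-smi vllm/vllm-openai:v0.10.1.1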
Error log from the model startup:
INFO 08-31 22:44:57 [__init__.py:241] Automatically detected platform cuda.
(APIServer pid=1) INFO 08-31 22:44:59 [api_server.py:1805] vLLM API server version 0.10.1.1
(APIServer pid=1) INFO 08-31 22:44:59 [utils.py:326] non-default args: {'model': '/data/model', 'trust_remote_code': True, 'dtype': 'half', 'served_model_name': ['minicpm-int4']}
(APIServer pid=1) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
(APIServer pid=1) INFO 08-31 22:45:08 [__init__.py:711] Resolved architecture: MiniCPMV
(APIServer pid=1) WARNING 08-31 22:45:08 [__init__.py:2819] Casting torch.bfloat16 to torch.float16.
(APIServer pid=1) INFO 08-31 22:45:08 [__init__.py:1750] Using max model len 40960
(APIServer pid=1) WARNING 08-31 22:45:09 [__init__.py:1171] bitsandbytes quantization is not fully optimized yet. The speed can be slower than non-quantized models.
(APIServer pid=1) INFO 08-31 22:45:09 [scheduler.py:222] Chunked prefill is enabled with max_num_batched_tokens=2048.
INFO 08-31 22:45:15 [__init__.py:241] Automatically detected platform cuda.
(EngineCore_0 pid=39) INFO 08-31 22:45:17 [core.py:636] Waiting for init message from front-end.
(EngineCore_0 pid=39) INFO 08-31 22:45:17 [core.py:74] Initializing a V1 LLM engine (v0.10.1.1) with config: model='/data/model', speculative_config=None, tokenizer='/data/model', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=True, dtype=torch.float16, max_seq_len=40960, download_dir=None, load_format=bitsandbytes, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=bitsandbytes, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=minicpm-int4, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, pooler_config=None, compilation_config={"level":3,"debug_d
Analysis of the vLLM serving failure
