Multisig & P2SH

This article introduces Bitcoin's more complex transaction types to help explain what a "programmable cryptocurrency" is. Paying to a multisignature script requires customers to use specialized wallets that generate transactions containing complex scripts, and it forces the payer to pay a higher transaction fee. Pay to Script Hash (P2SH) spares the payer from knowing the details of the locking script and makes paying to complex scripts simple. The article also covers which standard transaction types the network relays by default.

Reprinted

Understanding complex transaction types will help you better understand what a "programmable cryptocurrency" is.

Pay to Multisig

If you have read the description of the OP_CHECKMULTISIG opcode, you will have noticed that a transaction can lock a UTXO to N public keys, requiring M of those keys to sign before the UTXO can be spent.

If a company receives payments with multisig, it has to send the script contents to all of its customers in advance. It also requires those customers to use specialized "wallet" software that can generate transactions containing this complex script in order to pay. On top of that, the locking script itself is very long.

Most Bitcoin transactions include a transaction fee, an incentive and compensation for the people who keep the Bitcoin network secure.

For now you only need to know that the fee a transaction should pay depends solely on the size of the serialized transaction data, not on the value the transaction transfers.

So old-style multisig also forces the payer of the transaction to pay a higher fee.
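To get a feel for the size difference, here is a minimal Python sketch; the fee rate is an assumed number, and the script sizes are the typical lengths of a P2PKH output script and of a bare 2-of-3 multisig output script built from 33-byte compressed public keys.

# A minimal sketch with assumed numbers: the fee depends only on serialized size,
# so a longer locking script means a higher fee for the payer.

feerate_sat_per_byte = 20                # assumed fee rate, in satoshi per byte

locking_script_sizes = {
    "P2PKH": 25,                         # OP_DUP OP_HASH160 <20-byte hash> OP_EQUALVERIFY OP_CHECKSIG
    "bare 2-of-3 multisig": 105,         # OP_2 <33-byte key> <33-byte key> <33-byte key> OP_3 OP_CHECKMULTISIG
}

for name, size in locking_script_sizes.items():
    print(f"{name}: locking script is {size} bytes, "
          f"adding roughly {size * feerate_sat_per_byte} satoshi to the fee")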

Pay to Script Hash

Complex payment scripts are powerful, but inconvenient to use, because the payer needs to know every detail of the locking script (which is defined by the payee).

To avoid this problem, BIP-16 introduced a new transaction type that lets us lock a UTXO to the hash (a data fingerprint) of a script.

If what the locking script commits to is the hash of a script, in the form

OP_HASH160 [script hash] OP_EQUAL

then we call this transaction a Pay to Script Hash (P2SH) transaction.

P2SH means "pay to the script that matches this hash". That script is called the Redeem Script, and its contents are presented later, when this UTXO is spent.

When a transaction attempting to spend the UTXO is presented later, it must contain the script that matches the hash, in addition to the unlocking script. In simple terms, P2SH means “pay to a script matching this hash, a script that will be presented later when this output is spent.”

The unlocking script that must be provided when spending looks like this:

[param 1] [param 2] ... [param X] [redeem script]

Validation happens in two steps. First, the hash of the redeem script is computed and compared with the script hash in the locking script:

[redeem script] OP_HASH160 [script hash] OP_EQUAL

If they match, the redeem script is then deserialized and executed with the parameters from the unlocking script:

[param 1] [param 2] ... [param X] [redeem script]

This may be a little hard to follow, so let's walk through a real transaction.


Transaction d957651a876addc3a4e836c0f55d3e288230c9622f7062a9c1d963480768726e is a P2SH transaction with 2 outputs.


The first output is locked to a script hash, whose value is 8bd55244e4f86fb631e908f8cd9d9084e6744ad1.

This UTXO is spent in transaction 536749e6a0cb146287ec1ceffe50a65c3760d794aacb40367239cb3f332c6ba5, as the first input of that transaction.


Decoding the unlocking script it provides gives:

0
3045022100cd2eff6b93874c822c5496a2fd660f3f0a09e8dc40e504b14a5fbd38bcfff4db02205df3ad1e6a28e762b471012ee0b1e067cb39bbe4a3494c1144b157ccef25bc71[ALL]
3045022100ea4631ed3e9ae30f4faaa17b396398b30959bd119558349e4aa40ecb75856c0e0220684be10d059339533d225cd21e75079dc771e10ebe8f0358db3ec18763e34f22[ALL]
522102429adbb84a4a0f14b31c14f4927418207bcef7f70eb97b1caed49160733bff6921026ce3c7280d473b7a9eab8fe76219687deb646c1619ad18902d19dc3148e7f8ae2103e051dd3573daa05964487c93fe5a5b37b76fe94729c8c2b372845f5d85e0722c53ae

It has the following form:

0 [signature 1] [signature 2] [redeem script]

The redeem script is

522102429adbb84a4a0f14b31c14f4927418207bcef7f70eb97b1caed49160733bff6921026ce3c7280d473b7a9eab8fe76219687deb646c1619ad18902d19dc3148e7f8ae2103e051dd3573daa05964487c93fe5a5b37b76fe94729c8c2b372845f5d85e0722c53ae

You can use a hashing tool to compute the HASH160 of this redeem script and check whether it equals 8bd55244e4f86fb631e908f8cd9d9084e6744ad1.
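For example, here is a minimal Python sketch of that check, assuming your hashlib/OpenSSL build provides RIPEMD-160:

import hashlib

# The redeem script shown above, hex-encoded.
redeem_script_hex = (
    "522102429adbb84a4a0f14b31c14f4927418207bcef7f70eb97b1caed49160733bff69"
    "21026ce3c7280d473b7a9eab8fe76219687deb646c1619ad18902d19dc3148e7f8ae"
    "2103e051dd3573daa05964487c93fe5a5b37b76fe94729c8c2b372845f5d85e0722c"
    "53ae"
)

def hash160(data: bytes) -> bytes:
    # HASH160 = RIPEMD-160(SHA-256(data)), the hash Bitcoin uses for P2SH and P2PKH.
    return hashlib.new("ripemd160", hashlib.sha256(data).digest()).digest()

print(hash160(bytes.fromhex(redeem_script_hex)).hex())
# should print 8bd55244e4f86fb631e908f8cd9d9084e6744ad1, the script hash from the locking script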

Consulting an opcode reference, we can translate the contents of the redeem script.

52 <== OP_2 (the number 2)
21 <== the next 0x21 (33) bytes are data
02429adbb84a4a0f14b31c14f4927418207bcef7f70eb97b1caed49160733bff69 <== 33 bytes of data
21 <== the next 0x21 (33) bytes are data
026ce3c7280d473b7a9eab8fe76219687deb646c1619ad18902d19dc3148e7f8ae <== 33 bytes of data
21 <== the next 0x21 (33) bytes are data
03e051dd3573daa05964487c93fe5a5b37b76fe94729c8c2b372845f5d85e0722c <== 33 bytes of data
53 <== OP_3 (the number 3)
ae <== OP_CHECKMULTISIG
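The same translation can be automated. Below is a minimal Python sketch that decodes only the opcodes appearing in this redeem script (direct data pushes, OP_1 through OP_16, and OP_CHECKMULTISIG); it is not a general Script parser.

# Decode the redeem script byte by byte, handling only the opcodes it uses.
redeem_script = bytes.fromhex(
    "522102429adbb84a4a0f14b31c14f4927418207bcef7f70eb97b1caed49160733bff69"
    "21026ce3c7280d473b7a9eab8fe76219687deb646c1619ad18902d19dc3148e7f8ae"
    "2103e051dd3573daa05964487c93fe5a5b37b76fe94729c8c2b372845f5d85e0722c"
    "53ae"
)

i = 0
while i < len(redeem_script):
    op = redeem_script[i]
    i += 1
    if 0x01 <= op <= 0x4b:          # opcodes 0x01-0x4b push that many bytes of data
        data = redeem_script[i:i + op]
        i += op
        print(f"push {op} bytes: {data.hex()}")
    elif 0x51 <= op <= 0x60:        # OP_1 .. OP_16 push the numbers 1..16
        print(f"OP_{op - 0x50}")
    elif op == 0xae:
        print("OP_CHECKMULTISIG")
    else:
        print(f"unhandled opcode 0x{op:02x}")
        break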

Does it look familiar? This redeem script is exactly

2 [public key 1] [public key 2] [public key 3] 3 OP_CHECKMULTISIG

So the decoded unlocking script is

 0 [signature 1] [signature 2]  2 [public key 1] [public key 2] [public key 3] 3 OP_CHECKMULTISIG

|<------- parameters ------->|<-------------------- redeem script --------------------->|

As you can see,

A P2SH transaction locks the output to this hash instead of the longer redeem script

In other words, P2SH locks the output to the script hash instead of to the much longer concrete script.

Instead of “pay to this 5-key multisignature script,” the P2SH equivalent transaction is “pay to a script with this hash.”

To implement multisig with P2SH, the payer only needs to be told the hash of the redeem script: "pay to this specific N-key multisignature script" is replaced by the equivalent "pay to a script with this hash".

P2SH makes paying to a complex script as simple as P2PKH.

  • The payer is given a script hash, just as P2PKH gives the payer a public key hash
  • The contents of the redeem script move from the locking script to the unlocking script, so the extra transaction fee also shifts from the sender to the recipient

Most P2SH transactions on the network today are multisig transactions, but P2SH opens up much broader possibilities: you can let your imagination run free in a P2SH redeem script.

One more thing

P2PK, P2PKH, old-style multisig, and P2SH are all standard transaction types supported by the network.

Note that, for security reasons, the Bitcoin network by default only relays these standard transactions.

Note that there is a small number of standard script forms that are relayed from node to node; non-standard scripts are accepted if they are in a block, but nodes will not relay them.

Before experimenting with the endless possibilities of transaction scripts, you should first read the policy part of the source code.

&quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py&quot;, line 328, in &lt;lambda&gt; (VllmWorkerProcess pid=348) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] lambda prefix: layer_type(config=config, (VllmWorkerProcess pid=348) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=348) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py&quot;, line 254, in __init__ (VllmWorkerProcess pid=348) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] self.mlp = LlamaMLP( (VllmWorkerProcess pid=348) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] ^^^^^^^^^ (VllmWorkerProcess pid=348) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py&quot;, line 70, in __init__ (VllmWorkerProcess pid=348) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] self.gate_up_proj = MergedColumnParallelLinear( (VllmWorkerProcess pid=348) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=348) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/linear.py&quot;, line 441, in __init__ (VllmWorkerProcess pid=348) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] super().__init__(input_size=input_size, (VllmWorkerProcess pid=348) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/linear.py&quot;, line 314, in __init__ (VllmWorkerProcess pid=348) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] self.quant_method.create_weights( (VllmWorkerProcess pid=348) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/linear.py&quot;, line 129, in create_weights (VllmWorkerProcess pid=348) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] weight = Parameter(torch.empty(sum(output_partition_sizes), (VllmWorkerProcess pid=348) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=348) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/torch/utils/_device.py&quot;, line 106, in __torch_function__ (VllmWorkerProcess pid=348) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] return func(*args, **kwargs) (VllmWorkerProcess pid=348) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=348) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 224.00 MiB. GPU 1 has a total capacity of 23.64 GiB of which 164.81 MiB is free. Process 25377 has 23.47 GiB memory in use. Of the allocated memory 22.91 GiB is allocated by PyTorch, and 24.14 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] Exception in worker VllmWorkerProcess while processing method load_model. 
(VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] Traceback (most recent call last): (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_worker_utils.py&quot;, line 236, in _run_worker_process (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] output = run_method(worker, method, args, kwargs) (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/utils.py&quot;, line 2196, in run_method (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] return func(*args, **kwargs) (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py&quot;, line 183, in load_model (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] self.model_runner.load_model() (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py&quot;, line 1112, in load_model (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] self.model = get_model(vllm_config=self.vllm_config) (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/__init__.py&quot;, line 14, in get_model (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] return loader.load_model(vllm_config=vllm_config) (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/loader.py&quot;, line 406, in load_model (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] model = _initialize_model(vllm_config=vllm_config) (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/loader.py&quot;, line 125, in _initialize_model (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] return model_class(vllm_config=vllm_config, prefix=prefix) (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py&quot;, line 496, in __init__ (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] self.model = self._init_model(vllm_config=vllm_config, (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py&quot;, line 533, in _init_model (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] return LlamaModel(vllm_config=vllm_config, prefix=prefix) (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/compilation/decorators.py&quot;, line 151, in __init__ (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs) (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py&quot;, line 326, in __init__ (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] self.start_layer, self.end_layer, self.layers = make_layers( (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] ^^^^^^^^^^^^ (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/utils.py&quot;, line 558, in make_layers (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] maybe_offload_to_cpu(layer_fn(prefix=f&quot;{prefix}.{idx}&quot;)) (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py&quot;, line 328, in &lt;lambda&gt; (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] lambda prefix: layer_type(config=config, (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py&quot;, line 254, in __init__ (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] self.mlp = LlamaMLP( (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] ^^^^^^^^^ (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py&quot;, line 70, in __init__ (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] self.gate_up_proj = MergedColumnParallelLinear( (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/linear.py&quot;, line 441, in __init__ (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] super().__init__(input_size=input_size, (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/linear.py&quot;, line 314, in __init__ (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 
multiproc_worker_utils.py:242] self.quant_method.create_weights( (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/linear.py&quot;, line 129, in create_weights (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] weight = Parameter(torch.empty(sum(output_partition_sizes), (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] File &quot;/usr/local/lib/python3.12/dist-packages/torch/utils/_device.py&quot;, line 106, in __torch_function__ (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] return func(*args, **kwargs) (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=349) ERROR 09-12 08:36:30 multiproc_worker_utils.py:242] torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 224.00 MiB. GPU 2 has a total capacity of 23.64 GiB of which 164.81 MiB is free. Process 25378 has 23.47 GiB memory in use. Of the allocated memory 22.91 GiB is allocated by PyTorch, and 24.14 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) ERROR 09-12 08:36:30 engine.py:400] CUDA out of memory. Tried to allocate 224.00 MiB. GPU 0 has a total capacity of 23.64 GiB of which 164.81 MiB is free. Process 25094 has 23.47 GiB memory in use. Of the allocated memory 22.91 GiB is allocated by PyTorch, and 24.14 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) ERROR 09-12 08:36:30 engine.py:400] Traceback (most recent call last): ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py&quot;, line 391, in run_mp_engine ERROR 09-12 08:36:30 engine.py:400] engine = MQLLMEngine.from_engine_args(engine_args=engine_args, ERROR 09-12 08:36:30 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py&quot;, line 124, in from_engine_args ERROR 09-12 08:36:30 engine.py:400] return cls(ipc_path=ipc_path, ERROR 09-12 08:36:30 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^ ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py&quot;, line 76, in __init__ ERROR 09-12 08:36:30 engine.py:400] self.engine = LLMEngine(*args, **kwargs) ERROR 09-12 08:36:30 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py&quot;, line 273, in __init__ ERROR 09-12 08:36:30 engine.py:400] self.model_executor = executor_class(vllm_config=vllm_config, ) ERROR 09-12 08:36:30 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/executor/executor_base.py&quot;, line 271, in __init__ ERROR 09-12 08:36:30 engine.py:400] super().__init__(*args, **kwargs) ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/executor/executor_base.py&quot;, line 52, in __init__ ERROR 09-12 08:36:30 engine.py:400] self._init_executor() ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/executor/mp_distributed_executor.py&quot;, line 125, in _init_executor ERROR 09-12 08:36:30 engine.py:400] self._run_workers(&quot;load_model&quot;, ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/executor/mp_distributed_executor.py&quot;, line 185, in _run_workers ERROR 09-12 08:36:30 engine.py:400] driver_worker_output = run_method(self.driver_worker, sent_method, ERROR 09-12 08:36:30 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/utils.py&quot;, line 2196, in run_method ERROR 09-12 08:36:30 engine.py:400] return func(*args, **kwargs) ERROR 09-12 08:36:30 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^ ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py&quot;, line 183, in load_model ERROR 09-12 08:36:30 engine.py:400] self.model_runner.load_model() ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py&quot;, line 1112, in load_model ERROR 09-12 08:36:30 engine.py:400] self.model = get_model(vllm_config=self.vllm_config) ERROR 09-12 08:36:30 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/__init__.py&quot;, line 14, in get_model ERROR 09-12 08:36:30 engine.py:400] return loader.load_model(vllm_config=vllm_config) ERROR 09-12 08:36:30 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 09-12 
08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/loader.py&quot;, line 406, in load_model ERROR 09-12 08:36:30 engine.py:400] model = _initialize_model(vllm_config=vllm_config) ERROR 09-12 08:36:30 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/loader.py&quot;, line 125, in _initialize_model ERROR 09-12 08:36:30 engine.py:400] return model_class(vllm_config=vllm_config, prefix=prefix) ERROR 09-12 08:36:30 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py&quot;, line 496, in __init__ ERROR 09-12 08:36:30 engine.py:400] self.model = self._init_model(vllm_config=vllm_config, ERROR 09-12 08:36:30 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py&quot;, line 533, in _init_model ERROR 09-12 08:36:30 engine.py:400] return LlamaModel(vllm_config=vllm_config, prefix=prefix) ERROR 09-12 08:36:30 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/compilation/decorators.py&quot;, line 151, in __init__ ERROR 09-12 08:36:30 engine.py:400] old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs) ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py&quot;, line 326, in __init__ ERROR 09-12 08:36:30 engine.py:400] self.start_layer, self.end_layer, self.layers = make_layers( ERROR 09-12 08:36:30 engine.py:400] ^^^^^^^^^^^^ ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/utils.py&quot;, line 558, in make_layers ERROR 09-12 08:36:30 engine.py:400] maybe_offload_to_cpu(layer_fn(prefix=f&quot;{prefix}.{idx}&quot;)) ERROR 09-12 08:36:30 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py&quot;, line 328, in &lt;lambda&gt; ERROR 09-12 08:36:30 engine.py:400] lambda prefix: layer_type(config=config, ERROR 09-12 08:36:30 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py&quot;, line 254, in __init__ ERROR 09-12 08:36:30 engine.py:400] self.mlp = LlamaMLP( ERROR 09-12 08:36:30 engine.py:400] ^^^^^^^^^ ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py&quot;, line 70, in __init__ ERROR 09-12 08:36:30 engine.py:400] self.gate_up_proj = MergedColumnParallelLinear( ERROR 09-12 08:36:30 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/linear.py&quot;, line 441, in __init__ ERROR 09-12 08:36:30 engine.py:400] super().__init__(input_size=input_size, ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/linear.py&quot;, line 314, in __init__ ERROR 09-12 08:36:30 engine.py:400] self.quant_method.create_weights( ERROR 09-12 08:36:30 engine.py:400] File 
&quot;/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/linear.py&quot;, line 129, in create_weights ERROR 09-12 08:36:30 engine.py:400] weight = Parameter(torch.empty(sum(output_partition_sizes), ERROR 09-12 08:36:30 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 09-12 08:36:30 engine.py:400] File &quot;/usr/local/lib/python3.12/dist-packages/torch/utils/_device.py&quot;, line 106, in __torch_function__ ERROR 09-12 08:36:30 engine.py:400] return func(*args, **kwargs) ERROR 09-12 08:36:30 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^ ERROR 09-12 08:36:30 engine.py:400] torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 224.00 MiB. GPU 0 has a total capacity of 23.64 GiB of which 164.81 MiB is free. Process 25094 has 23.47 GiB memory in use. Of the allocated memory 22.91 GiB is allocated by PyTorch, and 24.14 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) ERROR 09-12 08:36:30 multiproc_worker_utils.py:124] Worker VllmWorkerProcess pid 348 died, exit code: -15 INFO 09-12 08:36:30 multiproc_worker_utils.py:128] Killing local vLLM worker processes 使用上面的配置后还是报错