Loading DLL from Memory


by: Shub-Nigurrath

http://intechhosting.com/~access/forums/index.php?s=3840cfe9ecee3b80273d08d01990b65f&act=Attach&type=post&id=780
MemoryLoadLibrary loads a dynamic-link library directly from a memory buffer; an example follows. The public header of the MemoryModule library (MemoryModule.h, version 0.0.3, by Joachim Bauch) declares the API:

/*
 * Memory DLL loading code
 * Version 0.0.3
 *
 * Copyright (c) 2004-2013 by Joachim Bauch / mail@joachim-bauch.de
 * http://www.joachim-bauch.de
 *
 * The contents of this file are subject to the Mozilla Public License Version
 * 2.0 (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 * http://www.mozilla.org/MPL/
 *
 * Software distributed under the License is distributed on an "AS IS" basis,
 * WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License
 * for the specific language governing rights and limitations under the
 * License.
 *
 * The Original Code is MemoryModule.h
 *
 * The Initial Developer of the Original Code is Joachim Bauch.
 *
 * Portions created by Joachim Bauch are Copyright (C) 2004-2013
 * Joachim Bauch. All Rights Reserved.
 *
 */

#ifndef __MEMORY_MODULE_HEADER
#define __MEMORY_MODULE_HEADER

#include <windows.h>

typedef void *HMEMORYMODULE;

typedef void *HMEMORYRSRC;

typedef void *HCUSTOMMODULE;

#ifdef __cplusplus
extern "C" {
#endif

typedef HCUSTOMMODULE (*CustomLoadLibraryFunc)(LPCSTR, void *);
typedef FARPROC (*CustomGetProcAddressFunc)(HCUSTOMMODULE, LPCSTR, void *);
typedef void (*CustomFreeLibraryFunc)(HCUSTOMMODULE, void *);

/**
 * Load DLL from memory location.
 *
 * All dependencies are resolved using default LoadLibrary/GetProcAddress
 * calls through the Windows API.
 */
HMEMORYMODULE MemoryLoadLibrary(const void *);

/**
 * Load DLL from memory location using custom dependency resolvers.
 *
 * Dependencies will be resolved using passed callback methods.
 */
HMEMORYMODULE MemoryLoadLibraryEx(const void *,
    CustomLoadLibraryFunc,
    CustomGetProcAddressFunc,
    CustomFreeLibraryFunc,
    void *);

/**
 * Get address of exported method.
 */
FARPROC MemoryGetProcAddress(HMEMORYMODULE, LPCSTR);

/**
 * Free previously loaded DLL.
 */
void MemoryFreeLibrary(HMEMORYMODULE);

/**
 * Find the location of a resource with the specified type and name.
 */
HMEMORYRSRC MemoryFindResource(HMEMORYMODULE, LPCTSTR, LPCTSTR);

#ifdef __cplusplus
}
#endif

#endif  /* __MEMORY_MODULE_HEADER */
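Below is a minimal usage sketch of the API declared above: read a DLL image from disk into a buffer, map it with MemoryLoadLibrary, resolve an export with MemoryGetProcAddress, call it, and release it with MemoryFreeLibrary. The file name sample.dll and the export SayHello are placeholders for illustration, not names from the original article.

/* Minimal sketch: load a DLL image from a memory buffer with MemoryModule.
 * "sample.dll" and the export "SayHello" are hypothetical placeholders. */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include "MemoryModule.h"

typedef void (WINAPI *SayHelloFunc)(void);   /* assumed export signature */

int main(void)
{
    /* Read the raw DLL file into a heap buffer. */
    FILE *fp = fopen("sample.dll", "rb");
    if (fp == NULL) {
        fprintf(stderr, "cannot open sample.dll\n");
        return 1;
    }
    fseek(fp, 0, SEEK_END);
    long size = ftell(fp);
    fseek(fp, 0, SEEK_SET);

    void *data = malloc((size_t)size);
    if (data == NULL || fread(data, 1, (size_t)size, fp) != (size_t)size) {
        fprintf(stderr, "cannot read sample.dll\n");
        fclose(fp);
        free(data);
        return 1;
    }
    fclose(fp);

    /* Map the image from memory instead of calling LoadLibrary on a file. */
    HMEMORYMODULE module = MemoryLoadLibrary(data);
    if (module == NULL) {
        fprintf(stderr, "MemoryLoadLibrary failed\n");
        free(data);
        return 1;
    }

    /* Resolve an export by name, analogous to GetProcAddress. */
    SayHelloFunc SayHello = (SayHelloFunc)MemoryGetProcAddress(module, "SayHello");
    if (SayHello != NULL) {
        SayHello();
    }

    MemoryFreeLibrary(module);   /* unload the in-memory module */
    free(data);                  /* release the raw file buffer */
    return 0;
}

When the loaded module's imports should come from something other than the real LoadLibrary/GetProcAddress (for example, from another module that itself exists only in memory), MemoryLoadLibraryEx accepts the CustomLoadLibraryFunc/CustomGetProcAddressFunc/CustomFreeLibraryFunc callbacks plus an opaque user-data pointer and routes dependency resolution through them.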