Could not load DLL or one of its dependencies

This post shows how to use the reflection mechanism to load a .NET DLL, together with its dependencies, from a directory other than the .exe's, and gives a concrete code example that solves the dependency-loading problem.


As we know, when a .NET DLL is loaded via reflection, its dependency DLLs must either be installed in the GAC or sit in the exe's directory to be resolved; putting them in a PATH directory does not work.

However, the load-and-search approach found online makes it possible to load dependency DLLs from outside the exe's directory.
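The load-and-search idea can be sketched as a resolve handler that probes a list of extra directories for the missing DLL when the runtime's own probing fails. This is a minimal sketch; the directory names are illustrative, not from the original post:

```csharp
using System;
using System.IO;
using System.Reflection;

static class DependencyResolver
{
    // Extra directories to probe, in order, for missing dependency DLLs
    // (illustrative paths; substitute your own plugin folders).
    static readonly string[] SearchDirs = { @"C:\Plugins", @"C:\Plugins\lib" };

    public static void Install()
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, e) =>
        {
            // e.Name is a full name like "xx, Version=1.0.0.0, Culture=neutral, ...";
            // only the simple name is needed to build the file name.
            string simpleName = new AssemblyName(e.Name).Name;
            foreach (string dir in SearchDirs)
            {
                string candidate = Path.Combine(dir, simpleName + ".dll");
                if (File.Exists(candidate))
                    return Assembly.LoadFrom(candidate);
            }
            return null; // let the normal binding failure surface
        };
    }
}
```

Returning null from the handler tells the CLR that resolution failed, so the usual FileNotFoundException still surfaces when the DLL is genuinely absent.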

a. My C# program loads a DLL dynamically; for now let's take a.dll (similarly, my program will load more DLLs, such as b.dll, c.dll, etc.).

b. My program invokes a method "OnStart" inside a.dll (the method name is the same for every DLL).

I am able to achieve the above two steps through the reflection mechanism.

The problem is

a. If my a.dll references, say, xx.dll or yy.dll, then when I try to invoke the OnStart method of a.dll from my program, I get "could not load DLL or one of its dependencies". See the code snippet:

Assembly assm = Assembly.LoadFrom(@"C:/Balaji/Test/a.dll");
foreach (Type tp in assm.GetTypes())
{
    if (tp.IsClass)
    {
        MethodInfo mi = tp.GetMethod("OnStart");
        if (mi != null)
        {
            object obj = Activator.CreateInstance(tp);
            mi.Invoke(obj, null);
            break;
        }
    }
}

Typically I get the error on the line "object obj = Activator.CreateInstance(tp);". This is because a.dll references xx.dll, but my program does not reference xx.dll. Nor can it: a.dll is an external assembly and may carry any references of its own.

Basically, in the AssemblyResolve event, you need to load the referenced assemblies manually.

private Assembly AssemblyResolveHandler(object sender, ResolveEventArgs e)
{
    try
    {
        string[] assemblyDetail = e.Name.Split(',');
        string assemblyBasePath = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
        Assembly assembly = Assembly.LoadFrom(Path.Combine(assemblyBasePath, assemblyDetail[0] + ".dll"));
        return assembly;
    }
    catch (Exception ex)
    {
        throw new ApplicationException("Failed resolving assembly", ex);
    }
}

Not the best code, but should give you a general idea, I hope.
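The handler has to be registered before the plugin is loaded, because the CLR raises AssemblyResolve only after its own probing (GAC, exe directory) has failed. A sketch of a variant that resolves next to the plugin itself, so xx.dll can live in the plugin's directory rather than the exe's (the path is the one from the question; RunPlugin is a hypothetical wrapper, not from the original post):

```csharp
using System;
using System.IO;
using System.Reflection;

class PluginHost
{
    // Probe for missing dependencies next to the plugin itself
    // (here the directory from the question), not next to the exe.
    public static Assembly AssemblyResolveHandler(object sender, ResolveEventArgs e)
    {
        string simpleName = new AssemblyName(e.Name).Name;
        string candidate = Path.Combine(@"C:/Balaji/Test", simpleName + ".dll");
        return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
    }

    public static void RunPlugin(string pluginPath)
    {
        // Register BEFORE loading the plugin; AssemblyResolve fires lazily,
        // typically when Activator.CreateInstance JIT-compiles code that
        // touches types from the missing dependency.
        AppDomain.CurrentDomain.AssemblyResolve += AssemblyResolveHandler;

        Assembly assm = Assembly.LoadFrom(pluginPath);
        foreach (Type tp in assm.GetTypes())
        {
            if (!tp.IsClass) continue;
            MethodInfo mi = tp.GetMethod("OnStart");
            if (mi != null)
            {
                object obj = Activator.CreateInstance(tp); // may now resolve xx.dll
                mi.Invoke(obj, null);
                break;
            }
        }
    }
}
```

Resolving relative to the plugin's own directory (instead of the executing assembly's, as in the handler above) is what actually lets each plugin ship its dependencies in its own folder.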

I do, however, agree that plugin DLLs should be packaged for complete, dependency-free use. If they are allowed to load assemblies you have no control over, then who knows what might happen.
