Solving error 126 ("the specified module could not be found") when Max loads a plugin

This article explains how to resolve the error 126 ("module not found") that 3ds Max can report while loading a plugin. It shows how to use the Dependency Walker tool to locate the missing DLL, how to fix the two underlying causes (the file does not exist on disk, or it exists but is not on the search path), and closes with advice on debugging the plugin code itself.

Quite often, Max reports error 126 while loading a plugin, i.e. "the specified module could not be found". When this error occurs, Max has not yet managed to call the entry point of the DLL we wrote, so the usual approach of debugging the plugin code does not apply. Here we look at the root cause of the error and then use the right tool to resolve it.


The "module" that cannot be found (the English edition of Max uses the same word) is, at bottom, a DLL file. Every plugin we write is itself a DLL, and at link time the linker records inside it which other DLLs it needs in order to run. When the module is loaded (through the loader APIs the operating system provides), the loader scans the system directories and the relevant user directories for every supporting DLL (module) our DLL depends on. If all of them are found, the load succeeds; otherwise it fails.
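The loader behavior described above can be emulated to see which dependency would break the load. The sketch below is a deliberate simplification of the real Windows DLL search order (application directory, then system directories, then directories on PATH); SafeDllSearchMode, manifests, and API sets change the details, so treat this as an illustration, not a specification:

```python
import os

def find_dependency(dll_name, app_dir, system_dirs, path_dirs):
    """Emulate a simplified Windows DLL search order: the application
    directory first, then the system directories, then PATH entries."""
    search_order = [app_dir] + list(system_dirs) + list(path_dirs)
    for directory in search_order:
        candidate = os.path.join(directory, dll_name)
        if os.path.isfile(candidate):
            return candidate   # dependency resolvable -> load can succeed
    return None                # resolvable nowhere -> this is your error 126

def missing_dependencies(dep_names, app_dir, system_dirs, path_dirs):
    """Return the subset of dependent DLL names that resolve nowhere."""
    return [d for d in dep_names
            if find_dependency(d, app_dir, system_dirs, path_dirs) is None]
```

If `missing_dependencies` returns a non-empty list for the DLLs your plugin imports, the load will fail exactly the way Max reports it.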


From this it follows that the failure we see is Max discovering that it cannot load our plugin, which is why it reports the error. So the first thing to establish is which DLL is missing. Unfortunately, Max does not tell us. The remaining task, then, is to find the missing DLL by other means. There are two possible causes: either the file is physically absent, meaning the DLL does not exist anywhere on disk, or the DLL exists but sits on a path the loader does not search, so the system cannot find it. Either way, we first need to identify which dependency is unresolved.


There is a tool for this, called Dependency Walker, which can open our module file and enumerate its dependency information. Usage is simple: File | Open, then select the plugin file. [The original post shows a screenshot of the resulting Dependency Walker window here.]




In that window, everything Dependency Walker flags with a question mark may be a problem. Note that I say may: the working directory and similar settings the tool runs with affect how accurate its report is, so not every question mark is a real issue. However, any genuinely missing DLL is guaranteed to appear in the question-marked list. That is what makes the tool so useful: it enumerates the modules and narrows the search down to a short candidate list.


That is all there is to say about the tool, and it is all we need.


With that, the fix for this class of problem is straightforward: run Dependency Walker, look at the modules the DLL needs, pinpoint which one is missing or on the wrong path, and correct it. Once the plugin loads, any remaining problems can be handled with the debugger. For a developer, "there are no secrets in front of the source code" — from there on, it comes down to your own skill in the debugger.
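After restoring the missing DLL or fixing its path, you can verify the fix without launching Max at all by loading the plugin the same way the host would and decoding the failure code. A minimal sketch, assuming a Windows host (`ctypes` maps directly onto kernel32); the two error codes in the table are the standard Win32 ones, and the plugin path in the usage note is illustrative:

```python
import ctypes
import sys

# Standard Win32 error codes most often seen when a plugin fails to load.
WIN32_LOAD_ERRORS = {
    126: "ERROR_MOD_NOT_FOUND: a dependent DLL could not be located",
    193: "ERROR_BAD_EXE_FORMAT: architecture mismatch (e.g. 32-bit DLL, 64-bit host)",
}

def describe_load_error(code):
    """Turn a Win32 error code from LoadLibrary into a readable hint."""
    return WIN32_LOAD_ERRORS.get(code, "unrecognized Win32 error %d" % code)

def probe_plugin(path):
    """Try to load the DLL the way the host application would.

    Returns (True, message) on success, (False, message) on failure.
    """
    if sys.platform != "win32":
        raise OSError("this probe is only meaningful on Windows")
    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    kernel32.LoadLibraryW.restype = ctypes.c_void_p
    kernel32.LoadLibraryW.argtypes = [ctypes.c_wchar_p]
    handle = kernel32.LoadLibraryW(path)
    if handle:
        kernel32.FreeLibrary(ctypes.c_void_p(handle))
        return True, "loaded OK: every dependency was resolved"
    return False, describe_load_error(ctypes.get_last_error())
```

Usage would look like `probe_plugin(r"C:\path\to\MyPlugin.dlu")`: if it returns success here but Max still fails, the difference is almost always the search path the two processes run with.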

