Encountered: Not all named parameters have been set: [3, 5, 4]

```
org.hibernate.QueryException: Not all named parameters have been set: [3, 5, 4] [select o from ProductInfo o where o.visible=?1 and o.type.typeid in(?2,?3,?4,?5) order by o.createDate desc]
```


A mistake in my earlier code caused the error "Not all named parameters have been set: [3, 5, 4]": not all of the positional parameters were bound. The fix is to change `params.add(typeids);` to `params.addAll(typeids);`, so that each typeid is bound to its own positional parameter.
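The difference is easy to see with plain `java.util.List`: `add(collection)` inserts the whole collection as a single element, while `addAll(collection)` flattens it into individual elements. A minimal sketch (the names `params` and `typeids` mirror the snippet above; the values are hypothetical) showing why only one of the four `in(...)` placeholders got a value:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ParamDemo {
    public static void main(String[] args) {
        List<Integer> typeids = Arrays.asList(10, 20, 30, 40);

        // Wrong: the whole list becomes ONE element, so only ?2 gets bound
        // and ?3..?5 stay unset -> "Not all named parameters have been set".
        List<Object> wrong = new ArrayList<>();
        wrong.add(true);          // o.visible = ?1
        wrong.add(typeids);       // a single List element
        System.out.println(wrong.size()); // 2

        // Right: addAll flattens the ids, so ?2..?5 each receive a value.
        List<Object> right = new ArrayList<>();
        right.add(true);          // o.visible = ?1
        right.addAll(typeids);    // four separate Integer elements
        System.out.println(right.size()); // 5
    }
}
```

With five elements in the list, each of the five positional parameters `?1`..`?5` can be set, and the `QueryException` disappears.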


```
zl@zl-virtual-machine:~/catkin_ws$ roslaunch gazebo_pkg race.launch
... logging to /home/zl/.ros/log/a85ae268-5ca4-11f0-b935-318c9b5216ae/roslaunch-zl-virtual-machine-3453.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
xacro: in-order processing became default in ROS Melodic. You can drop the option.
started roslaunch server http://127.0.0.1:33879/

SUMMARY
========

PARAMETERS
 * /gazebo/enable_ros_network: True
 * /robot_description: <?xml version="1....
 * /rosdistro: noetic
 * /rosversion: 1.17.4
 * /use_sim_time: True

NODES
  /
    gazebo (gazebo_ros/gzserver)
    gazebo_gui (gazebo_ros/gzclient)
    joint_state_publisher (joint_state_publisher/joint_state_publisher)
    robot_state_publisher (robot_state_publisher/robot_state_publisher)
    urdf_spawner (gazebo_ros/spawn_model)

ROS_MASTER_URI=http://127.0.0.1:11311

process[gazebo-1]: started with pid [3468]
process[gazebo_gui-2]: started with pid [3471]
process[urdf_spawner-3]: started with pid [3476]
process[joint_state_publisher-4]: started with pid [3479]
process[robot_state_publisher-5]: started with pid [3480]
Traceback (most recent call last):
  File "/opt/ros/noetic/lib/gazebo_ros/spawn_model", line 20, in <module>
    import rospy
  File "/opt/ros/noetic/lib/python3/dist-packages/rospy/__init__.py", line 49, in <module>
    from .client import spin, myargv, init_node, \
  File "/opt/ros/noetic/lib/python3/dist-packages/rospy/client.py", line 52, in <module>
    import roslib
  File "/opt/ros/noetic/lib/python3/dist-packages/roslib/__init__.py", line 50, in <module>
    from roslib.launcher import load_manifest  # noqa: F401
  File "/opt/ros/noetic/lib/python3/dist-packages/roslib/launcher.py", line 42, in <module>
    import rospkg
ModuleNotFoundError: No module named 'rospkg'
Traceback (most recent call last):
  File "/opt/ros/noetic/lib/joint_state_publisher/joint_state_publisher", line 35, in <module>
    import rospy
  [same import chain as above]
ModuleNotFoundError: No module named 'rospkg'
[urdf_spawner-3] process has died [pid 3476, exit code 1, cmd /opt/ros/noetic/lib/gazebo_ros/spawn_model -urdf -model robot -param robot_description -z 0.05 gazebo/scan:=/scan __name:=urdf_spawner __log:=/home/zl/.ros/log/a85ae268-5ca4-11f0-b935-318c9b5216ae/urdf_spawner-3.log].
log file: /home/zl/.ros/log/a85ae268-5ca4-11f0-b935-318c9b5216ae/urdf_spawner-3*.log
[joint_state_publisher-4] process has died [pid 3479, exit code 1, cmd /opt/ros/noetic/lib/joint_state_publisher/joint_state_publisher gazebo/scan:=/scan __name:=joint_state_publisher __log:=/home/zl/.ros/log/a85ae268-5ca4-11f0-b935-318c9b5216ae/joint_state_publisher-4.log].
log file: /home/zl/.ros/log/a85ae268-5ca4-11f0-b935-318c9b5216ae/joint_state_publisher-4*.log
[gazebo-1] process has died [pid 3468, exit code 255, cmd /opt/ros/noetic/lib/gazebo_ros/gzserver -e ode /home/zl/catkin_ws/src/gazebo_pkg/world/race_with_board.world gazebo/scan:=/scan __name:=gazebo __log:=/home/zl/.ros/log/a85ae268-5ca4-11f0-b935-318c9b5216ae/gazebo-1.log].
log file: /home/zl/.ros/log/a85ae268-5ca4-11f0-b935-318c9b5216ae/gazebo-1*.log
[INFO] [1752052283.751006243]: Finished loading Gazebo ROS API Plugin.
[INFO] [1752052283.751633811]: waitForService: Service [/gazebo_gui/set_physics_properties] has not been advertised, waiting...
Couldn't find a preferred IP via the getifaddrs() call; I'm assuming that your IP address is 127.0.0.1. This should work for local processes, but will almost certainly not work if you have remote processes. Report to the disc-zmq development team to seek a fix.
[message repeated 3 more times]
libcurl: (6) Could not resolve host: fuel.ignitionrobotics.org
libcurl: (6) Could not resolve host: fuel.gazebosim.org
```
```
/home/ustc/anaconda3/lib/python3.12/site-packages/transformers/utils/hub.py:105: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
  warnings.warn(
The following values were not passed to `accelerate launch` and had defaults used instead:
More than one GPU was found, enabling multi-GPU training. If this was unintended please pass in `--num_processes=1`.
`--num_machines` was set to a value of `1`
`--mixed_precision` was set to a value of `'no'`
`--dynamo_backend` was set to a value of `'no'`
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
/home/ustc/anaconda3/lib/python3.12/site-packages/pydub/utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work
  warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)
/home/ustc/anaconda3/lib/python3.12/site-packages/torch/nn/utils/weight_norm.py:134: FutureWarning: `torch.nn.utils.weight_norm` is deprecated in favor of `torch.nn.utils.parametrizations.weight_norm`.
  WeightNorm.apply(module, name, dim)
/home/ustc/anaconda3/lib/python3.12/site-packages/transformers/utils/hub.py:105: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
  warnings.warn(
[rank0]: Traceback (most recent call last):
[rank0]:   File "/home/ustc/anaconda3/lib/python3.12/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 92, in _call_target
[rank0]:     return _target_(*args, **kwargs)
[rank0]:   File "/home/ustc/桌面/seed-vc-main-v2/modules/astral_quantization/default_model.py", line 22, in __init__
[rank0]:     self.tokenizer = WhisperProcessor.from_pretrained(
[rank0]:   File "/home/ustc/anaconda3/lib/python3.12/site-packages/transformers/processing_utils.py", line 1079, in from_pretrained
[rank0]:     args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
[rank0]:   File "/home/ustc/anaconda3/lib/python3.12/site-packages/transformers/processing_utils.py", line 1143, in _get_arguments_from_pretrained
[rank0]:     args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
[rank0]:   File "/home/ustc/anaconda3/lib/python3.12/site-packages/transformers/feature_extraction_utils.py", line 384, in from_pretrained
[rank0]:     feature_extractor_dict, kwargs = cls.get_feature_extractor_dict(pretrained_model_name_or_path, **kwargs)
[rank0]:   File "/home/ustc/anaconda3/lib/python3.12/site-packages/transformers/feature_extraction_utils.py", line 510, in get_feature_extractor_dict
[rank0]:     resolved_feature_extractor_file = cached_file(
[rank0]:   File "/home/ustc/anaconda3/lib/python3.12/site-packages/transformers/utils/hub.py", line 266, in cached_file
[rank0]:     file = cached_files(path_or_repo_id=path_or_repo_id, filenames=[filename], **kwargs)
[rank0]:   File "/home/ustc/anaconda3/lib/python3.12/site-packages/transformers/utils/hub.py", line 381, in cached_files
[rank0]:     raise OSError(
[rank0]: OSError: ./checkpoints/hf_cache does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co/./checkpoints/hf_cache/tree/main' for available files.
[rank0]: The above exception was the direct cause of the following exception:
[rank0]: Traceback (most recent call last):
[rank0]:   File "/home/ustc/桌面/seed-vc-main-v2/train_v2.py", line 345, in <module>
[rank0]:     main(args)
[rank0]:   File "/home/ustc/桌面/seed-vc-main-v2/train_v2.py", line 314, in main
[rank0]:     trainer = Trainer(
[rank0]:   File "/home/ustc/桌面/seed-vc-main-v2/train_v2.py", line 75, in __init__
[rank0]:     self._init_models(train_cfm=train_cfm, train_ar=train_ar)
[rank0]:   File "/home/ustc/桌面/seed-vc-main-v2/train_v2.py", line 106, in _init_models
[rank0]:     self._init_main_model(train_cfm=train_cfm, train_ar=train_ar)
[rank0]:   File "/home/ustc/桌面/seed-vc-main-v2/train_v2.py", line 116, in _init_main_model
[rank0]:     self.model = hydra.utils.instantiate(cfg).to(self.device)
[rank0]:   File "/home/ustc/anaconda3/lib/python3.12/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 226, in instantiate
[rank0]:     return instantiate_node(
[rank0]:   File "/home/ustc/anaconda3/lib/python3.12/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 342, in instantiate_node
[rank0]:     value = instantiate_node(
[rank0]:   File "/home/ustc/anaconda3/lib/python3.12/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 347, in instantiate_node
[rank0]:     return _call_target(_target_, partial, args, kwargs, full_key)
[rank0]:   File "/home/ustc/anaconda3/lib/python3.12/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 97, in _call_target
[rank0]:     raise InstantiationException(msg) from e
[rank0]: hydra.errors.InstantiationException: Error in call to target 'modules.astral_quantization.default_model.AstralQuantizer':
[rank0]: OSError("./checkpoints/hf_cache does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co/./checkpoints/hf_cache/tree/main' for available files.")
[rank0]: full_key: content_extractor_narrow
[rank0]:[W514 21:22:23.005946238 ProcessGroupNCCL.cpp:1168] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())
W0514 21:22:24.285000 132883023812096 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 3125405 closing signal SIGTERM
W0514 21:22:24.285000 132883023812096 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 3125406 closing signal SIGTERM
W0514 21:22:24.285000 132883023812096 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 3125407 closing signal SIGTERM
E0514 21:22:24.500000 132883023812096 torch/distributed/elastic/multiprocessing/api.py:833] failed (exitcode: 1) local_rank: 0 (pid: 3125404) of binary: /home/ustc/anaconda3/bin/python
Traceback (most recent call last):
  File "/home/ustc/anaconda3/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/home/ustc/anaconda3/lib/python3.12/site-packages/accelerate/commands/accelerate_cli.py", line 50, in main
    args.func(args)
  File "/home/ustc/anaconda3/lib/python3.12/site-packages/accelerate/commands/launch.py", line 1204, in launch_command
    multi_gpu_launcher(args)
  File "/home/ustc/anaconda3/lib/python3.12/site-packages/accelerate/commands/launch.py", line 825, in multi_gpu_launcher
    distrib_run.run(args)
  File "/home/ustc/anaconda3/lib/python3.12/site-packages/torch/distributed/run.py", line 892, in run
    elastic_launch(
  File "/home/ustc/anaconda3/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 133, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/ustc/anaconda3/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
train_v2.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2025-05-14_21:22:24
  host      : ustc-SYS-740GP-TNRT
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 3125404)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```

Please analyze the cause.
### Resolving the warnings and errors from Transformers and accelerate

When using the `Transformers` and `accelerate` libraries, you may run into several warnings and errors. Here are solutions for the specific problems above:

#### 1. **`TRANSFORMERS_CACHE` is deprecated**

`TRANSFORMERS_CACHE` has been marked as deprecated; use the environment variable `HF_HOME` to specify the cache path instead[^1]. Set it like this:

```bash
export HF_HOME=/path/to/cache/directory
```

To set the variable dynamically from a Python script:

```python
import os
os.environ["HF_HOME"] = "/path/to/cache/directory"
```

#### 2. **Errors caused by missing ffmpeg or avconv**

Some models depend on multimedia tools such as `ffmpeg` or `avconv`. When these tools are unavailable, running the code can raise warnings or errors. First confirm that the required tool is installed.

On Debian/Ubuntu-based Linux, install `ffmpeg` via the package manager:

```bash
sudo apt update && sudo apt install ffmpeg
```

Or on macOS via Homebrew:

```bash
brew install ffmpeg
```

After verifying the installation, restart the script and the warning should disappear.

#### 3. **`weight_norm` deprecation warning**

`torch.nn.utils.weight_norm` is deprecated and will be removed in a future PyTorch release. As the warning itself states, the replacement is `torch.nn.utils.parametrizations.weight_norm`, which implements the same normalization via the parametrization API.

If you are migrating a layer in an existing codebase, it can be restructured like this:

```python
import torch.nn as nn
from torch.nn.utils.parametrizations import weight_norm

class CustomLayer(nn.Module):
    def __init__(self, input_dim, output_dim):
        super().__init__()
        # Parametrization-based weight norm replaces the deprecated call
        self.linear = weight_norm(nn.Linear(input_dim, output_dim))

    def forward(self, x):
        return self.linear(x)
```

This removes the warning message without changing the layer's behavior.

#### 4. **Failure to load `preprocessor_config.json`**

This kind of problem is caused by a required file missing from the target directory — here, the preprocessor configuration file (`preprocessor_config.json`). To fix it, make sure all required resources have been downloaded to the local storage location.

Taking the Florence model as an example, it needs at least two core components, `pytorch_model.bin` and `config.json`, in the same parent folder[^1]. Check carefully that your models directory tree is complete; if anything is missing, download the full set of assets from the corresponding dataset link.

The final step looks like this:

```bash
mkdir -p /your/path/models/Florence/
wget https://example.com/pytorch_model.bin -P /your/path/models/Florence/
wget https://example.com/config.json -P /your/path/models/Florence/
```

After completing the steps above, run the program again and the problem should be resolved.

---

### Summary

In short: update the environment-variable settings, install the missing external dependencies, migrate the deprecated API call, and make sure all model files are present. Together these steps address the issues above.