Fixing the Oracle error EXP-00091: Exporting questionable statistics

This article resolves the statistics-export warning that appears in an Oracle environment when the client's NLS language/character-set environment variable does not match the database server's character set; aligning the client character set with the server's lets the export of statistics complete cleanly.


EXP-00091 Exporting questionable statistics
  
Cause: Export was able to export statistics, but the statistics may not be usable. The statistics are questionable because one or more of the following happened during export: a row error occurred, client character set or NCHARSET does not match with the server, a query clause was specified on export, only certain partitions or subpartitions were exported, or a fatal error occurred while processing a table.
  
Action: To export non-questionable statistics, change the client character set or NCHARSET to match the server, export with no query clause, or export complete tables. If desired, import parameters can be supplied so that only non-questionable statistics will be imported, and all questionable statistics will be recalculated.
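As the Action text notes, the classic imp utility can also be told how to treat questionable statistics at import time. A minimal sketch (the credentials and dump file name are placeholders, not values from this article):

# statistics=safe: import only statistics that are not questionable, recalculate the rest
imp scott/tiger file=scott.dmp fromuser=scott touser=scott statistics=safe

# statistics=recalculate: ignore the exported statistics and recalculate them after import
imp scott/tiger file=scott.dmp fromuser=scott touser=scott statistics=recalculate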

Solution:
The NLS language/character-set setting in the oracle user's environment variables on Linux does not match the character set of the Oracle database itself.
Check the Oracle character set settings:
Database server character set:
select * from nls_database_parameters
Client character set:
select * from nls_instance_parameters
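To narrow this down to the values that actually matter, you can query just the character set parameters (a minimal sketch, assuming your account can read the standard NLS views):

-- Database character set and national character set; these drive the NLS_LANG value the client must use
select parameter, value
  from nls_database_parameters
 where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');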

Change the oracle user's environment variable so that its character set matches the database server character set. For example, when the server character set is ZHS16GBK:
export NLS_LANG=american_america.ZHS16GBK
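
A minimal sketch of verifying the setting and re-running the export (the user name, password, and file names below are placeholders, not values from this article):

# Confirm the client setting now matches the server character set
echo $NLS_LANG

# Re-run the export (placeholder credentials and file names)
exp scott/tiger file=scott.dmp owner=scott log=exp_scott.log

# Alternatively, skip exporting statistics altogether so EXP-00091 cannot be raised
exp scott/tiger file=scott.dmp owner=scott statistics=none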