Dynamically Debugging an SO: Setting a Breakpoint in the .init_array Section

This article walks through how to set a breakpoint in the .init_array section of a .so file using dynamic debugging, demonstrating the whole process step by step, from environment setup to the actual debugging session.



Text and figures: 0n1y3nd

Preface

From the earlier analysis we know that, once a .so is loaded, the code in its .init_array section runs first, and only after that is JNI_OnLoad called.
It is therefore very useful to be able to break at .init_array.

Prerequisites

The linker from the Android system, located in /system/bin/
IDA 6.4
android_server
mytestcm.apk

With the preparation done, we can start debugging.

0x1

Push android_server to the device's tmp directory (typically /data/local/tmp), make it executable, and run it as root.
Set up port forwarding with adb forward.
Attach IDA to the target process.
(A sketch of the typical command sequence follows.)
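The exact commands vary with device and IDA version, so the following is only a minimal sketch of the usual sequence, wrapped in Python's subprocess purely for convenience (typing the adb commands by hand works just as well). Port 23946 is IDA's default debug-server port and /data/local/tmp is the usual writable location; adjust both to your setup.

```python
import subprocess

def adb(*args):
    """Thin wrapper: echo and run one adb command."""
    cmd = ["adb", *args]
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Push IDA's android_server to a writable directory and make it executable.
adb("push", "android_server", "/data/local/tmp/android_server")
adb("shell", "chmod", "755", "/data/local/tmp/android_server")

# android_server must then be started as root on the device (rooted device assumed):
#   adb shell
#   su
#   /data/local/tmp/android_server
# It listens on port 23946 by default; forward that port to the host for IDA:
adb("forward", "tcp:23946", "tcp:23946")

# Finally attach to the target process from IDA (Debugger -> Attach -> Remote
# ARMLinux/Android debugger). The app is typically launched in debugger-wait mode
# (e.g. adb shell am start -D ...) so nothing runs before the breakpoints are set;
# that is also why jdb is needed later in step 0x3 to let it continue.
```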

0x2

Open a second IDA instance and load the linker.
Press Shift+F12 to open the Strings window and search for the string: dlopen

Following the cross-reference from that string locates the dlopen function, whose offset is 0xF30.
In the debugging IDA instance, press G and jump to 0x400D3000 + 0xF30 = 0x400D3F30 (0x400D3000 being the linker's load base in the target process, as shown in its module/segment list), and set a breakpoint there.
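Rather than computing 0x400D3000 + offset by hand (the linker's base will differ across devices and runs), you can let IDAPython locate the base and set the breakpoint while the debugger is attached. This is only a sketch: set_linker_bpt is my own helper name, the module-list calls shown are the modern idc names, and an IDA as old as 6.4 exposes the same functionality under the legacy names GetFirstModule/GetNextModule/GetModuleName/AddBpt.

```python
import idc

def set_linker_bpt(offset, module_suffix="linker"):
    """Set a breakpoint at <linker base> + offset in the debugged process.

    Walks IDA's module list (only populated while attached) to find the module
    whose name ends with module_suffix, then adds a breakpoint at base + offset.
    """
    base = idc.get_first_module()
    while base not in (None, idc.BADADDR):
        name = idc.get_module_name(base) or ""
        if name.endswith(module_suffix):
            ea = base + offset
            idc.add_bpt(ea)
            print("breakpoint at 0x%X (%s base 0x%X + 0x%X)" % (ea, name, base, offset))
            return ea
        base = idc.get_next_module(base)
    raise RuntimeError("linker module not found - is the debugger attached?")

# Offset of dlopen found in the static IDA instance (0xF30 in this linker build):
set_linker_bpt(0xF30)
```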


Next, search for the string: calling

Following its cross-reference locates the relevant code, which gives the offset 0x2720.
Back in the debugging IDA instance, press G, jump to 0x400D3000 + 0x2720 = 0x400D5720, and set a breakpoint there.
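The same hypothetical helper from the sketch above can be reused for this second breakpoint:

```python
# Breakpoint near the "calling" string, where the linker is about to invoke
# the .init_array / constructor entries (offset 0x2720 found statically above).
set_linker_bpt(0x2720)
```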


A little further down:

The BLX R4 at 0x400D574C is exactly where the .init_array code is invoked: this is the linker's indirect call into each constructor entry. Step into it and you can see the code directly.

0x3

Press F9 to let the process run.
Then open Eclipse or DDMS (this is what makes the JDWP port 8700 used below available for the target process).
Run: jdb -connect com.sun.jdi.SocketAttach:hostname=127.0.0.1,port=8700
The program stops at the first breakpoint; pressing F9 a few more times lands on the BLX R4 breakpoint.
Press F7 to step in and you arrive at the .init_array code:
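If you want to script this step as well, the jdb attach can be launched the same way as the adb commands earlier; port 8700 is the default JDWP port that DDMS forwards to the selected process (again only a sketch, and the interactive jdb session simply uses your terminal):

```python
import subprocess

# Attach jdb to the JDWP port forwarded by DDMS (8700 by default). This releases
# an app that was started in debugger-wait mode, so execution continues and the
# linker breakpoints set above get hit.
subprocess.run([
    "jdb", "-connect",
    "com.sun.jdi.SocketAttach:hostname=127.0.0.1,port=8700",
])
```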

