C/C++ Pointer Learning

This post takes a close look at pointers in C/C++ and why they matter, and explains how to use pointers to manage memory effectively. It contrasts call by value with call by reference, shows in detail how passing an address lets a function modify the original variable, and covers dynamic memory allocation through pointers.

I learned C about four years ago from Tan Haoqiang's weird silly book.
Now I need a recap of this, and hopefully a better understanding of C/C++. I don't like C/C++ honestly, but I have to understand it better to deal with my coursework in COMP6771 and COMP6733.

What is a pointer, and is a pointer really such a special component of C/C++? Sometimes I get confused and think a pointer in C/C++ is literally a "pointer". But the truth is, a pointer is just a type of variable like any other type such as int, char, ...

In C, there is only one way of passing variables to a function: call by value. What "call by value" does is copy the values of the arguments into the formal parameters. Changes to a formal parameter have no effect outside the function!

In many cases we do want this outside effect, so what do we do? We copy (pass) not the value of a variable but its address as the formal parameter, so the function can access the original value!

like this:

void foo(int x) {
    x = 42;      /* changes only the local copy */
}

foo(A);          /* A is unchanged afterwards */

vs

void foo(int *ptr) {
    *ptr = 42;   /* changes the caller's variable through its address */
}

foo(&A);         /* A is now 42 */

Suppose you want a function to be able to allocate a certain length of memory, e.g. an int array. Outside a function you would call int * p = (int *) malloc(n * sizeof(int)), but what if you do it inside a function?

#include <stdio.h>
#include <stdlib.h>

void allocation(char ** p, int len)
{
    *p = (char *) malloc(sizeof(char) * (len + 1));
}

int main ()
{
    int n = 10;
    char * buffer = NULL;
    allocation(&buffer, n);
    //do something else with buffer...
    free(buffer);
    return 0;
}

Compare this with the wrong ("false") call, which passes buffer itself instead of &buffer.

Why use char ** p in the function declaration? Why not simply char * p? Recall the definition of call by value. Suppose, on some machine, we create char * buffer (call it ptr); it lives at address 100 and its value for now is NULL. If we pass ptr instead of &buffer, the program creates a copy of ptr, call it A, of type (char *, value NULL). malloc's result is stored only in that copy, so after allocation() terminates, where is it? It's gone! So when you inspect the false call in a debugger, ptr is still NULL (and the allocated block is leaked).

The correct way, however, is the one in the code section: you first create a char * buffer, say at address 100. Then you pass &buffer as the argument in the allocation() call; the program copies a char ** value from &buffer, which is 100. In the malloc statement, malloc returns a (char *), and this result is stored at address 100, because that is where p points. What is p's own address? It is definitely not 100, but we don't really care.

We also have a way to create a matrix:

/*
 Write a function matrixAllocate that takes two integers, c
 and r, and allocates an r by c block of memory.
*/
void matrixAllocate(int *** matrix, int c, int r)
{
    *matrix = (int **) malloc(r * (sizeof(int *)));
    //we allocate the array of row pointers and store it in *matrix.
    //each row is pointed to by an int*

    int i = 0;
    for (; i < r; i++)
    {
        *((*matrix) + i) = (int *) malloc(c * (sizeof(int)));
    }
    // we allocate c ints (the columns) for each of the r rows
}
C++ call by reference

In C++, we also have a way to call by reference; note that the default is still call by value.
Look at this function: void swap(int &x, int &y) looks very different from void swap(int *x, int *y) (called as swap(&a, &b), where a and b are ints), but achieves the same result.

void swap(int &x, int &y)
{
   int temp;
   temp = x; /* save the value at address x */
   x = y;    /* put y into x */
   y = temp; /* put x into y */

   return;
}
/*
int a = 100;
int b = 200; 
call
swap(a, b);
*/