PyTorch Deployment: Calling a PyTorch Model from C++

This post describes how to convert a PyTorch model to Torch Script and then serialize, load, and execute it from C++. The model is converted to Torch Script with torch.jit.trace or torch.jit.script and saved as a .pt file, which is then loaded and executed in C++ via torch::jit::load. Running the model requires the libtorch library.


Loading a TorchScript Model in C++

The following paragraphs will outline the path PyTorch provides to go from an existing Python model to a serialized representation that can be loaded and executed purely from C++, with no dependency on Python.

A PyTorch model’s journey from Python to C++ is enabled by Torch Script, a representation of a PyTorch model that can be understood, compiled and serialized by the Torch Script compiler.

Steps:

  • Step 1: Converting Your PyTorch Model to Torch Script
  • Step 2: Serializing Your Script Module to a File
  • Step 3: Loading Your Script Module in C++
  • Step 4: Executing the Script Module in C++
  • Step 5: Getting Help and Exploring the API
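The Python side of these steps (convert, then serialize) can be sketched end to end with a toy module; TinyNet and the file name tiny_net.pt are illustrative stand-ins, not from the original tutorial.

```python
import torch

# A toy module standing in for a real model (illustrative only).
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = TinyNet().eval()
example = torch.rand(1, 4)  # example input used to record the trace

# Step 1: convert to Torch Script via tracing.
traced_script_module = torch.jit.trace(model, example)

# Step 2: serialize to a file; this .pt file is what C++ will load.
traced_script_module.save("tiny_net.pt")

# Sanity check in Python: reload and compare with the eager model.
reloaded = torch.jit.load("tiny_net.pt")
assert torch.allclose(model(example), reloaded(example))
```

Steps 3 and 4 then happen on the C++ side, as shown later in the post.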


Summary:

First, generate a Torch Script module; this object can be understood, compiled, and serialized by the Torch Script compiler.

There are two ways to do this, tracing:

traced_script_module = torch.jit.trace(model, example)

or scripting:

my_module = MyModule(10, 20)

sm = torch.jit.script(my_module)
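The difference between the two matters when forward contains data-dependent control flow: tracing records only the path taken by the example input, while torch.jit.script compiles the code itself and preserves both branches. A sketch of such a MyModule(10, 20), with illustrative shapes:

```python
import torch

class MyModule(torch.nn.Module):
    # A module whose forward has data-dependent control flow.
    def __init__(self, n, m):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.rand(n, m))

    def forward(self, input):
        if input.sum() > 0:
            output = self.weight.mv(input)   # (n, m) @ (m,) -> (n,)
        else:
            output = self.weight + input     # broadcast -> (n, m)
        return output

my_module = MyModule(10, 20)
sm = torch.jit.script(my_module)  # scripting keeps the if/else intact

x = torch.ones(20)    # sum > 0: mv branch, output shape (10,)
y = -torch.ones(20)   # sum < 0: add branch, output shape (10, 20)
print(sm(x).shape, sm(y).shape)
```

Tracing this module with x alone would bake in only the mv branch, so scripting is the right choice here.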

Then serialize it into a .pt file that can be loaded and executed purely from C++:

traced_script_module.save("traced_resnet_model.pt")

Then load it in C++:

torch::jit::script::Module module = torch::jit::load(argv[1]);

Create the inputs, execute the model, and inspect the output:

std::vector<torch::jit::IValue> inputs;

inputs.push_back(torch::ones({1, 3, 224, 224}));

// Execute the model and turn its output into a tensor.

at::Tensor output = module.forward(inputs).toTensor();

std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n';
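As a sanity check, the C++ calls above can be mirrored in Python against the same kind of .pt file; the toy flattening module here merely stands in for traced_resnet_model.pt (names are illustrative):

```python
import torch

# Stand-in for a traced classifier: flattens and keeps 1000 "logits".
class Toy(torch.nn.Module):
    def forward(self, x):
        return x.flatten(1)[:, :1000]

torch.jit.trace(Toy(), torch.ones(1, 3, 224, 224)).save("toy.pt")

# Python mirror of the C++ snippet:
module = torch.jit.load("toy.pt")      # torch::jit::load(...)
inputs = [torch.ones(1, 3, 224, 224)]  # std::vector<torch::jit::IValue>
output = module(*inputs)               # module.forward(inputs).toTensor()
print(output[:, 0:5])                  # output.slice(/*dim=*/1, 0, 5)
```

Running the C++ program and this script on the same .pt file should print the same five values.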

Note: loading and running a .pt model from C++ depends on the libtorch library (the PyTorch C++ distribution).
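To satisfy that dependency, the official tutorial builds with CMake; a minimal CMakeLists.txt along those lines, assuming the source file is named example-app.cpp and configuring with -DCMAKE_PREFIX_PATH=/path/to/libtorch:

```cmake
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(example-app)

find_package(Torch REQUIRED)

add_executable(example-app example-app.cpp)
target_link_libraries(example-app "${TORCH_LIBRARIES}")
set_property(TARGET example-app PROPERTY CXX_STANDARD 14)
```

find_package(Torch) locates the libtorch distribution pointed to by CMAKE_PREFIX_PATH and populates TORCH_LIBRARIES with the libraries to link against.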


References

https://pytorch.org/tutorials/advanced/cpp_export.html
