Fast conversion of a C/C++ vector to a NumPy array (calling C arrays from Python)

This post shows how to efficiently convert the data in a C++ std::vector to a NumPy array via SWIG, avoiding the high per-element cost of going through iterators. A templated copy function plus a custom typemap copies data directly from the C++ container into a NumPy array, greatly reducing the Python call overhead.


I'm using SWIG to glue together some C++ code to Python (2.6), and part of that glue includes a piece of code that converts large fields of data (millions of values) from the C++ side to a Numpy array. The best method I can come up with implements an iterator for the class and then provides a Python method:

def __array__(self, dtype=float):
    return np.fromiter(self, dtype, self.size())

The problem is that each iterator next call is very costly, since it has to go through about three or four SWIG wrappers. It takes far too long. I can guarantee that the C++ data are stored contiguously (since they live in a std::vector), and it just feels like Numpy should be able to take a pointer to the beginning of that data alongside the number of values it contains, and read it directly.

Is there a way to pass a pointer to internal_data_[0] and the value internal_data_.size() to numpy so that it can directly access or copy the data without all the Python overhead?
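To make the overhead concrete, here is a minimal pure-Python sketch; the `WrappedVector` class below is a hypothetical stand-in for the SWIG-wrapped C++ vector, not part of the original code. `np.fromiter` has to invoke the iterator once per element, whereas the desired path touches Python only once for the whole array.

```python
import numpy as np

class WrappedVector:
    """Hypothetical stand-in for a SWIG-wrapped std::vector<double>."""
    def __init__(self, values):
        self._data = list(values)  # pretend this storage lives in C++

    def size(self):
        return len(self._data)

    def __iter__(self):
        # Each next() here mimics a trip through three or four SWIG wrappers.
        return iter(self._data)

vec = WrappedVector([1.0, 2.0, 3.0])

# Element-by-element: one Python-level call per value (the slow path).
a = np.fromiter(vec, dtype=float, count=vec.size())

# What the question asks for: one bulk copy into a preallocated buffer.
b = np.empty(vec.size(), dtype=float)
b[:] = vec._data   # stands in for a single C++-side std::copy

assert (a == b).all()
```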

Solution

So it looks like the only real solution is to base something off pybuffer.i that can copy from C++ into an existing buffer. If you add this to a SWIG include file:

%insert("python") %{
import numpy as np
%}

/*! Templated function to copy contents of a container to an allocated memory
 *  buffer
 */
%inline %{
//==== ADDED BY numpy.i
#include <algorithm>

template < typename Container_T >
void copy_to_buffer(
        const Container_T& field,
        typename Container_T::value_type* buffer,
        typename Container_T::size_type length
        )
{
    // ValidateUserInput( length == field.size(),
    //                    "Destination buffer is the wrong size" );
    // put your own assertion here or BAD THINGS CAN HAPPEN
    if (length == field.size()) {
        std::copy( field.begin(), field.end(), buffer );
    }
}
//====
%}

%define TYPEMAP_COPY_TO_BUFFER(CLASS...)
%typemap(in) (CLASS::value_type* buffer, CLASS::size_type length)
             (int res = 0, Py_ssize_t size_ = 0, void *buffer_ = 0) {
    res = PyObject_AsWriteBuffer($input, &buffer_, &size_);
    if ( res < 0 ) {
        PyErr_Clear();
        %argument_fail(res, "(CLASS::value_type*, CLASS::size_type length)",
                       $symname, $argnum);
    }
    $1 = ($1_ltype) buffer_;
    $2 = ($2_ltype) (size_/sizeof($*1_type));
}
%enddef
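The typemap's only job is to hand C++ a raw (pointer, length) view of the NumPy array's writable buffer, so that `copy_to_buffer` can fill it in one shot. The same idea can be sketched in pure Python with `ctypes` (no SWIG involved; `src` here is a stand-in for the C++ vector, and the byte-size check plays the role of the "put your own assertion here" comment above):

```python
import ctypes
import numpy as np

src = np.arange(5, dtype=np.float64)   # stands in for std::vector<double>
dst = np.empty(5, dtype=np.float64)    # preallocated destination buffer

# Guard against a wrong-sized destination before touching raw memory.
assert dst.nbytes == src.nbytes

# Copy raw bytes straight into dst's buffer -- one call, no per-element
# Python work, analogous to std::copy writing through the extracted pointer.
ctypes.memmove(dst.ctypes.data, src.ctypes.data, src.nbytes)

print(dst)  # [0. 1. 2. 3. 4.]
```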

%define ADD_NUMPY_ARRAY_INTERFACE(PYVALUE, PYCLASS, CLASS...)
TYPEMAP_COPY_TO_BUFFER(CLASS)
%template(_copy_to_buffer_ ## PYCLASS) copy_to_buffer< CLASS >;
%extend CLASS {
    %insert("python") %{
    def __array__(self):
        """Enable access to this data as a numpy array"""
        a = np.ndarray( shape=( len(self), ), dtype=PYVALUE )
        _copy_to_buffer_ ## PYCLASS(self, a)
        return a
    %}
}
%enddef

then you can make a container "Numpy"-able with

%template(DumbVectorFloat) DumbVector<float>;
ADD_NUMPY_ARRAY_INTERFACE(float, DumbVectorFloat, DumbVector<float>);

Then in Python, just do:

# dvf is an instance of DumbVectorFloat
import numpy as np
my_numpy_array = np.asarray( dvf )

This has only the overhead of a single Python-to-C++ translation call, not the N calls that iterating over a typical length-N array would incur.
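The reason `np.asarray(dvf)` takes this single-copy path is NumPy's `__array__` protocol, which the `%extend` block injects into the wrapped class. A minimal pure-Python illustration of the mechanism (the `DumbVectorLike` class is a hypothetical stand-in for the SWIG-extended container):

```python
import numpy as np

class DumbVectorLike:
    """Hypothetical stand-in for the SWIG-extended container."""
    def __init__(self, values):
        self._values = list(values)

    def __len__(self):
        return len(self._values)

    def __array__(self, dtype=None, copy=None):
        # In the SWIG version this allocates an ndarray and calls the
        # generated _copy_to_buffer_* wrapper exactly once.
        return np.asarray(self._values, dtype=dtype or float)

dvf = DumbVectorLike([1.0, 2.0, 3.0])
arr = np.asarray(dvf)  # NumPy finds __array__ and uses it: one call total
```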

A slightly more complete version of this code is part of my PyTRT project on GitHub.
