TFlite Android ByteBuffer creation for inference

I have a custom-trained MobileNetV2 model that accepts a 128x101x3 array of FLOAT32 as input.

In Android (Java), when calling the tflite model for inference, the float[x][y][z] input must be converted into a ByteBuffer of size 4 * 128 * 101 * 3 (4 bytes per float times the image dimensions).

The problem is that there are many ways to make the conversion and I cannot tell which is the right one. I could add to the ByteBuffer all the z values for each x and y (channel-interleaved), or all the y values for each x and each z (channel-planar).

For example, let's suppose for the sake of simplicity that the 3rd dimension is just a repetition, i.e. [x][y][0] == [x][y][1] == [x][y][2]. Now I can create the ByteBuffer like this:

ByteBuffer byteBuffer = ByteBuffer.allocateDirect(4 * 128 * 101 * 3);
byteBuffer.order(ByteOrder.nativeOrder());
for (int i = 0; i < 128; i++) {
    for (int j = 0; j < 101; j++) {
        byteBuffer.putFloat(myArray[i][j]); // z=0
        byteBuffer.putFloat(myArray[i][j]); // z=1
        byteBuffer.putFloat(myArray[i][j]); // z=2
    }
}
byteBuffer.rewind();
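If it helps to see the layout explicitly, this loop writes the floats in row-major order over a [128][101][3] array, with z varying fastest. A hypothetical helper (my own, for illustration) giving the byte offset of element (i, j, z):

// Hypothetical helper: byte offset of element (i, j, z) under the
// channel-interleaved layout written above (z varies fastest).
static int interleavedOffset(int i, int j, int z) {
    return 4 * ((i * 101 + j) * 3 + z); // 4 bytes per float
}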

Or I can create the ByteBuffer like this:

for (int i = 0; i < myArray.length; i++) { // 128 rows
    int[] inpShapeDim = {1, 1, myArray[0].length, 1}; // one row: 1x1x101x1
    TensorBuffer valInTnsrBuffer = TensorBuffer.createDynamic(imageDataType); // imageDataType = DataType.FLOAT32
    valInTnsrBuffer.loadArray(myArray[i], inpShapeDim);
    byteBuffer.put(valInTnsrBuffer.getBuffer());
}
// Append two more copies of the 128x101 plane for z=1 and z=2
int oneDeltaBufferPosition = byteBuffer.position();
for (int z = 0; z < 2; z++) {
    for (int i = 0; i < oneDeltaBufferPosition; i++) {
        byteBuffer.put(byteBuffer.get(i));
    }
}
byteBuffer.rewind();
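This second version lays the data out channel-planar instead: the whole 128x101 plane for z=0 comes first, followed by copies for z=1 and z=2. The equivalent offset helper (again hypothetical, for illustration):

// Hypothetical helper: byte offset of element (i, j, z) under the
// channel-planar layout (one full x-y plane per channel).
static int planarOffset(int i, int j, int z) {
    return 4 * (z * 128 * 101 + i * 101 + j); // 4 bytes per float
}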

Both of them are "valid" conversions, but inference then doesn't work as expected: the recognition accuracy is not the same as in Python.
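For completeness, the buffer is fed to the interpreter roughly like this. This is a minimal sketch; the model asset name, the output size, and loading via the TFLite Support library's FileUtil are assumptions of mine:

import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.support.common.FileUtil;
import java.nio.MappedByteBuffer;

// Assumed asset name; adjust to your model.
MappedByteBuffer modelFile = FileUtil.loadMappedFile(context, "model.tflite");
try (Interpreter tflite = new Interpreter(modelFile)) {
    float[][] output = new float[1][10]; // 10 is a placeholder for your model's output size
    tflite.run(byteBuffer, output);      // byteBuffer built with either approach above
}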
