Anyone who has used TensorFlow Lite knows that Google built it primarily for mobile devices: it delivers fast on-device inference, its performance on mobile hardware can far exceed what the same model achieves on a Windows desktop, deployment is simple, and it runs on virtually every mobile platform.

TensorFlow Lite owes much of this performance to its delegate mechanism, which hands all or part of the computation off to a GPU or NPU through a Delegate. The mechanism works rather like a plugin system: you can write your own Delegate or use one of the official ones (a minimal sketch of the pattern follows the list below). Google currently ships four official delegates:
- GPU (accelerated via OpenGL or OpenCL)
- NPU (supports Android's native neural-network acceleration API as well as custom hardware-side neural-network acceleration)
- CoreML (an optimization for the iOS platform; under the hood it also uses the GPU for acceleration)
- XNNPACK (a highly optimized library of neural-network inference operators from Google that can accelerate inference on every platform)
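
The official delegates all follow the same plug-in pattern: build an interpreter, create a delegate, then hand the supported parts of the graph to it with `ModifyGraphWithDelegate`. As a concrete illustration (this sketch is mine, not from the quoted documentation below), here is a minimal C++ example using the GPU delegate; it assumes a `FlatBufferModel` has already been loaded and that the headers match a recent TensorFlow Lite release:

```c++
#include <memory>

#include "tensorflow/lite/delegates/gpu/delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

// Runs a model once with the GPU delegate attached. Error handling is omitted.
void RunWithGpuDelegate(const tflite::FlatBufferModel& model) {
  // Build a plain CPU interpreter first.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(model, resolver)(&interpreter);

  // Create the GPU delegate and hand the supported parts of the graph to it;
  // unsupported operators stay on the CPU.
  TfLiteGpuDelegateOptionsV2 options = TfLiteGpuDelegateOptionsV2Default();
  TfLiteDelegate* delegate = TfLiteGpuDelegateV2Create(&options);
  if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) {
    // Fall back to the default CPU backend if delegation fails.
  }

  // Allocate tensors only after the delegate has been applied, then run.
  interpreter->AllocateTensors();
  // ... fill input tensors here ...
  interpreter->Invoke();

  // Release the interpreter before destroying the delegate.
  interpreter.reset();
  TfLiteGpuDelegateV2Delete(delegate);
}
```

The XNNPACK delegate described in the rest of this post plugs into the interpreter in exactly the same way.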
Since I rarely touch Apple systems, I had always relied on the GPU and NPU delegates for acceleration on Android. While reading Google's documentation recently, however, I came across XNNPACK, and what interested me most is that this delegate can also accelerate inference on Windows. So I went through its API documentation carefully; Google's original text follows:
# XNNPACK backend for TensorFlow Lite
XNNPACK is a highly optimized library of neural network inference operators for
ARM, x86, and WebAssembly architectures in Android, iOS, Windows, Linux, macOS,
and Emscripten environments. This document describes how to use the XNNPACK
library as an inference engine for TensorFlow Lite.
## Using XNNPACK engine with TensorFlow Lite interpreter
XNNPACK integrates with TensorFlow Lite interpreter through the delegation
mechanism. TensorFlow Lite supports several methods to enable XNNPACK
for floating-point inference.
### Enable XNNPACK via Java API on Android (recommended on Android)
Pre-built [nightly TensorFlow Lite binaries for Android](https://www.tensorflow.org/lite/guide/android#use_the_tensorflow_lite_aar_from_mavencentral)
include XNNPACK, albeit it is disabled by default. Use the `setUseXNNPACK`
method in `Interpreter.Options` class to enable it:
```java
Interpreter.Options interpreterOptions = new Interpreter.Options();
interpreterOptions.setUseXNNPACK(true);
Interpreter interpreter = new Interpreter(model, interpreterOptions);
```
### Enable XNNPACK via Swift/Objective-C API on iOS (recommended on iOS)
Pre-built [nightly TensorFlow Lite CocoaPods](https://www.tensorflow.org/lite/guide/ios#specifying_versions)
include XNNPACK, but do not enable it by default. Swift developers can use
`InterpreterOptions` object to enable XNNPACK:
```swift
var options = InterpreterOptions()
options.isXNNPackEnabled = true
var interpreter = try Interpreter(modelPath: "model/path", options: options)
```
Objective-C developers can enable XNNPACK via a new property in the
`TFLInterpreterOptions` class:
```objc
TFLInterpreterOptions *options = [[TFLInterpreterOptions alloc] init];
options.useXNNPACK = YES;
NSError *error;
TFLInterpreter *interpreter =
    [[TFLInterpreter alloc] initWithModelPath:@"model/path"
                                      options:options
                                        error:&error];
```
### Enable XNNPACK via Bazel build flags (recommended on desktop)
When building TensorFlow Lite with Bazel, add
`--define tflite_with_xnnpack=true`, and the TensorFlow Lite interpreter will
use XNNPACK engine by default.
The exact command depends on the target platform, e.g. for Android AAR you'd use
```
bazel build -c opt --fat_apk_cpu=x86,x86_64,arm64-v8a,armeabi-v7a \
  --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
  --define android_dexmerger_tool=d8_dexmerger \
  --define android_incremental_dexing_tool=d8_dexbuilder \
  --define tflite_with_xnnpack=true \
  //tensorflow/lite/java:tensorflow-lite
```
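For a desktop build, which is the case that interests me here, a plausible invocation would target the TensorFlow Lite shared library instead of the Android AAR. This command is not part of the quoted README; the `//tensorflow/lite:tensorflowlite` shared-library target name is an assumption you should verify against your checked-out source tree:
```
bazel build -c opt --define tflite_with_xnnpack=true \
  //tensorflow/lite:tensorflowlite
```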
Note that in this case `Interpreter::SetNumThreads` invocation does not take
effect on number of threads used by XNNPACK engine. In order to specify number
of threads available for XNNPACK engine you should manually pass the value when
constructing the interpreter. The snippet below illustrates this assuming you
are using `InterpreterBuilder` to construct the interpreter:
```c++
// Load model
tflite::Model* model;
...
// Construct the interpreter, passing the number of threads available to the
// XNNPACK engine
tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
TfLiteStatus res =
    tflite::InterpreterBuilder(model, resolver)(&interpreter, num_threads);
```
**XNNPACK engine used by TensorFlow Lite interpreter uses a single thread for
inference by default.**
### Enable XNNPACK via additional dependency
Another way to enable XNNPACK is to build and link the
`//tensorflow/lite:tflite_with_xnnpack` target into your application alongside
the TensorFlow Lite framework.
This method works on platforms which support POSIX-style weak symbols (Android,
iOS, Linux, Mac, but **NOT** Windows).
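As a rough sketch of what this looks like in practice (this snippet is not part of the quoted README; the `my_app` target and its source file are hypothetical, and the `framework` and `builtin_ops` labels are the usual TensorFlow Lite Bazel targets), the dependency would sit in the application's BUILD file alongside the framework:
```
cc_binary(
    name = "my_app",        # hypothetical application target
    srcs = ["my_app.cc"],   # hypothetical source file
    deps = [
        "//tensorflow/lite:framework",            # TensorFlow Lite framework
        "//tensorflow/lite/kernels:builtin_ops",  # builtin operator kernels
        "//tensorflow/lite:tflite_with_xnnpack",  # linking this enables XNNPACK by default
    ],
)
```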
### Enable XNNPACK via low-level delegate API (not recommended)
While it is possible to use low-level delegate API to enable XNNPACK, this
method is **NOT RECOMMENDED** unless you need to use TensorFlow Lite both with
and without XNNPACK (e.g. for benchmarking).
With low-level delegate API users create an XNNPACK delegate with the
`TfLiteXNNPackDelegateCreate` function, and then call
`Interpreter::ModifyGraphWithDelegate` to delegate supported parts of
the model to the XNNPACK delegate. The users must destroy the delegate with
`TfLiteXNNPackDelegateDelete` **after** releasing the TensorFlow Lite
interpreter. The snippet below illustrates the typical usage:
```c++
// Build the interpreter
std::unique_ptr<tflite::Interpreter> interpreter;
...
// IMPORTANT: initialize options with TfLiteXNNPackDelegateOptionsDefault() for
// API-compatibility with future extensions of the TfLiteXNNPackDelegateOptions
// structure.
TfLiteXNNPackDelegateOptions xnnpack_options =
    TfLiteXNNPackDelegateOptionsDefault();
xnnpack_options.num_threads = num_threads;
TfLiteDelegate* xnnpack_delegate =
    TfLiteXNNPackDelegateCreate(&xnnpack_options);
if (interpreter->ModifyGraphWithDelegate(xnnpack_delegate) != kTfLiteOk) {
// Report error and fall back to another delegate, or the default backend
}
// IMPORTANT: AllocateTensors can be called only AFTER ModifyGraphWithDelegate
...
// Run inference using XNNPACK
interpreter->Invoke();
...
// IMPORTANT: release the interpreter before destroying the delegate
interpreter.reset();
TfLiteXNNPackDelegateDelete(xnnpack_delegate);
```
### Using the XNNPACK weights cache
XNNPACK internally packs static weights for operations (like convolutions) in
order to make accessing weights more memory friendly. XNNPACK needs to allocate
memory internally to hold these packed weights. If you are starting multiple
TFLite interpreter instances based on the same model, there can be multiple
copies of the same packed weights in each instance. This can cause high memory
usage. The weights cache can be used to share packed weights between multiple
TFLite instances.
```c++
// Create 2 interpreters which share the same model.
std::unique_ptr<tflite::Interpreter> interpreter1;
std::unique_ptr<tflite::Interpreter> interpreter2;
// Create a weights cache that you can pass to XNNPACK delegate.
TfLiteXNNPackDelegateWeightsCache* weights_cache =
    TfLiteXNNPackDelegateWeightsCacheCreate();
// Like using the low-level API above, initialize options, and pass this cache
// to XNNPACK delegate via the options.
TfLiteXNNPackDelegateOptions xnnpack_options =
    TfLiteXNNPackDelegateOptionsDefault();
xnnpack_options.weights_cache = weights_cache;

// Create one XNNPACK delegate per interpreter from these options and apply it
// with ModifyGraphWithDelegate, as in the low-level API section above. After
// all interpreters and delegates using the cache are destroyed, release it:
TfLiteXNNPackDelegateWeightsCacheDelete(weights_cache);
```
