OpenVINO Configuration Process

The files used during the configuration process are in the "openViNo配置" folder.

Part 1

Step 1. Following the installation requirements at the address below, prepare the hardware environment (a supported Intel CPU model) and the software environment (Microsoft Windows* 10 64-bit):

  1. https://software.intel.com/en-us/articles/OpenVINO-Install-Windows

Step 2. Install the required software:

  1. CMake 3.4
  2. Anaconda (Python 3.7.1)
  3. Visual Studio 2017: in the Individual Components tab, under Compilers, select MSBuild
  4. Finally, install the OpenVINO toolkit

Step 3. Set the environment variables

Option 1: Configure the Model Optimizer for all supported frameworks at the same time:

Option 2: Configure the Model Optimizer for each framework separately:

Here Option 2 is chosen, configuring only the TensorFlow framework (CPU version).

First:

cd C:\Intel\computer_vision_sdk_<version>\deployment_tools\model_optimizer\install_prerequisites

Then:

For TensorFlow:  install_prerequisites_tf.bat

Next:

  1. Open a command prompt window. 
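In that prompt, the OpenVINO environment variables can be initialized by running setupvars.bat from the toolkit's bin folder (the demo batch files used below also call it themselves). Assuming the default 2018-era install path used above, the command is:

C:\Intel\computer_vision_sdk_<version>\bin\setupvars.bat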

Run the Image Classification Demo

  1. Go to the Inference Engine demo directory:

cd  C:\Intel\computer_vision_sdk_<version>\deployment_tools\demo\

Run: demo_squeezenet_download_convert_run.bat

(If an MSBUILD problem occurs, check whether MSBuild.exe actually exists at the path referenced in the .bat file.)

Note: errors related to VS2017 or VS2015 may occur during the run; refer to the following address:

https://blog.youkuaiyun.com/qq_36556893/article/details/81391468

Normally, the cmd window ends with "classification demo completed successfully".

Run the Inference Pipeline Demo

  1. cd  C:\Intel\computer_vision_sdk_<version>\deployment_tools\demo\

Run: demo_security_barrier_camera.bat

Normally an image pops up showing a black car, with the car and its license plate enclosed in bounding boxes.

These are the results of running on the PC.

The following describes configuring and running on the NCS2:

Optional: Additional Installation Steps for Intel® Movidius™ Neural Compute Stick and Neural Compute Stick 2 

For Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2, the OpenVINO™ toolkit provides the Movidius™ VSC driver. To install the driver:

  1. Go to the <INSTALL_DIR>\deployment_tools\inference-engine\external\MovidiusDriver\ directory, where <INSTALL_DIR> is the directory in which the Intel Distribution of OpenVINO toolkit is installed
  2. Right-click the Movidius_VSC_Device.inf file and choose Install from the pop-up menu.

(This configures the USB driver and related settings for the NCS.)

You have installed the driver for your Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2.

  1. Run the two demos on the Neural Compute Stick

Refer to the Linux getting-started guide at https://software.intel.com/en-us/neural-compute-stick/get-started for how the examples are run there.

cd <INSTALL_DIR>/computer_vision_sdk/deployment_tools/demo

Run:

demo_squeezenet_download_convert_run.bat -d MYRIAD

Run:

demo_security_barrier_camera.bat -d MYRIAD

At this point, the official examples have been run on both the PC and the NCS.
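As a side note on what the -d switch does: it only selects the target device by name, while the model and the rest of the pipeline stay the same. Below is a minimal sketch using the newer (2020/2021-era) Inference Engine Core C++ API with hypothetical IR file names; older computer_vision_sdk releases expose a different CNNNetReader/plugin API.

```cpp
#include <ie_core.hpp>
#include <iostream>

int main() {
    InferenceEngine::Core ie;
    // Hypothetical IR file names; substitute the .xml/.bin pair produced by the demo.
    InferenceEngine::CNNNetwork network =
        ie.ReadNetwork("squeezenet1.1.xml", "squeezenet1.1.bin");
    // "CPU" targets the host processor, "MYRIAD" targets the NCS / NCS2,
    // exactly like demo_xxx.bat -d MYRIAD.
    InferenceEngine::ExecutableNetwork executable = ie.LoadNetwork(network, "MYRIAD");
    std::cout << "Model loaded on MYRIAD, ready to create infer requests" << std::endl;
    return 0;
}
```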

Part 2

  1. Model Optimizer Developer Guide

A summary of the steps for optimizing and deploying a trained model:

  1. Configure the Model Optimizer for your framework.
  2. Convert a trained model to produce an optimized Intermediate Representation (IR) of the model based on the trained network topology, weights, and bias values.
  3. Test the model in the Intermediate Representation format using the Inference Engine in the target environment via provided Inference Engine validation application or sample applications.
  4. Integrate the Inference Engine into your application to deploy the model in the target environment. See the Inference Engine Guide.

Model Optimizer Workflow

The Model Optimizer process assumes you have a network model that was trained with a supported framework. The workflow is:

  1. Configure the Model Optimizer for the framework that was used to train the network. To perform this configuration, use the configuration batch file for Windows* OS. The batch file is in: <INSTALL_DIR>/deployment_tools/model_optimizer/install_prerequisites

For Windows* OS:

      install_prerequisites.bat

       For more information about configuring the Model Optimizer, see Configuring the Model Optimizer.

  2. Provide as input a trained model that contains a specific topology and the adjusted weights and biases described in the framework-specific files.
  3. Convert the trained model to an optimized Intermediate Representation.

Conversion with the Model Optimizer (MO) produces two files: a .xml file and a .bin file.

Model Optimizer produces an Intermediate Representation (IR) of the network as output. The Inference Engine reads, loads, and infers the Intermediate Representation. The Inference Engine API offers a unified API across supported Intel® platforms. Intermediate Representation is a pair of files that describe the whole model:

  • .xml: Describes the network topology
  • .bin: Contains the weights and biases binary data
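As a minimal sketch of how the pair is consumed (2020/2021-era Inference Engine C++ API, with hypothetical file names), the .xml is passed as the model path and the .bin as the weights path:

```cpp
#include <ie_core.hpp>
#include <iostream>

int main() {
    InferenceEngine::Core ie;
    // The IR is always the .xml (topology) plus the matching .bin (weights).
    InferenceEngine::CNNNetwork network = ie.ReadNetwork("model.xml", "model.bin");
    // Print the input and output layer names described by the .xml.
    for (const auto& input : network.getInputsInfo())
        std::cout << "Input:  " << input.first << std::endl;
    for (const auto& output : network.getOutputsInfo())
        std::cout << "Output: " << output.first << std::endl;
    return 0;
}
```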

2. Using the Caffe hole-classification model from the AOISpec platform as an example, this section explains the MO model conversion process

(1).Using the Model Optimizer to Convert Caffe* Models

Using Manual Configuration Process

cd <INSTALL_DIR>/deployment_tools/model_optimizer/

To install dependencies only for Caffe:

pip install -r requirements_caffe.txt

Using the protobuf Library on Windows* OS

On Windows, pre-built protobuf packages for Python versions 3.4, 3.5, 3.6 and 3.7 are provided with the installation package and can be found in the <INSTALL_DIR>\deployment_tools\model_optimizer\install_prerequisites folder. 

To install the protobuf package:

  1. Open the command prompt as administrator.
  2. Go to the install_prerequisites folder of the Intel Distribution of OpenVINO toolkit installation directory:

cd <INSTALL_DIR>\deployment_tools\model_optimizer\install_prerequisites

Run the following command to install the protobuf for Python 3.6. If you want to install the protobuf for Python 3.4, 3.5 or 3.7, replace protobuf-3.6.1-py3.6-win-amd64.egg with the corresponding file name found in the install_prerequisites folder.

python -m easy_install protobuf-3.6.1-py3.6-win-amd64.egg
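To confirm that the package is visible to the Anaconda Python environment, a quick sanity check (an assumed extra step, not part of the official guide) is:

python -c "import google.protobuf; print(google.protobuf.__version__)"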

Model preparation: the folder convert/caffe_hole/prototxt_caffemodel contains the .caffemodel and .prototxt files.

(2). Optimizing Your Trained Model

cd  <INSTALL_DIR>/deployment_tools/model_optimizer

Run: python mo.py --input_model <model_dir>\convert\caffe_hole\prototxt_caffemodel\impurity.caffemodel

After the run completes, three files are generated under <INSTALL_DIR>/deployment_tools/model_optimizer: a .xml, a .bin, and a .mapping file.

The same procedure is applied to the model under convert\caffe_mobile_ssd\prototxt_caffemodel\*.caffemodel:

Run: python mo.py --input_model <model_dir>\convert\caffe_mobile_ssd\prototxt_caffemodel\*.caffemodel --mean_values [127.5,127.5,127.5] --input_shape [1,3,300,300]
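For reference (these values are assumptions about the training preprocessing): --mean_values [127.5,127.5,127.5] embeds the per-channel mean subtraction into the generated IR, so each input pixel value is shifted by 127.5 at inference time, and --input_shape [1,3,300,300] fixes the input to batch 1, 3 channels, 300×300, which is the usual MobileNet-SSD input size. Both must match how the Caffe model was actually trained.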

(3).How the Model Optimizer Works

Model Optimizer loads a model into memory, reads it, builds the internal representation of the model, optimizes it, and produces the Intermediate Representation. Intermediate Representation is the only format the Inference Engine accepts.

NOTE: Model Optimizer does not infer models. Model Optimizer is an offline tool that runs before the inference takes place.

Model Optimizer has two main purposes:

  1. Produce a valid Intermediate Representation.
  2. Produce an optimized Intermediate Representation.

(4). Use the source code under the official Inference_engine_sample_2017 directory to run security_barrier_camera_demo (testing from source rather than via the .bat script); the method is similar to the Caffe configuration above.

Path: <INSTALL_DIR>sfs04\Documents\Intel\OpenVINO\Inference_engine_sample_2017

(5). Use the segmentation_demo from the official source code to verify the Caffe model used for 2D-code segmentation and localization.

The model and the test results are under convert\caffe_model_DM:

    Procedure:

a. Convert the model to generate the .xml and .bin files.

b. Modify the source code to take the parameters from the command line.

c. Set up the environment by copying the required DLLs into the executable's directory.

d. Run the demo: the input image is named 0008_box.png and the result is saved as out_0.bmp.
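A typical invocation from the directory that holds the built demo and the copied DLLs might look like the line below; the -i/-m/-d flags follow the usual convention of the OpenVINO demos and the model path is only illustrative, so check the demo's -h output for the exact options of your version:

segmentation_demo.exe -i 0008_box.png -m <model_dir>\convert\caffe_model_DM\<converted_model>.xml -d CPU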

### Configuring the OpenVINO Environment in Visual Studio

#### Install the Dev Tools and the Runtime

To configure OpenVINO successfully, the installation involves two main parts: installing the Dev Tools and setting up the Runtime. Both can be done through the official installer; make sure the complete toolchain option is selected[^1].

#### Create a Project and Configure a Property Sheet

When using Visual Studio 2019 to set up the OpenVINO development environment, proceed as follows. First create a new C++ console application project and name it, for example, `openVINO2021_demo`. When working in Debug mode, right-click the project in Solution Explorer, choose "Properties" from the context menu, and add a new project property sheet to support debugging[^2].

#### Set the Include Directories

The Include path setting is the same for Debug and Release builds. In the project's property window, locate the "Include Directories" field under VC++ Directories and add the following path as a header search location:

```plaintext
E:\OpenVINO\openvino_<version>\deployment_tools\inference_engine\include
```

Here `<version>` should be replaced with the version actually in use, for example `2019.3.334` for an earlier release[^3].

#### Add Library Directories and Link Dependencies

Besides the include folder, the build system must also be told where to find the pre-built .lib files used for static or dynamic linking. In the same property dialog, append the corresponding lib subdirectories under Library Directories, and enter the required runtime library names in the Linker Input field so the final executable can be generated.

Typical settings (assuming an MSVC-based workflow on Windows) look like this:

- **Library Directories**:

```plaintext
E:\OpenVINO\openvino_<version>\deployment_tools\inference_engine\external\tbb\lib;
E:\OpenVINO\openvino_<version>\deployment_tools\inference_engine\lib\intel64;
```

- **Additional Dependencies** (for the linker):

```plaintext
inference_engine_legacy.lib;inference_engine_transformations.lib;tbb.lib;
```

(Depending on the version, the core inference_engine.lib is usually required as well.) Adjust every entry to the actual installation before applying it.

#### Write Test Code to Verify the Environment

Finally, write a short piece of Inference Engine initialization code to confirm that the whole setup works. A basic example:

```cpp
#include <ie_core.hpp>
#include <cstdlib>
#include <iostream>

int main() {
    try {
        InferenceEngine::Core ie;
        // List the devices the Inference Engine can see (e.g. CPU, GPU, MYRIAD).
        std::cout << "Available devices:";
        for (const auto& device : ie.GetAvailableDevices()) {
            std::cout << " " << device;
        }
        std::cout << "\n";
    } catch (const std::exception& e) {
        std::cerr << "Error occurred: " << e.what() << '\n';
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
```

The code above lists the currently available hardware acceleration devices; if the information prints normally, the preparation work is complete.