The OpenVINO Configuration Process

The files used during the configuration process are in the "openViNo配置" folder.

Part One

Step 1: Following the installation requirements at the address below, prepare the hardware environment (a supported Intel CPU model) and the software environment (Microsoft Windows* 10 64-bit):

  1. https://software.intel.com/en-us/articles/OpenVINO-Install-Windows

Step 2: Install the required software:

  1. CMake 3.4
  2. Anaconda (Python 3.7.1)
  3. Visual Studio 2017: under Individual components, select the compiler (MSBuild)
  4. Finally, install the OpenVINO toolkit

Step 3: Set the environment variables and configure the Model Optimizer

Option 1: Configure the Model Optimizer for all supported frameworks at the same time:

Option 2: Configure the Model Optimizer for each framework separately:

Here, Option 2 is chosen to configure the TensorFlow framework (CPU version).

First:

cd C:\Intel\computer_vision_sdk_<version>\deployment_tools\model_optimizer\install_prerequisites

Then:

For TensorFlow:  install_prerequisites_tf.bat

Next:

  1. Open a command prompt window. 

Run the Image Classification Demo

  1. Go to the Inference Engine demo directory:

cd  C:\Intel\computer_vision_sdk_<version>\deployment_tools\demo\

Run: demo_squeezenet_download_convert_run.bat

(If an MSBUILD error occurs, check whether MSBuild.exe actually exists at the path referenced in the .bat file.)

Note: errors related to VS2017 or VS2015 may appear during the run; refer to the following address:

https://blog.youkuaiyun.com/qq_36556893/article/details/81391468

Normally, the cmd window ends with "classification demo completed successfully".

Run the Inference Pipeline Demo

  1. cd  C:\Intel\computer_vision_sdk_<version>\deployment_tools\demo\

Run: demo_security_barrier_camera.bat

Normally, an image pops up in which a black car and its license plate are framed with bounding boxes.

The above are the results of running on the PC.

The following is the process of configuring and running on the NCS2:

Optional: Additional Installation Steps for Intel® Movidius™ Neural Compute Stick and Neural Compute Stick 2 

For Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2, the OpenVINO™ toolkit provides the Movidius™ VSC driver. To install the driver:

  1. Go to the <INSTALL_DIR>\deployment_tools\inference-engine\external\MovidiusDriver\ directory, where <INSTALL_DIR> is the directory in which the Intel Distribution of OpenVINO toolkit is installed.
  2. Right-click the Movidius_VSC_Device.inf file and choose Install from the pop-up menu.

(This installs the USB driver and related setup for the NCS.)

You have installed the driver for your Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2.

  1. Run the two demos on the Neural Compute Stick

Refer to the Linux instructions at https://software.intel.com/en-us/neural-compute-stick/get-started to run the examples.

cd <INSTALL_DIR>/computer_vision_sdk/deployment_tools/demo

Run:

demo_squeezenet_download_convert_run.bat -d MYRIAD

Run:

demo_security_barrier_camera.bat -d MYRIAD

At this point, the official examples have been run on both the PC and the NCS.

Part Two

  1. Model Optimizer Developer Guide

A summary of the steps for optimizing and deploying a trained model:

  1. Configure the Model Optimizer for your framework.
  2. Convert a trained model to produce an optimized Intermediate Representation (IR) of the model based on the trained network topology, weights, and bias values.
  3. Test the model in the Intermediate Representation format using the Inference Engine in the target environment via provided Inference Engine validation application or sample applications.
  4. Integrate the Inference Engine into your application to deploy the model in the target environment. See the Inference Engine Guide.

Model Optimizer Workflow

Model Optimizer process assumes you have a network model that was trained with a supported framework. The workflow is:

  1. Configure the Model Optimizer for the framework that was used to train the network. To perform this configuration, use the configuration batch file for Windows* OS. The batch file is in: <INSTALL_DIR>/deployment_tools/model_optimizer/install_prerequisites

For Windows* OS:

      install_prerequisites.bat

       For more information about configuring the Model Optimizer, see Configuring the Model Optimizer.

  2. Provide as input a trained model that contains a specific topology and the adjusted weights and biases described in the framework-specific files.
  3. Convert the trained model to an optimized Intermediate Representation.

The MO conversion produces two files: a .xml file and a .bin file.

Model Optimizer produces an Intermediate Representation (IR) of the network as output. The Inference Engine reads, loads, and infers the Intermediate Representation. The Inference Engine API offers a unified API across supported Intel® platforms. Intermediate Representation is a pair of files that describe the whole model:

  • .xml: Describes the network topology
  • .bin: Contains the weights and biases binary data
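
To make the .xml/.bin pair concrete, here is a minimal loading-and-inference sketch using the Python Inference Engine API shipped with the toolkit (openvino.inference_engine). It assumes the 2018-era IENetwork/IEPlugin interface; the model and image file names are placeholders and the preprocessing is generic, so adapt both to your own network.

    # Minimal sketch; file names are placeholders, preprocessing is generic.
    import cv2
    import numpy as np
    from openvino.inference_engine import IENetwork, IEPlugin

    net = IENetwork(model="impurity.xml", weights="impurity.bin")  # the IR pair
    plugin = IEPlugin(device="CPU")           # use "MYRIAD" for the NCS/NCS2
    exec_net = plugin.load(network=net)

    input_blob = next(iter(net.inputs))
    n, c, h, w = net.inputs[input_blob].shape

    image = cv2.imread("test.png")
    image = cv2.resize(image, (w, h)).transpose((2, 0, 1))   # HWC -> CHW
    image = image.reshape((n, c, h, w)).astype(np.float32)

    result = exec_net.infer(inputs={input_blob: image})
    print({name: out.shape for name, out in result.items()})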

2. The MO model conversion process, explained using the Caffe hole-classification model from the AOISpec platform as an example

(1). Using the Model Optimizer to Convert Caffe* Models

Using Manual Configuration Process

cd <INSTALL_DIR>/deployment_tools/model_optimizer/

To install dependencies only for Caffe:

pip install -r requirements_caffe.txt

Using the protobuf Library on Windows* OS

On Windows, pre-built protobuf packages for Python versions 3.4, 3.5, 3.6 and 3.7 are provided with the installation package and can be found in the <INSTALL_DIR>\deployment_tools\model_optimizer\install_prerequisites folder. 

To install the protobuf package:

  1. Open the command prompt as administrator.
  2. Go to the install_prerequisites folder of the Intel Distribution of OpenVINO toolkit installation directory:

cd <INSTALL_DIR>\deployment_tools\model_optimizer\install_prerequisites

  3. Run the following command to install the protobuf for Python 3.6. If you want to install the protobuf for Python 3.4, 3.5 or 3.7, replace protobuf-3.6.1-py3.6-win-amd64.egg with the file name matching your Python version.

python -m easy_install protobuf-3.6.1-py3.6-win-amd64.egg
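
As a quick check (not part of the official steps) that the protobuf package is now usable from the Python environment, import it and print its version:

    # Should print 3.6.1 if the egg above was installed for this interpreter.
    import google.protobuf
    print(google.protobuf.__version__)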

Model preparation: the folder convert/caffe_hole/prototxt_caffemodel contains the .caffemodel and .prototxt files.

(2). Optimizing Your Trained Model

cd  <INSTALL_DIR>/deployment_tools/model_optimizer

Run: python mo.py --input_model <model_dir>\convert\caffe_hole\prototxt_caffemodel\impurity.caffemodel

After the run finishes, three files are generated under <INSTALL_DIR>/deployment_tools/model_optimizer: .xml, .bin, and .mapping.

The same operation also applies to the model under convert/caffe_mobile_ssd/prototxt_caffemodel/*.caffemodel:

Run: python mo.py --input_model <model_dir>\convert\caffe_mobile_ssd\prototxt_caffemodel\*.caffemodel --mean_values [127.5,127.5,127.5] --input_shape [1,3,300,300]
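
For reference, here is a sketch of decoding the converted SSD model's detections, assuming the usual DetectionOutput blob layout of [1, 1, N, 7] with rows [image_id, class_id, confidence, x_min, y_min, x_max, y_max] and coordinates normalized to [0, 1]; the threshold and the fake blob below are illustrative only:

    import numpy as np

    def decode_ssd(output_blob, frame_w, frame_h, conf_threshold=0.5):
        # Each row is one detection; a negative image_id marks a padding row.
        boxes = []
        for image_id, class_id, conf, x1, y1, x2, y2 in output_blob.reshape(-1, 7):
            if image_id < 0 or conf < conf_threshold:
                continue
            boxes.append((int(class_id), float(conf),
                          int(x1 * frame_w), int(y1 * frame_h),
                          int(x2 * frame_w), int(y2 * frame_h)))
        return boxes

    # Fake blob with a single detection, only to demonstrate the call:
    fake = np.array([0, 1, 0.9, 0.1, 0.2, 0.4, 0.8], dtype=np.float32).reshape(1, 1, 1, 7)
    print(decode_ssd(fake, frame_w=300, frame_h=300))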

(3). How the Model Optimizer Works

Model Optimizer loads a model into memory, reads it, builds the internal representation of the model, optimizes it, and produces the Intermediate Representation. Intermediate Representation is the only format the Inference Engine accepts.

NOTE: Model Optimizer does not infer models. Model Optimizer is an offline tool that runs before the inference takes place.

Model Optimizer has two main purposes:

  1. Produce a valid Intermediate Representation.
  2. Produce an optimized Intermediate Representation.

(4). Run security_barrier_camera_demo using the source code under the official Inference_engine_sample_2017 directory (testing via the source code rather than the .bat script); this approach is similar to the Caffe configuration method.

Path: <INSTALL_DIR>sfs04\Documents\Intel\OpenVINO\Inference_engine_sample_2017

(5). Use segmentation_demo from the official source code to verify the Caffe model used for 2D-barcode segmentation and localization.

The model and test results are under convert\caffe_model_DM:

Procedure:

a. Convert the model to generate the .xml and .bin files.
b. Modify the source code to take command-line arguments.
c. Configure the environment: copy the required DLLs into the execution directory.
d. Run it; the input image is named 0008_box.png and the result is saved as out_0.bmp.
