Deploying Object Detection with OpenVINO
搬运大师 | 2020-07-28 22:52:47

Prerequisites
  • Make sure OpenVINO is installed successfully, preferably a version newer than 2017 R3. This article uses 2018 R4.

  • Make sure TensorFlow is installed. This article uses tensorflow-gpu 1.10 built from source (building the latest TensorFlow failed).

  • Ubuntu 16.04. As of this writing, OpenVINO does not support Ubuntu 18.04. (See the environment-check sketch after this list.)
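
Before starting, it may help to verify the environment. This is a minimal sketch, assuming the OpenVINO install path used later in this article (/home/amax/intel/computer_vision_sdk_2018.4.420); adjust the paths to your own machine:

# Load the OpenVINO environment variables (PYTHONPATH, LD_LIBRARY_PATH, ...).
# Path assumed from this article's install location.
source /home/amax/intel/computer_vision_sdk_2018.4.420/bin/setupvars.sh

# Confirm the TensorFlow version (this article uses 1.10 built from source).
python3 -c 'import tensorflow as tf; print(tf.__version__)'

# Confirm the OS release (OpenVINO did not support Ubuntu 18.04 at the time).
lsb_release -rs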

Download the SSD v2 object detection model archive

The SSD model used in this article was obtained from the download link; models downloaded from elsewhere have not been tested. Whether models from the Google Model Zoo work will be tested in a follow-up.
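
For reference, the same ssd_inception_v2_coco_2018_01_28 archive is also published in the TensorFlow detection model zoo. The URL below is an assumption and may differ from the download link above:

# Fetch and unpack the SSD Inception v2 COCO archive (model-zoo URL assumed,
# not taken from this article's original download link).
wget http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2018_01_28.tar.gz
tar -xzf ssd_inception_v2_coco_2018_01_28.tar.gz

# The archive should contain frozen_inference_graph.pb and pipeline.config.
ls ssd_inception_v2_coco_2018_01_28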
Generate the XML intermediate file

Note that Intel's official tutorial invokes ./mo_tf.py, which does not work here (it might work after adding execute permission to mo_tf.py, but that was not tried). Also, running python3 mo_tf.py --input_model ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb by itself raises an error, and adding --input_shape fails as well; so far, only passing both the JSON config and pipeline.config runs without errors.

python3 mo_tf.py --input_model ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb \
    --output=detection_boxes,detection_scores,num_detections \
    --tensorflow_use_custom_operations_config /home/amax/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json \
    --tensorflow_object_detection_api_pipeline_config /home/amax/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/ssd_inception_v2_coco_2018_01_28/pipeline.config

The output is as follows:

Model Optimizer arguments:
Common parameters:
    - Path to the Input Model: /home/amax/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb
    - Path for generated IR: /home/amax/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/.
    - IR output name: frozen_inference_graph
    - Log level: ERROR
    - Batch: Not specified, inherited from the model
    - Input layers: Not specified, inherited from the model
    - Output layers: detection_boxes,detection_scores,num_detections
    - Input shapes: Not specified, inherited from the model
    - Mean values: Not specified
    - Scale values: Not specified
    - Scale factor: Not specified
    - Precision of IR: FP32
    - Enable fusing: True
    - Enable grouped convolutions fusing: True
    - Move mean values to preprocess section: False
    - Reverse input channels: False
TensorFlow specific parameters:
    - Input model in text protobuf format: False
    - Offload unsupported operations: False
    - Path to model dump for TensorBoard: None
    - List of shared libraries with TensorFlow custom layers implementation: None
    - Update the configuration file with input/output node names: None
    - Use configuration file used to generate the model with Object Detection API: /home/amax/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/ssd_inception_v2_coco_2018_01_28/pipeline.config
    - Operations to offload: None
    - Patterns to offload: None
    - Use the config file: /home/amax/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json
Model Optimizer version: 1.4.292.6ef7232d
/home/amax/anaconda3/lib/python3.5/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /home/amax/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/./frozen_inference_graph.xml
[ SUCCESS ] BIN file: /home/amax/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/./frozen_inference_graph.bin
[ SUCCESS ] Total execution time: 20.20 seconds.

The following files are generated in the current directory:

  • frozen_inference_graph.xml

  • frozen_inference_graph.bin

  • frozen_inference_graph.mapping
    The generated files can be downloaded here:
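
As a quick sanity check, the three IR artifacts can be listed from the model_optimizer directory where mo_tf.py was run (a sketch, using this article's paths):

# Confirm the IR files exist and are non-empty.
cd /home/amax/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer
ls -lh frozen_inference_graph.xml frozen_inference_graph.bin frozen_inference_graph.mapping

# The XML holds the network topology; the BIN holds the weights.
head -n 3 frozen_inference_graph.xml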

Run inference with the intermediate files

The script to run is as follows:

Note: the input image can be in JPEG format, but the output is in BMP format.

ssd_bin=/home/amax/inference_engine_samples_build/intel64/Release/object_detection_sample_ssd
network=/home/amax/intel/computer_vision_sdk/deployment_tools/ssd_detect/frozen_inference_graph.xml
${ssd_bin} -i example.bmp -m ${network} -d CPU

The result is as follows:

[ INFO ] InferenceEngine:
    API version ............ 1.4
    Build .................. 17328
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     example.bmp
[ INFO ] Loading plugin
    API version ............ 1.4
    Build .................. lnx_20181004
    Description ....... MKLDNNPlugin
[ INFO ] Loading network files:
    /home/amax/intel/computer_vision_sdk/deployment_tools/ssd_detect/frozen_inference_graph.xml
    /home/amax/intel/computer_vision_sdk/deployment_tools/ssd_detect/frozen_inference_graph.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ WARNING ] Image is resized from (640, 747) to (300, 300)
[ INFO ] Batch size is 1
[ INFO ] Start inference (1 iterations)
[ INFO ] Processing output blobs
[0,1] element, prob = 0.912171    (28.3366,4.0617)-(640,743.86) batch id : 0 WILL BE PRINTED!
[ INFO ] Image out_0.bmp created!
total inference time: 28.7588
Average running time of one iteration: 28.7588 ms
Throughput: 34.772 FPS
[ INFO ] Execution successful
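
The same sample binary can target other OpenVINO device plugins by changing -d. A hedged sketch, reusing the ssd_bin and network variables from the script above (neither device was tested in this article, and MYRIAD additionally requires an FP16 IR, generated with mo_tf.py --data_type FP16):

# Untested here: run the same model on other device plugins.
${ssd_bin} -i example.bmp -m ${network} -d GPU     # Intel integrated GPU (clDNN plugin)
${ssd_bin} -i example.bmp -m ${network} -d MYRIAD  # Movidius stick; needs an FP16 IR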

