openvino/openvino_tensorflow_ubuntu18_runtime

By openvino


OpenVINO™ integration with TensorFlow runtime Docker images for Ubuntu* 18.04 LTS


Latest tag

  • 2.2.0, latest

About OpenVINO™ integration with TensorFlow

OpenVINO™ integration with TensorFlow is designed for TensorFlow* developers who want to get started with OpenVINO™ in their inferencing applications. It lets them take advantage of OpenVINO™ toolkit optimizations across a wide range of Intel® compute devices by adding just two lines of code to a TensorFlow inference application:

import openvino_tensorflow
openvino_tensorflow.set_backend('<backend_name>')
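
For instance, a minimal sketch of an accelerated inference script (the package also provides list_backends() for discovering available devices; the MobileNetV2 model and the random input here are just illustrations):

import tensorflow as tf
import openvino_tensorflow

# Discover which OpenVINO backends are available on this machine
# (typically CPU, plus GPU, MYRIAD, or VAD-M when the hardware is present).
print(openvino_tensorflow.list_backends())

# Route supported TensorFlow operations through the OpenVINO CPU backend.
openvino_tensorflow.set_backend('CPU')

# Run inference as usual; clusters of supported ops now execute via OpenVINO.
model = tf.keras.applications.MobileNetV2(weights='imagenet')
image = tf.random.uniform((1, 224, 224, 3))  # placeholder input image
print(model(image).shape)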

This product delivers OpenVINO™ inline optimizations which enhance inferencing performance with minimal code modifications. OpenVINO™ integration with TensorFlow accelerates inference across many AI models on a variety of Intel® silicon, such as:

  • Intel® CPUs
  • Intel® integrated GPUs
  • Intel® Movidius™ Vision Processing Units - referred to as VPU
  • Intel® Vision Accelerator Design with 8 Intel Movidius™ MyriadX VPUs - referred to as VAD-M or HDDL

[Note: For maximum performance, efficiency, tooling customization, and hardware control, we recommend that developers adopt the native OpenVINO™ APIs and runtime.]

GitHub: https://github.com/openvinotoolkit/openvino_tensorflow/

Documentation: https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/docs

Dockerfiles to build this image can be found at: https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/docker

OpenVINO™ integration with TensorFlow Runtime Docker image for Ubuntu* 18.04 LTS

This image, tagged 2.2.0, contains all required runtime Python packages and shared libraries to run a TensorFlow Python application with the OpenVINO™ backend on CPU, GPU, VPU, and VAD-M. By default, it hosts a Jupyter server with an Image Classification sample and an Object Detection sample that demonstrate the performance benefits of using OpenVINO™ integration with TensorFlow.

Launch the Jupyter server with CPU access:

docker run -it --rm \
	   -p 8888:8888 \
	   openvino/openvino_tensorflow_ubuntu18_runtime:2.2.0
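
Once the container starts, the Jupyter server prints a URL containing an access token; open it in a browser on the host (the -p 8888:8888 mapping makes it reachable at http://localhost:8888) to get to the sample notebooks.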

Launch the Jupyter server with iGPU access:

docker run -it --rm \
	   -p 8888:8888 \
	   --device-cgroup-rule='c 189:* rmw' \
	   --device /dev/dri:/dev/dri \
	   openvino/openvino_tensorflow_ubuntu18_runtime:2.2.0

Launch the Jupyter server with MYRIAD access:

docker run -it --rm \
	   -p 8888:8888 \
	   --device-cgroup-rule='c 189:* rmw' \
	   -v /dev/bus/usb:/dev/bus/usb \
	   openvino/openvino_tensorflow_ubuntu18_runtime:2.2.0

Launch the Jupyter server with VAD-M access:

docker run -itu root:root --rm \
	   -p 8888:8888 \
	   --device-cgroup-rule='c 189:* rmw' \
	   --mount type=bind,source=/var/tmp,destination=/var/tmp \
	   --device /dev/ion:/dev/ion \
	   -v /dev/bus/usb:/dev/bus/usb \
	   openvino/openvino_tensorflow_ubuntu18_runtime:2.2.0

Run the image with /bin/bash as the runtime target to get a container shell with CPU, iGPU, and MYRIAD device access:

docker run -itu root:root --rm \
	   -p 8888:8888 \
	   --device-cgroup-rule='c 189:* rmw' \
	   --device /dev/dri:/dev/dri \
	   --mount type=bind,source=/var/tmp,destination=/var/tmp \
	   -v /dev/bus/usb:/dev/bus/usb \
	   openvino/openvino_tensorflow_ubuntu18_runtime:2.2.0 /bin/bash
Run the image on Windows* OS

The image can also run on Windows* OS with OpenVINO™ backend support for CPU and iGPU.

Launch the Jupyter server with CPU access:

docker run -it --rm \
	   -p 8888:8888 \
	   openvino/openvino_tensorflow_ubuntu18_runtime:2.2.0

Launch the Jupyter server with iGPU access:

Prerequisites:

  • Windows* 10 21H2 or Windows* 11 with WSL-2

  • Intel iGPU driver >= 30.0.100.9684

docker run -it --rm \
	   -p 8888:8888 \
	   --device /dev/dxg:/dev/dxg \
	   --volume /usr/lib/wsl:/usr/lib/wsl \
	   openvino/openvino_tensorflow_ubuntu18_runtime:2.2.0

OpenVINO™ integration with TensorFlow Runtime Docker image for TF-Serving on Ubuntu* 18.04 LTS

This image, tagged 2.2.0-serving, provides TensorFlow Serving out of the box, making it easy to deploy new models and experiments. The tensorflow_model_server executable in this image is built with OpenVINO™ and provides performance benefits on Intel® backends including CPU, GPU, VPU, and VAD-M.

Here is an example that serves a ResNet50 model using this image, together with a client script that performs inference on the model through the REST API (TensorFlow Serving exposes its REST endpoint on port 8501, which the commands below publish).

  1. Download the ResNet50 model from TF Hub and untar its contents into the folder resnet_v2_50_classification/5, for example as sketched below.
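
     One way to fetch and unpack the model, as a sketch (this assumes the google/imagenet/resnet_v2_50/classification/5 model handle on TF Hub; any SavedModel placed under <model_name>/<version> works):

     import tarfile
     import urllib.request

     # TF Hub serves a gzipped tarball of the SavedModel when the
     # compressed format is requested.
     url = ("https://tfhub.dev/google/imagenet/resnet_v2_50/classification/5"
            "?tf-hub-format=compressed")
     urllib.request.urlretrieve(url, "resnet.tar.gz")

     # TensorFlow Serving expects models under <model_name>/<version>/.
     with tarfile.open("resnet.tar.gz") as tar:
         tar.extractall("resnet_v2_50_classification/5")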

  2. Start serving container for the resnet50 model:

    To run on CPU backend:

     docker run -it --rm \
     	   -p 8501:8501 \
     	   -v <path to resnet_v2_50_classification>:/models/resnet \
     	   -e MODEL_NAME=resnet \
     	   openvino/openvino_tensorflow_ubuntu18_runtime:2.2.0-serving
    

    To run on iGPU:

     docker run -it --rm \
     	   -p 8501:8501 \
     	   --device-cgroup-rule='c 189:* rmw' \
     	   --device /dev/dri:/dev/dri \
     	   -v <path to resnet_v2_50_classification>:/models/resnet \
     	   -e MODEL_NAME=resnet \
     	   -e OPENVINO_TF_BACKEND=GPU \
     	   openvino/openvino_tensorflow_ubuntu18_runtime:2.2.0-serving
    

    To run on MYRIAD:

     docker run -it --rm \
     	   -p 8501:8501 \
     	   --device-cgroup-rule='c 189:* rmw' \
     	   -v /dev/bus/usb:/dev/bus/usb \
     	   -v <path to resnet_v2_50_classification>:/models/resnet \
     	   -e MODEL_NAME=resnet \
     	   -e OPENVINO_TF_BACKEND=MYRIAD \
     	   openvino/openvino_tensorflow_ubuntu18_runtime:2.2.0-serving
    

    To run on VAD-M:

     docker run -itu root:root --rm \
     	   -p 8501:8501 \
     	   --device-cgroup-rule='c 189:* rmw' \
     	   -v /dev/bus/usb:/dev/bus/usb \
     	   --mount type=bind,source=/var/tmp,destination=/var/tmp \
     	   --device /dev/ion:/dev/ion \
     	   -v <path to resnet_v2_50_classification>:/models/resnet \
     	   -e MODEL_NAME=resnet \
     	   -e OPENVINO_TF_BACKEND=VAD-M \
     	   openvino/openvino_tensorflow_ubuntu18_runtime:2.2.0-serving
    
  3. Run the client script to send an inference request and get predictions from the server; a sketch of what the client does appears after this list.

     wget https://raw.githubusercontent.com/tensorflow/serving/master/tensorflow_serving/example/resnet_client.py
     python resnet_client.py
    
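In essence, resnet_client.py fetches a test image, preprocesses it, and posts it to the model's REST predict endpoint. A minimal sketch of the same flow (assumptions: the server from step 2 is listening on localhost:8501, the requests, numpy, and Pillow packages are installed, and the resnet_v2_50 model expects 224x224 RGB input scaled to [0, 1]):

import json

import numpy as np
import requests
from PIL import Image

# Fetch a test image and preprocess it into a [1, 224, 224, 3] float batch.
IMAGE_URL = "https://tensorflow.org/images/blogs/serving/cat.jpg"
image = Image.open(requests.get(IMAGE_URL, stream=True).raw).resize((224, 224))
batch = (np.array(image, dtype=np.float32) / 255.0)[np.newaxis, ...]

# POST to TensorFlow Serving's REST predict endpoint for the model "resnet".
response = requests.post(
    "http://localhost:8501/v1/models/resnet:predict",
    data=json.dumps({"instances": batch.tolist()}),
)
response.raise_for_status()
predictions = np.array(response.json()["predictions"])
print("Predicted class:", predictions.argmax(axis=-1))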

All environment variables that apply during the execution of OpenVINO™ integration with TensorFlow are applicable when running through containers as well. For example, to disable OpenVINO™ integration with TensorFlow while starting a TensorFlow Serving container, simply pass OPENVINO_TF_DISABLE=1 as one of the environment variables of the docker run command. See USAGE.md for more such environment variables.

	docker run -it --rm \
		   -p 8501:8501 \
		   -v <path to resnet_v2_50_classification>:/models/resnet \
		   -e MODEL_NAME=resnet \
		   -e OPENVINO_TF_DISABLE=1 \
		   openvino/openvino_tensorflow_ubuntu18_runtime:2.2.0-serving

Licenses

Copyright © 2022 Intel Corporation

These OpenVINO™ integration with TensorFlow images are licensed under the Apache License, Version 2.0.

* Other names and brands may be claimed as the property of others.

Docker Pull Command

docker pull openvino/openvino_tensorflow_ubuntu18_runtime
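
To pull a specific tag, append it to the image name (latest currently points at 2.2.0):

docker pull openvino/openvino_tensorflow_ubuntu18_runtime:2.2.0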