Bitnami Docker Image for TensorFlow Inception

What is TensorFlow Inception?

The Inception model is a TensorFlow model for image recognition. It can automatically categorize images based on trained data. For more information, check the tutorial linked below.

The TensorFlow Inception Docker image makes it easy to export Inception data models and query a TensorFlow Serving server that serves the Inception model. For example, it is very easy to start using the already trained data from the ImageNet image database.

https://www.tensorflow.org/tutorials/image_recognition

TL;DR;

Before running the Docker image, you first need to download the Inception model training checkpoint so that it is available to the TensorFlow Serving server:

$ mkdir /tmp/model-data
$ curl -o '/tmp/model-data/inception-v3-2016-03-01.tar.gz' 'http://download.tensorflow.org/models/image/imagenet/inception-v3-2016-03-01.tar.gz'
$ cd /tmp/model-data
$ tar xzf inception-v3-2016-03-01.tar.gz
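
You can check that the download and extraction worked. The second command assumes the archive unpacks into an inception-v3 directory, which matches the default model data folder name listed under Configuration below:

$ ls -l /tmp/model-data
$ ls -l /tmp/model-data/inception-v3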

Docker Compose

$ curl -sSL https://raw.githubusercontent.com/bitnami/bitnami-docker-tensorflow-inception/master/docker-compose.yml > docker-compose.yml
$ docker-compose up -d
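
Once the containers are up, you can confirm the state of the services and tail the client logs:

$ docker-compose ps
$ docker-compose logs tensorflow-inception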

Kubernetes

WARNING: This is a beta configuration, currently unsupported.

Get the raw URL pointing to the kubernetes.yml manifest and use kubectl to create the resources on your Kubernetes cluster like so:

$ kubectl create -f https://raw.githubusercontent.com/bitnami/bitnami-docker-tensorflow-inception/master/kubernetes.yml
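
You can then follow the status of the created resources with standard kubectl commands (the exact resource names depend on the manifest):

$ kubectl get pods
$ kubectl get services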

Why use Bitnami Images?

  • Bitnami closely tracks upstream source changes and promptly publishes new versions of this image using our automated systems.
  • With Bitnami images the latest bug fixes and features are available as soon as possible.
  • Bitnami containers, virtual machines and cloud images use the same components and configuration approach, making it easy to switch between formats based on your project needs.
  • Bitnami images are built on CircleCI and automatically pushed to the Docker Hub.
  • All our images are based on minideb, a minimalist Debian-based container image which gives you a small base image and the familiarity of a leading Linux distribution.

Prerequisites

To run this application you need Docker Engine >= 1.10.0. Docker Compose >= 1.6.0 is recommended.
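
You can check the versions available on your host like so:

$ docker version --format '{{.Server.Version}}'
$ docker-compose version --short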

How to use this image

Run TensorFlow Inception client with TensorFlow Serving

Running the TensorFlow Inception client against a TensorFlow Serving server is the recommended way to use this image. You can either use docker-compose or run the containers manually.

Run the application using Docker Compose

This is the recommended way to run the TensorFlow Inception client. You can use the following docker-compose.yml template:

version: '2'

services:
  tensorflow-serving:
    image: 'bitnami/tensorflow-serving:latest'
    ports:
      - '9000:9000'
    volumes:
      - 'tensorflow_serving_data:/bitnami'
      - '/tmp/model-data/:/bitnami/model-data'
  tensorflow-inception:
    image: 'bitnami/tensorflow-inception:latest'
    volumes:
      - 'tensorflow_inception_data:/bitnami'
      - '/tmp/model-data/:/bitnami/model-data'
    depends_on:
      - tensorflow-serving

volumes:
  tensorflow_serving_data:
    driver: local
  tensorflow_inception_data:
    driver: local

Run the application manually

If you want to run the application manually instead of using docker-compose, these are the basic steps:

  1. Create a new network for the containers:

    $ docker network create tensorflow-tier
    
  2. Start a TensorFlow Serving server on the network you just created:

    $ docker run -d -v /tmp/model-data:/bitnami/model-data -p 9000:9000 --name tensorflow-serving --net tensorflow-tier bitnami/tensorflow-serving:latest
    

    Note: You need to give the container a name in order for the TensorFlow Inception client to resolve the host.

  3. Run the TensorFlow Inception client container:

    $ docker run -d -v /tmp/model-data:/bitnami/model-data --name tensorflow-inception --net tensorflow-tier bitnami/tensorflow-inception:latest
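
To verify that the client can reach the server, you can follow the client container logs (standard docker logs; the exact output depends on the model being served):

    $ docker logs -f tensorflow-inception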
    

Persisting your application

If you remove the container, all your data and configuration will be lost, and the next time you run the image the application will be reinitialized. To avoid this loss of data, you should mount a volume that will persist even after the container is removed.

For persistence you should mount a volume at the /bitnami path. Additionally you should mount a volume for persistence of the TensorFlow Serving configuration.

The above examples define the Docker volumes tensorflow_serving_data and tensorflow_inception_data. The TensorFlow Inception client application state will persist as long as these volumes are not removed.

To avoid inadvertent removal of these volumes you can mount host directories as data volumes. Alternatively you can make use of volume plugins to host the volume data.
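
You can list and inspect the named volumes with the standard Docker volume commands:

$ docker volume ls
$ docker volume inspect tensorflow_inception_data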

Mount host directories as data volumes with Docker Compose

This requires a minor change to the docker-compose.yml template previously shown:

version: '2'

services:
  tensorflow-serving:
    image: 'bitnami/tensorflow-serving:latest'
    ports:
      - '9000:9000'
    volumes:
      - '/path/to/tensorflow-serving-persistence:/bitnami'

  tensorflow-inception:
    image: 'bitnami/tensorflow-inception:latest'
    depends_on:
      - tensorflow-serving
    volumes:
      - '/path/to/tensorflow-inception-persistence:/bitnami'

Mount host directories as data volumes using the Docker command line

  1. Create a network (if it does not exist):

    $ docker network create tensorflow-tier
    
  2. Create a TensorFlow Serving container with host volumes:

    $ docker run -d --name tensorflow-serving -p 9000:9000 \
     --net tensorflow-tier \
     --volume /path/to/tensorflow-serving-persistence:/bitnami \
     --volume /path/to/model_data:/bitnami/model-data \
     bitnami/tensorflow-serving:latest
    

    Note: You need to give the container a name in order for the TensorFlow Inception client to resolve the host.

  3. Create the TensorFlow Inception client container with host volumes:

    $ docker run -d --name tensorflow-inception \
     --net tensorflow-tier \
     --volume /path/to/tensorflow-inception-persistence:/bitnami \
     --volume /path/to/model_data:/bitnami/model-data \
     bitnami/tensorflow-inception:latest
    

Upgrade this application

Bitnami provides up-to-date versions of TensorFlow Serving and the TensorFlow Inception client, including security patches, soon after they are made available upstream. We recommend that you follow these steps to upgrade your container. Here we cover the upgrade of the TensorFlow Inception client container. For the TensorFlow Serving upgrade, see https://github.com/bitnami/bitnami-docker-tensorflow-serving/blob/master/README.md#upgrade-this-image

  1. Get the updated images:

    $ docker pull bitnami/tensorflow-inception:latest
    
  2. Stop your container

    • For docker-compose: $ docker-compose stop tensorflow-inception
    • For manual execution: $ docker stop tensorflow-inception
  3. Take a snapshot of the application state

$ rsync -a tensorflow-inception-persistence tensorflow-inception-persistence.bkp.$(date +%Y%m%d-%H.%M.%S)

Additionally, snapshot the TensorFlow Serving data.
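
For example, mirroring the command above (this assumes the TensorFlow Serving state lives in a tensorflow-serving-persistence directory, as in the host-mount examples):

$ rsync -a tensorflow-serving-persistence tensorflow-serving-persistence.bkp.$(date +%Y%m%d-%H.%M.%S)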

You can use these snapshots to restore the application state should the upgrade fail.

  4. Remove the currently running container

    • For docker-compose: $ docker-compose rm tensorflow-inception
    • For manual execution: $ docker rm tensorflow-inception
  5. Run the new image

    • For docker-compose: $ docker-compose up -d tensorflow-inception
    • For manual execution (mount the directories and network if needed): $ docker run -d --name tensorflow-inception bitnami/tensorflow-inception:latest

Configuration

Environment variables

When you start the tensorflow-inception image, you can adjust the configuration of the instance by passing one or more environment variables, either in the docker-compose file or on the docker run command line. If you want to add a new environment variable:

  • For docker-compose add the variable name and value under the application section:
tensorflow-inception:
  image: bitnami/tensorflow-inception:latest
  environment:
    - TENSORFLOW_INCEPTION_MODEL_INPUT_DATA_NAME=my_custom_data
  volumes:
    - tensorflow_inception_data:/bitnami
  • For manual execution add a -e option with each variable and value:

    $ docker run -d --name tensorflow-inception \
     --net tensorflow-tier \
     -e TENSORFLOW_INCEPTION_MODEL_INPUT_DATA_NAME=my_custom_data \
     --volume /path/to/tensorflow-inception-persistence:/bitnami \
     bitnami/tensorflow-inception:latest
    

Available variables:

  • TENSORFLOW_SERVING_HOST: Hostname for Tensorflow-Serving server. Default: tensorflow-serving
  • TENSORFLOW_SERVING_PORT_NUMBER: Port used by Tensorflow-Serving server. Default: 9000
  • TENSORFLOW_INCEPTION_MODEL_INPUT_DATA_NAME: Folder containing the data model to export. Default: inception-v3
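
For example, to point the client at a TensorFlow Serving server reachable under a different hostname and port, you could override the defaults (my-serving-host and 9500 are hypothetical values for illustration):

$ docker run -d --name tensorflow-inception \
 --net tensorflow-tier \
 -e TENSORFLOW_SERVING_HOST=my-serving-host \
 -e TENSORFLOW_SERVING_PORT_NUMBER=9500 \
 bitnami/tensorflow-inception:latest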

Contributing

We'd love for you to contribute to this container. You can request new features by creating an issue, or submit a pull request with your contribution.

Issues

If you encounter a problem running this container, you can file an issue. For us to provide better support, be sure to include the following information in your issue:

  • Host OS and version
  • Docker version ($ docker version)
  • Output of $ docker info
  • Version of this container ($ echo $BITNAMI_IMAGE_VERSION inside the container; see the example below)
  • The command you used to run the container, and any relevant output you saw (masking any sensitive information)
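
For example, you can retrieve the container version from a running container with docker exec (assuming it is named tensorflow-inception as in the examples above):

$ docker exec tensorflow-inception bash -c 'echo $BITNAMI_IMAGE_VERSION'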

Community

Most real time communication happens in the #containers channel at bitnami-oss.slack.com; you can sign up at slack.oss.bitnami.com.

Discussions are archived at bitnami-oss.slackarchive.io.

License

Copyright (c) 2017 Bitnami

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
