author	Billy Lamberta <blamb@google.com>	2018-03-29 13:28:05 -0700
committer	TensorFlower Gardener <gardener@tensorflow.org>	2018-03-29 13:33:32 -0700
commit	0390fbec15f3d99c3badce3d666893ff124f7846 (patch)
tree	ae0439d5e580e26fc6e4c5f3a07f928f212358d6 /tensorflow/docs_src/mobile
parent	405efdd47c20919e5a05c86b0ae2e6c8c150e534 (diff)
Docs: Move TFLite docs into tensorflow.org
PiperOrigin-RevId: 190977057
Diffstat (limited to 'tensorflow/docs_src/mobile')
-rw-r--r--	tensorflow/docs_src/mobile/leftnav_files	1
-rw-r--r--	tensorflow/docs_src/mobile/tflite/demo_android.md	156
-rw-r--r--	tensorflow/docs_src/mobile/tflite/demo_ios.md	2
-rw-r--r--	tensorflow/docs_src/mobile/tflite/devguide.md	224
-rw-r--r--	tensorflow/docs_src/mobile/tflite/index.md	4
5 files changed, 358 insertions, 29 deletions
diff --git a/tensorflow/docs_src/mobile/leftnav_files b/tensorflow/docs_src/mobile/leftnav_files
index 4cf134cc3c..585470d5f0 100644
--- a/tensorflow/docs_src/mobile/leftnav_files
+++ b/tensorflow/docs_src/mobile/leftnav_files
@@ -1,6 +1,7 @@
index.md
### TensorFlow Lite
tflite/index.md
+tflite/devguide.md
tflite/demo_android.md
tflite/demo_ios.md
>>>
diff --git a/tensorflow/docs_src/mobile/tflite/demo_android.md b/tensorflow/docs_src/mobile/tflite/demo_android.md
index c94b5597a6..7f2f8882a2 100644
--- a/tensorflow/docs_src/mobile/tflite/demo_android.md
+++ b/tensorflow/docs_src/mobile/tflite/demo_android.md
@@ -1,42 +1,144 @@
-# TensorFlow Lite Demo for Android
+# Android Demo App
-The TensorFlow Lite demo is a camera app that continuously classifies whatever
-it sees from your device's back camera, using a quantized MobileNet model.
+An example Android application using TensorFlow Lite is available
+[on GitHub](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/java/demo/app).
+The demo is a sample camera app that classifies images continuously
+using either a quantized MobileNet model or a floating point Inception-v3 model.
+To run the demo, a device running Android 5.0 (API 21) or higher is required.
-You'll need an Android device running Android 5.0 or higher to run the demo.
+In the demo app, inference is done using the TensorFlow Lite Java API. The demo
+app classifies frames in real-time, displaying the most probable
+classifications. It also displays the time taken to detect the object.
-To get you started working with TensorFlow Lite on Android, we'll walk you
-through building and deploying our TensorFlow demo app in Android Studio.
+There are three ways to get the demo app to your device:
-Note: For a more detailed guide see the
-[TFLite Codelab](https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2-tflite/index.html#0)
+* Download the [prebuilt binary APK](http://download.tensorflow.org/deps/tflite/TfLiteCameraDemo.apk).
+* Use Android Studio to build the application.
+* Download the source code for TensorFlow Lite and the demo and build it using
+  Bazel.
-It's also possible to build the demo app with Bazel, but we only recommend
-this for advanced users who are very familiar with the Bazel build
-environment. For more information on that, see our page [on Github](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite#building-tensorflow-lite-and-the-demo-app-from-source).
-## Build and deploy with Android Studio
+## Download the pre-built binary
-1. Clone the TensorFlow repository from GitHub if you haven't already:
+The easiest way to try the demo is to download the
+[pre-built binary APK](https://storage.googleapis.com/download.tensorflow.org/deps/tflite/TfLiteCameraDemo.apk).
- git clone https://github.com/tensorflow/tensorflow
+Once the APK is installed, click the app icon to start the program. The first
+time the app is opened, it asks for runtime permissions to access the device
+camera. The demo app opens the back-camera of the device and recognizes objects
+in the camera's field of view. At the bottom of the image (or at the left
+of the image if the device is in landscape mode), it displays the top three
+objects classified and the classification latency.
-2. Install the latest version of Android Studio from [here](https://developer.android.com/studio/index.html).
-3. From the **Welcome to Android Studio** screen, use the **Import Project
- (Gradle, Eclipse ADT, etc)** option to import the
- `tensorflow/contrib/lite/java/demo` directory as an existing Android Studio
- Project.
+## Build in Android Studio with TensorFlow Lite AAR from JCenter
- Android Studio may prompt you to install Gradle upgrades and other tool
- versions; you should accept these upgrades.
+Use Android Studio to try out changes in the project code and compile the demo
+app:
-4. Download the TensorFlow Lite MobileNet model from [here](https://storage.googleapis.com/download.tensorflow.org/models/tflite/mobilenet_v1_224_android_quant_2017_11_08.zip).
+* Install the latest version of
+ [Android Studio](https://developer.android.com/studio/index.html).
+* Make sure the Android SDK version is greater than 26 and the NDK version is
+  greater than 14 (in the Android Studio settings).
+* Import the `tensorflow/contrib/lite/java/demo` directory as a new
+ Android Studio project.
+* Install all the Gradle extensions it requests.
- Unzip this and copy the `mobilenet_quant_v1_224.tflite` file to the assets
- directory: `tensorflow/contrib/lite/java/demo/app/src/main/assets/`
+To get a model, either:
-5. Build and run the app in Android Studio.
+* Download the quantized [Mobilenet TensorFlow Lite model](https://storage.googleapis.com/download.tensorflow.org/models/tflite/mobilenet_v1_224_android_quant_2017_11_08.zip)
+ and unzip and copy `mobilenet_quant_v1_224.tflite` to the assets directory:
+ `tensorflow/contrib/lite/java/demo/app/src/main/assets/`.
+* Or, download the floating point [Inception-v3 model](https://storage.googleapis.com/download.tensorflow.org/models/tflite/inception_v3_slim_2016_android_2017_11_10.zip)
+ and unzip and copy `inceptionv3_non_slim_2015.tflite` to the assets
+ directory. Change the chosen classifier in
+ [Camera2BasicFragment.java](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/java/demo/app/src/main/java/com/example/android/tflitecamerademo/Camera2BasicFragment.java)<br>
+ from: `classifier = new ImageClassifierQuantizedMobileNet(getActivity());`<br>
+ to: `classifier = new ImageClassifierFloatInception(getActivity());`.
-You'll have to grant permissions for the app to use the device's camera. Point
-the camera at various objects and enjoy seeing how the model classifies things!
+Now you can build and run the demo app.
+
+
+## Build TensorFlow Lite and the demo app from source
+
+### Clone the TensorFlow repo
+
+```sh
+git clone https://github.com/tensorflow/tensorflow
+```
+
+### Install Bazel
+
+If `bazel` is not installed on your system, see
+[Installing Bazel](https://bazel.build/versions/master/docs/install.html).
+
+Note: Bazel does not currently support Android builds on Windows. Windows users
+should download the
+[prebuilt binary](https://storage.googleapis.com/download.tensorflow.org/deps/tflite/TfLiteCameraDemo.apk).
+
+### Install Android NDK and SDK
+
+The Android NDK is required to build the native (C/C++) TensorFlow Lite code. The
+current recommended version is *14b* and can be found on the
+[NDK Archives](https://developer.android.com/ndk/downloads/older_releases.html#ndk-14b-downloads)
+page.
+
+The Android SDK and build tools can be
+[downloaded separately](https://developer.android.com/tools/revisions/build-tools.html)
+or used as part of
+[Android Studio](https://developer.android.com/studio/index.html). To build the
+TensorFlow Lite Android demo, the build tools require API >= 23 (but the demo
+will run on devices with API >= 21).
+
+In the root of the TensorFlow repository, update the `WORKSPACE` file with the
+`api_level` and location of the SDK and NDK. If you installed the SDK with
+Android Studio, the SDK path can be found in the SDK manager. The default NDK
+path is `{SDK path}/ndk-bundle`. For example:
+
+```
+android_sdk_repository (
+ name = "androidsdk",
+ api_level = 23,
+ build_tools_version = "23.0.2",
+ path = "/home/xxxx/android-sdk-linux/",
+)
+
+android_ndk_repository(
+ name = "androidndk",
+ path = "/home/xxxx/android-ndk-r10e/",
+ api_level = 19,
+)
+```
+
+Some additional details are available on the
+[TF Lite Android App page](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/java/demo/README.md).
+
+### Build the source code
+
+To build the demo app, run `bazel`:
+
+```
+bazel build --cxxopt=--std=c++11 //tensorflow/contrib/lite/java/demo/app/src/main:TfLiteCameraDemo
+```
+
+Caution: Because of a Bazel bug, building the Android demo app is currently
+only supported within a Python 2 environment.
+
+
+## About the demo
+
+The demo app resizes each camera image frame to 224 * 224 pixels to match the
+quantized MobileNet model (299 * 299 for Inception-v3). The resized image is
+converted, row by row, into a
+[ByteBuffer](https://developer.android.com/reference/java/nio/ByteBuffer.html).
+Its size is 1 * 224 * 224 * 3 bytes, where 1 is the number of images in a
+batch, 224 * 224 (or 299 * 299) is the width and height of the image, and 3
+bytes represent the three colors of a pixel.
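+
+As an illustration, the equivalent preprocessing can be sketched in Python with
+NumPy and Pillow (an assumption for this sketch; the demo itself does this in
+Java):
+
+```python
+import numpy as np
+from PIL import Image
+
+# Resize a frame to the model's input size and pack it into a batch of one;
+# "frame.jpg" is a stand-in for a captured camera frame.
+img = Image.open("frame.jpg").convert("RGB").resize((224, 224))
+buf = np.asarray(img, dtype=np.uint8).reshape(1, 224, 224, 3)
+print(buf.nbytes)  # 1 * 224 * 224 * 3 = 150528 bytes
+```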
+
+This demo uses the TensorFlow Lite Java inference API
+for models which take a single input and provide a single output. The output is
+a two-dimensional array: one row for the single image in the batch, containing
+a confidence value for each of the model's 1001 categories. The app sorts the
+probabilities of all the categories and displays the top three. The model file
+must be downloaded and bundled within the assets directory of the app.
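+
+As a rough sketch of that post-processing (in Python rather than the demo's
+Java), the top three categories can be pulled from a [1, 1001] probability
+array like so:
+
+```python
+import numpy as np
+
+probs = np.random.rand(1, 1001).astype(np.float32)  # stand-in for model output
+top3 = probs[0].argsort()[-3:][::-1]  # indices of the three highest scores
+for i in top3:
+  print("category %d: %.3f" % (i, probs[0][i]))
+```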
diff --git a/tensorflow/docs_src/mobile/tflite/demo_ios.md b/tensorflow/docs_src/mobile/tflite/demo_ios.md
index 3ee9b1cbca..3be21da89f 100644
--- a/tensorflow/docs_src/mobile/tflite/demo_ios.md
+++ b/tensorflow/docs_src/mobile/tflite/demo_ios.md
@@ -1,4 +1,4 @@
-# TensorFlow Lite Demo for iOS
+# iOS Demo App
The TensorFlow Lite demo is a camera app that continuously classifies whatever
it sees from your device's back camera, using a quantized MobileNet model. These
diff --git a/tensorflow/docs_src/mobile/tflite/devguide.md b/tensorflow/docs_src/mobile/tflite/devguide.md
new file mode 100644
index 0000000000..5b521dca7b
--- /dev/null
+++ b/tensorflow/docs_src/mobile/tflite/devguide.md
@@ -0,0 +1,224 @@
+# Developer Guide
+
+Using a TensorFlow Lite model in your mobile app requires multiple
+considerations: you must choose a pre-trained or custom model, convert the model
+to a TensorFlow Lite format, and finally, integrate the model in your app.
+
+## 1. Choose a model
+
+Depending on the use case, you can choose one of the popular open source models,
+such as *InceptionV3* or *MobileNets*, re-train these models with a custom data
+set, or even build your own custom model.
+
+### Use a pre-trained model
+
+[MobileNets](https://research.googleblog.com/2017/06/mobilenets-open-source-models-for.html)
+is a family of mobile-first computer vision models for TensorFlow designed to
+effectively maximize accuracy, while taking into consideration the restricted
+resources for on-device or embedded applications. MobileNets are small,
+low-latency, low-power models parameterized to meet the resource constraints for
+a variety of uses. They can be used for classification, detection, embeddings, and
+segmentation—similar to other popular large scale models, such as
+[Inception](https://arxiv.org/pdf/1602.07261.pdf). Google provides 16 pre-trained
+[ImageNet](http://www.image-net.org/challenges/LSVRC/) classification checkpoints
+for MobileNets that can be used in mobile projects of all sizes.
+
+[Inception-v3](https://arxiv.org/abs/1512.00567) is an image recognition model
+that achieves fairly high accuracy recognizing general objects with 1000 classes,
+for example, "Zebra", "Dalmatian", and "Dishwasher". The model extracts general
+features from input images using a convolutional neural network and classifies
+them based on those features with fully-connected and softmax layers.
+
+[On Device Smart Reply](https://research.googleblog.com/2017/02/on-device-machine-intelligence.html)
+is an on-device model that provides one-touch replies for incoming text messages
+by suggesting contextually relevant messages. The model is built specifically for
+memory constrained devices, such as watches and phones, and has been successfully
+used in Smart Replies on Android Wear. Currently, this model is Android-specific.
+
+These pre-trained models are [available for download](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/g3doc/models.md).
+
+### Re-train Inception-V3 or MobileNet for a custom data set
+
+These pre-trained models were trained on the *ImageNet* data set which contains
+1000 predefined classes. If these classes are not sufficient for your use case,
+the model will need to be re-trained. This technique is called
+*transfer learning* and starts with a model that has already been trained on a
+problem, then retrains it on a similar problem. Deep learning from
+scratch can take days, but transfer learning is fairly quick. In order to do
+this, you need to generate a custom data set labeled with the relevant classes.
+
+The [TensorFlow for Poets](https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/)
+codelab walks through the re-training process step-by-step. The code supports
+both floating point and quantized inference.
+
+### Train a custom model
+
+A developer may choose to train a custom model using TensorFlow (see the
+@{$tutorials} for examples of building and training models). If you have already
+written a model, the first step is to export this to a @{tf.GraphDef} file. This
+is required because some formats do not store the model structure outside the
+code, and a self-contained graph format is needed to communicate the model to
+other parts of the framework. See
+[Exporting the Inference Graph](https://github.com/tensorflow/models/blob/master/research/slim/README.md)
+to create a .pb file for the custom model.
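+
+As a minimal sketch, a graph defined in Python can be serialized to a binary
+.pb file with `tf.train.write_graph` (the graph and file names here are
+illustrative):
+
+```python
+import tensorflow as tf
+
+# A toy graph standing in for a custom model.
+img = tf.placeholder(name="img", dtype=tf.float32, shape=(1, 64, 64, 3))
+out = tf.identity(img * 2.0, name="out")
+
+with tf.Session() as sess:
+  # Write the GraphDef in binary protobuf format.
+  tf.train.write_graph(sess.graph_def, "/tmp", "custom_model.pb", as_text=False)
+```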
+
+TensorFlow Lite currently supports a subset of TensorFlow operators. Refer to the
+[TensorFlow Lite & TensorFlow Compatibility Guide](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/g3doc/tf_ops_compatibility.md)
+for supported operators and their usage. This set of operators will continue to
+grow in future TensorFlow Lite releases.
+
+
+## 2. Convert the model format
+
+The model generated (or downloaded) in the previous step is a *standard*
+TensorFlow model and you should now have a .pb or .pbtxt @{tf.GraphDef} file.
+Models generated with transfer learning (re-training) or custom models must be
+converted, but first the graph must be frozen before conversion to the
+TensorFlow Lite format. This process uses several model formats:
+
+* @{tf.GraphDef} (.pb)—A protobuf that represents the TensorFlow training or
+  computation graph. It contains operators, tensors, and variable definitions.
+* *CheckPoint* (.ckpt)—Serialized variables from a TensorFlow graph. Since this
+  does not contain a graph structure, it cannot be interpreted by itself.
+* `FrozenGraphDef`—A subclass of `GraphDef` that does not contain
+  variables. A `GraphDef` can be converted to a `FrozenGraphDef` by taking a
+  CheckPoint and a `GraphDef`, and converting each variable into a constant
+  using the value retrieved from the CheckPoint.
+* `SavedModel`—A `GraphDef` and CheckPoint with a signature that labels
+  input and output arguments to a model. A `GraphDef` and CheckPoint can be
+  extracted from a `SavedModel`.
+* *TensorFlow Lite model* (.tflite)—A serialized
+  [FlatBuffer](https://google.github.io/flatbuffers/) that contains TensorFlow
+  Lite operators and tensors for the TensorFlow Lite interpreter, similar to a
+  `FrozenGraphDef`.
+
+### Freeze Graph
+
+To use the `GraphDef` .pb file with TensorFlow Lite, you must have checkpoints
+that contain trained weight parameters. The .pb file only contains the structure
+of the graph. The process of merging the checkpoint values with the graph
+structure is called *freezing the graph*.
+
+You should have a folder of checkpoints, or you can download the checkpoints
+for a pre-trained model (for example,
+[MobileNets](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md)).
+
+To freeze the graph, use the following command (changing the arguments):
+
+```
+freeze_graph --input_graph=/tmp/mobilenet_v1_224.pb \
+ --input_checkpoint=/tmp/checkpoints/mobilenet-10202.ckpt \
+ --input_binary=true \
+ --output_graph=/tmp/frozen_mobilenet_v1_224.pb \
+ --output_node_names=MobileNetV1/Predictions/Reshape_1
+```
+
+The `input_binary` flag must be enabled so the protobuf is read and written in
+binary format. Set `input_graph` and `input_checkpoint` to the paths of your
+graph and checkpoint files.
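+
+Freezing can also be done in-process from Python with
+`tf.graph_util.convert_variables_to_constants`. This is a sketch under the
+assumption that your checkpoint has a matching `.meta` file; the paths and
+output node name are placeholders:
+
+```python
+import tensorflow as tf
+
+# Rebuild the graph from the checkpoint's .meta file (a hypothetical path).
+saver = tf.train.import_meta_graph("/tmp/checkpoints/mobilenet-10202.ckpt.meta")
+
+with tf.Session() as sess:
+  saver.restore(sess, "/tmp/checkpoints/mobilenet-10202.ckpt")
+  # Fold the checkpoint variables into constants in the graph.
+  frozen = tf.graph_util.convert_variables_to_constants(
+      sess, sess.graph_def, ["MobileNetV1/Predictions/Reshape_1"])
+  tf.train.write_graph(frozen, "/tmp", "frozen_mobilenet_v1_224.pb",
+                       as_text=False)
+```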
+
+The `output_node_names` may not be obvious outside of the code that built the
+model. The easiest way to find them is to visualize the graph, either with
+[TensorBoard](https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2/#3)
+or `graphviz`.
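+
+Another quick option is to list the node names directly from the `GraphDef`,
+a sketch that assumes a binary .pb file:
+
+```python
+import tensorflow as tf
+
+graph_def = tf.GraphDef()
+with open("/tmp/mobilenet_v1_224.pb", "rb") as f:
+  graph_def.ParseFromString(f.read())
+
+# Print each node's name and op; output nodes are usually near the end.
+for node in graph_def.node:
+  print(node.name, node.op)
+```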
+
+The frozen `GraphDef` is now ready for conversion to the `FlatBuffer` format
+(.tflite) for use on Android or iOS devices. For Android, the TensorFlow
+Optimizing Converter (TOCO) tool supports both float and quantized models. To
+convert the frozen `GraphDef` to the .tflite format:
+
+```
+toco --input_file=$(pwd)/mobilenet_v1_1.0_224/frozen_graph.pb \
+ --input_format=TENSORFLOW_GRAPHDEF \
+ --output_format=TFLITE \
+ --output_file=/tmp/mobilenet_v1_1.0_224.tflite \
+ --inference_type=FLOAT \
+ --input_type=FLOAT \
+ --input_arrays=input \
+ --output_arrays=MobilenetV1/Predictions/Reshape_1 \
+ --input_shapes=1,224,224,3
+```
+
+The `input_file` argument should reference the frozen `GraphDef` file
+containing the model architecture. The [frozen_graph.pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_1.0_224_frozen.tgz)
+file used here is available for download. `output_file` is where the TensorFlow
+Lite model will be generated. The `input_type` and `inference_type`
+arguments should be set to `FLOAT`, unless converting a
+@{$performance/quantization$quantized model}. Setting the `input_arrays`,
+`output_arrays`, and `input_shapes` arguments is not as straightforward. The
+easiest way to find these values is to explore the graph using TensorBoard.
+Reuse the arguments for specifying the output nodes for inference from the
+`freeze_graph` step.
+
+It is also possible to use the TensorFlow Optimizing Converter with protobufs
+from either Python or the command line (see the
+[toco_from_protos.py](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/toco/python/toco_from_protos.py)
+example). This allows you to integrate the conversion step into the model design
+workflow, ensuring the model is easily convertible to a mobile inference graph.
+For example:
+
+```python
+import tensorflow as tf
+
+img = tf.placeholder(name="img", dtype=tf.float32, shape=(1, 64, 64, 3))
+val = img + tf.constant([1., 2., 3.]) + tf.constant([1., 4., 4.])
+out = tf.identity(val, name="out")
+
+with tf.Session() as sess:
+ tflite_model = tf.contrib.lite.toco_convert(sess.graph_def, [img], [out])
+ open("converteds_model.tflite", "wb").write(tflite_model)
+```
+
+For usage, see the TensorFlow Optimizing Converter
+[command-line examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/toco/g3doc/cmdline_examples.md).
+
+Refer to the
+[Ops compatibility guide](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/g3doc/tf_ops_compatibility.md)
+for troubleshooting help, and if that doesn't help, please
+[file an issue](https://github.com/tensorflow/tensorflow/issues).
+
+The [development repo](https://github.com/tensorflow/tensorflow) contains a tool
+to visualize TensorFlow Lite models after conversion. To build and run the
+[visualize.py](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/tools/visualize.py)
+tool:
+
+```sh
+bazel run tensorflow/contrib/lite/tools:visualize -- model.tflite model_viz.html
+```
+
+This generates an interactive HTML page listing subgraphs, operations, and a
+graph visualization.
+
+
+## 3. Use the TensorFlow Lite model for inference in a mobile app
+
+After completing the prior steps, you should now have a .tflite model file.
+
+### Android
+
+Since Android apps are written in Java and the core TensorFlow library is in C++,
+a JNI library is provided as an interface. This is only meant for inference—it
+provides the ability to load a graph, set up inputs, and run the model to
+calculate outputs.
+
+The open source Android demo app uses the JNI interface and is available
+[on GitHub](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/java/demo/app).
+You can also download a
+[prebuilt APK](http://download.tensorflow.org/deps/tflite/TfLiteCameraDemo.apk).
+See the @{$tflite/demo_android} guide for details.
+
+The @{$mobile/android_build} guide has instructions for installing TensorFlow on
+Android and setting up `bazel` and Android Studio.
+
+### iOS
+
+To integrate a TensorFlow model in an iOS app, see the
+[TensorFlow Lite for iOS](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/g3doc/ios.md)
+guide and @{$tflite/demo_ios} guide.
+
+#### Core ML support
+
+Core ML is a machine learning framework used in Apple products. In addition to
+using TensorFlow Lite models directly in your applications, you can convert
+trained TensorFlow models to the
+[CoreML](https://developer.apple.com/machine-learning/) format for use on Apple
+devices. To use the converter, refer to the
+[TensorFlow-CoreML converter documentation](https://github.com/tf-coreml/tf-coreml).
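+
+A conversion call might look like the following sketch (the tensor names and
+shape are placeholders for your own model; check the converter documentation
+for the current API):
+
+```python
+import tfcoreml  # assumes `pip install tfcoreml`
+
+tfcoreml.convert(
+    tf_model_path="/tmp/frozen_mobilenet_v1_224.pb",  # frozen GraphDef
+    mlmodel_path="/tmp/mobilenet.mlmodel",            # Core ML output file
+    output_feature_names=["MobilenetV1/Predictions/Reshape_1:0"],
+    input_name_shape_dict={"input:0": [1, 224, 224, 3]})
+```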
diff --git a/tensorflow/docs_src/mobile/tflite/index.md b/tensorflow/docs_src/mobile/tflite/index.md
index beb24794fc..11f11ea4dc 100644
--- a/tensorflow/docs_src/mobile/tflite/index.md
+++ b/tensorflow/docs_src/mobile/tflite/index.md
@@ -155,7 +155,9 @@ retraining for both floating point and quantized inference.
The following diagram shows the architectural design of TensorFlow Lite:
-![tensorflow lite architecture](https://www.tensorflow.org/images/tflite-architecture.jpg)
+<img src="/images/tflite-architecture.jpg"
+ alt="TensorFlow Lite architecture diagram"
+ style="max-width:600px;">
Starting with a trained TensorFlow model on disk, you'll convert that model to
the TensorFlow Lite file format (`.tflite`) using the TensorFlow Lite