author    Nupur Garg <nupurgarg@google.com>  2018-10-09 11:03:57 -0700
committer TensorFlower Gardener <gardener@tensorflow.org>  2018-10-09 11:08:47 -0700
commit    1e4a3baad388b5d5250efdb19f91d5b670816fbe (patch)
tree      278f4cc3bd9e9f50bc26d288a6b2851a4ae9858b
parent    3e8af7ea6b70104b05be22797451d0218c9e5262 (diff)
Update TFLite Converter documentation.
PiperOrigin-RevId: 216386450
-rw-r--r--  tensorflow/contrib/lite/toco/README.md                   9
-rw-r--r--  tensorflow/contrib/lite/toco/g3doc/cmdline_examples.md  66
-rw-r--r--  tensorflow/contrib/lite/toco/g3doc/cmdline_reference.md  8
-rw-r--r--  tensorflow/contrib/lite/toco/g3doc/python_api.md        95
4 files changed, 93 insertions(+), 85 deletions(-)
diff --git a/tensorflow/contrib/lite/toco/README.md b/tensorflow/contrib/lite/toco/README.md
index 2db6a627ab..91f6f618a3 100644
--- a/tensorflow/contrib/lite/toco/README.md
+++ b/tensorflow/contrib/lite/toco/README.md
@@ -1,6 +1,6 @@
-# TOCO: TensorFlow Lite Optimizing Converter
+# TensorFlow Lite Converter
-The TensorFlow Lite Optimizing Converter converts TensorFlow graphs into
+The TensorFlow Lite Converter converts TensorFlow graphs into
TensorFlow Lite graphs. There are additional usages that are also detailed in
the usage documentation.
@@ -14,9 +14,10 @@ Usage information is given in these documents:
## Where the converter fits in the TensorFlow landscape
-Once an application developer has a trained TensorFlow model, TOCO will accept
+Once an application developer has a trained TensorFlow model, the TensorFlow
+Lite Converter will accept
that model and generate a TensorFlow Lite
-[FlatBuffer](https://google.github.io/flatbuffers/) file. TOCO currently supports
+[FlatBuffer](https://google.github.io/flatbuffers/) file. The converter currently supports
[SavedModels](https://www.tensorflow.org/guide/saved_model#using_savedmodel_with_estimators),
frozen graphs (models generated via
[freeze_graph.py](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py)),
diff --git a/tensorflow/contrib/lite/toco/g3doc/cmdline_examples.md b/tensorflow/contrib/lite/toco/g3doc/cmdline_examples.md
index aba7536cbd..e3c46eb377 100644
--- a/tensorflow/contrib/lite/toco/g3doc/cmdline_examples.md
+++ b/tensorflow/contrib/lite/toco/g3doc/cmdline_examples.md
@@ -1,7 +1,7 @@
-# TensorFlow Lite Optimizing Converter command-line examples
+# TensorFlow Lite Converter command-line examples
-This page provides examples on how to use TOCO via command line. It is
-complemented by the following documents:
+This page shows how to use the TensorFlow Lite Converter from the command line. It
+is complemented by the following documents:
* [README](../README.md)
* [Command-line glossary](cmdline_reference.md)
@@ -10,7 +10,7 @@ complemented by the following documents:
Table of contents:
* [Command-line tools](#tools)
- * [Converting models prior to TensorFlow 1.9.](#pre-tensorflow-1.9)
+ * [Converting models prior to TensorFlow 1.9](#pre-tensorflow-1.9)
* [Basic examples](#basic)
* [Convert a TensorFlow GraphDef](#graphdef)
* [Convert a TensorFlow SavedModel](#savedmodel)
@@ -31,27 +31,28 @@ Table of contents:
## Command-line tools <a name="tools"></a>
-There are two approaches to running TOCO via command line.
+There are two approaches to running the converter from the command line.
* `tflite_convert`: Starting from TensorFlow 1.9, the command-line tool
- `tflite_convert` will be installed as part of the Python package. All of the
+ `tflite_convert` is installed as part of the Python package. All of the
examples below use `tflite_convert` for simplicity.
* Example: `tflite_convert --output_file=...`
-* `bazel`: In order to run the latest version of TOCO, [clone the TensorFlow
- repository](https://www.tensorflow.org/install/source)
- and use `bazel`. This is the recommended approach for converting models that
- utilize new features that were not supported by TOCO in TensorFlow 1.9.
+* `bazel`: In order to run the latest version of the TensorFlow Lite Converter,
+ either install the nightly build using
+ [pip](https://www.tensorflow.org/install/pip) or
+ [clone the TensorFlow repository](https://www.tensorflow.org/install/source)
+ and use `bazel`.
* Example: `bazel run
//tensorflow/contrib/lite/python:tflite_convert --
--output_file=...`
-### Converting models prior to TensorFlow 1.9. <a name="pre-tensorflow-1.9"></a>
+### Converting models prior to TensorFlow 1.9 <a name="pre-tensorflow-1.9"></a>
-The recommended approach for using TOCO prior to TensorFlow 1.9 is the [Python
-API](python_api.md#pre-tensorflow-1.9). If a command line tool is desired, the
-`toco` command line tool was available in TensorFlow 1.7. Enter `toco --help` in
-Terminal for additional details on the command-line flags available. There were
-no command line tools in TensorFlow 1.8.
+The recommended approach for using the converter prior to TensorFlow 1.9 is the
+[Python API](python_api.md#pre-tensorflow-1.9). If a command line tool is
+desired, the `toco` command line tool was available in TensorFlow 1.7. Enter
+`toco --help` in Terminal for additional details on the command-line flags
+available. There were no command line tools in TensorFlow 1.8.
## Basic examples <a name="basic"></a>
@@ -117,9 +118,9 @@ tflite_convert \
### Convert a TensorFlow GraphDef for quantized inference <a name="graphdef-quant"></a>
-TOCO is compatible with fixed point quantization models described
-[here](https://www.tensorflow.org/performance/quantization). These are float
-models with
+The TensorFlow Lite Converter is compatible with fixed point quantization models
+described [here](https://www.tensorflow.org/performance/quantization). These are
+float models with
[`FakeQuant*`](https://www.tensorflow.org/api_guides/python/array_ops#Fake_quantization)
ops inserted at the boundaries of fused layers to record min-max range
information. This generates a quantized inference workload that reproduces the
@@ -141,12 +142,12 @@ tflite_convert \
### Use "dummy-quantization" to try out quantized inference on a float graph <a name="dummy-quant"></a>
-In order to evaluate the possible benefit of generating a quantized graph, TOCO
-allows "dummy-quantization" on float graphs. The flags `--default_ranges_min`
-and `--default_ranges_max` accept plausible values for the min-max ranges of the
-values in all arrays that do not have min-max information. "Dummy-quantization"
-will produce lower accuracy but will emulate the performance of a correctly
-quantized model.
+In order to evaluate the possible benefit of generating a quantized graph, the
+converter allows "dummy-quantization" on float graphs. The flags
+`--default_ranges_min` and `--default_ranges_max` accept plausible values for
+the min-max ranges of the values in all arrays that do not have min-max
+information. "Dummy-quantization" will produce lower accuracy but will emulate
+the performance of a correctly quantized model.
The example below contains a model using Relu6 activation functions. Therefore,
a reasonable guess is that most activation ranges should be contained in [0, 6].
@@ -207,10 +208,10 @@ tflite_convert \
### Specifying subgraphs
Any array in the input file can be specified as an input or output array in
-order to extract subgraphs out of an input graph file. TOCO discards the parts
-of the graph outside of the specific subgraph. Use [graph
-visualizations](#graph-visualizations) to identify the input and output arrays
-that make up the desired subgraph.
+order to extract subgraphs out of an input graph file. The TensorFlow Lite
+Converter discards the parts of the graph outside of the specified subgraph. Use
+[graph visualizations](#graph-visualizations) to identify the input and output
+arrays that make up the desired subgraph.
The following command shows how to extract a single fused layer out of a TensorFlow
GraphDef.
@@ -247,9 +248,10 @@ function tends to get fused).
## Graph visualizations
-TOCO can export a graph to the Graphviz Dot format for easy visualization via
-either the `--output_format` flag or the `--dump_graphviz_dir` flag. The
-subsections below outline the use cases for each.
+The converter can export a graph to the Graphviz Dot format for easy
+visualization using either the `--output_format` flag or the
+`--dump_graphviz_dir` flag. The subsections below outline the use cases for
+each.
### Using `--output_format=GRAPHVIZ_DOT` <a name="using-output-format-graphviz-dot"></a>
diff --git a/tensorflow/contrib/lite/toco/g3doc/cmdline_reference.md b/tensorflow/contrib/lite/toco/g3doc/cmdline_reference.md
index 00bc8d4ccb..31200fd657 100644
--- a/tensorflow/contrib/lite/toco/g3doc/cmdline_reference.md
+++ b/tensorflow/contrib/lite/toco/g3doc/cmdline_reference.md
@@ -1,8 +1,8 @@
-# TensorFlow Lite Optimizing Converter command-line glossary
+# TensorFlow Lite Converter command-line glossary
-This page is complete reference of command-line flags used by TOCO's command
-line starting from TensorFlow 1.9 up until the most recent build of TensorFlow.
-It is complemented by the following other documents:
+This page is a complete reference of command-line flags used by the TensorFlow
+Lite Converter's command line starting from TensorFlow 1.9 up until the most
+recent build of TensorFlow. It is complemented by the following other documents:
* [README](../README.md)
* [Command-line examples](cmdline_examples.md)
diff --git a/tensorflow/contrib/lite/toco/g3doc/python_api.md b/tensorflow/contrib/lite/toco/g3doc/python_api.md
index 8c31c3dca8..1f741360c6 100644
--- a/tensorflow/contrib/lite/toco/g3doc/python_api.md
+++ b/tensorflow/contrib/lite/toco/g3doc/python_api.md
@@ -1,7 +1,8 @@
-# TensorFlow Lite Optimizing Converter & Interpreter Python API reference
+# TensorFlow Lite Converter & Interpreter Python API reference
-This page provides examples on how to use TOCO and the TensorFlow Lite
-interpreter via the Python API. It is complemented by the following documents:
+This page provides examples on how to use the TensorFlow Lite Converter and the
+TensorFlow Lite interpreter using the Python API. It is complemented by the
+following documents:
* [README](../README.md)
* [Command-line examples](cmdline_examples.md)
@@ -23,39 +24,35 @@ Table of contents:
* [Using the interpreter from model data](#interpreter-data)
* [Additional instructions](#additional-instructions)
* [Build from source code](#latest-package)
- * [Converting models prior to TensorFlow 1.9.](#pre-tensorflow-1.9)
+ * [Converting models in TensorFlow 1.9 to TensorFlow 1.11](#pre-tensorflow-1.11)
+ * [Converting models prior to TensorFlow 1.9](#pre-tensorflow-1.9)
## High-level overview
-While the TensorFlow Lite Optimizing Converter can be used from the command
-line, it is often convenient to use it as part of a Python model build and
-training script. This is so that conversion can be part of your model
-development pipeline. This allows you to know early and often that you are
-designing a model that can be targeted to devices with mobile.
+While the TensorFlow Lite Converter can be used from the command line, it is
+often convenient to use in a Python script as part of the model development
+pipeline. This allows you to know early that you are designing a model that can
+be targeted to mobile devices.
## API
The API for converting TensorFlow models to TensorFlow Lite as of TensorFlow 1.9
-is `tf.contrib.lite.TocoConverter`. The API for calling the Python intepreter is
-`tf.contrib.lite.Interpreter`.
-
-**NOTE**: As of TensorFlow 1.12, the API for converting TensorFlow models to
-TFLite will be renamed to `TFLiteConverter`. `TFLiteConverter` is semantically
-identically to `TocoConverter`. The API is available at
-`tf.contrib.lite.TFLiteConverter` as of the Sept 26 `tf-nightly`.
-
-`TocoConverter` provides class methods based on the original format of the
-model. `TocoConverter.from_session()` is available for GraphDefs.
-`TocoConverter.from_saved_model()` is available for SavedModels.
-`TocoConverter.from_keras_model_file()` is available for `tf.Keras` files.
+is `tf.contrib.lite.TFLiteConverter`. The API for calling the Python interpreter
+is `tf.contrib.lite.Interpreter`.
+
+Note: See the "Additional instructions" sections for converting TensorFlow
+models to TensorFlow Lite
+[in TensorFlow 1.9 to TensorFlow 1.11](#pre-tensorflow-1.11) and
+[prior to TensorFlow 1.9](#pre-tensorflow-1.9).
+
+`TFLiteConverter` provides class methods based on the original format of the
+model. `TFLiteConverter.from_session()` is available for GraphDefs.
+`TFLiteConverter.from_saved_model()` is available for SavedModels.
+`TFLiteConverter.from_keras_model_file()` is available for `tf.Keras` files.
Example usages for simple floating-point models are shown in
[Basic Examples](#basic). Example usages for more complex models are shown in
[Complex Examples](#complex).
-**NOTE**: Currently, `TocoConverter` will cause a fatal error to the Python
-interpreter when the conversion fails. This will be remedied as soon as
-possible.
-
## Basic examples <a name="basic"></a>
The following section shows examples of how to convert a basic floating-point model
@@ -76,7 +73,7 @@ out = tf.identity(val, name="out")
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
- converter = tf.contrib.lite.TocoConverter.from_session(sess, [img], [out])
+ converter = tf.contrib.lite.TFLiteConverter.from_session(sess, [img], [out])
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
```
@@ -89,7 +86,7 @@ TensorFlow Lite FlatBuffer when the GraphDef is stored in a file. Both `.pb` and
The example uses
[Mobilenet_1.0_224](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_1.0_224_frozen.tgz).
-The function only supports GraphDefs frozen via
+The function only supports GraphDefs frozen using
[freeze_graph.py](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py).
```python
@@ -99,7 +96,7 @@ graph_def_file = "/path/to/Downloads/mobilenet_v1_1.0_224/frozen_graph.pb"
input_arrays = ["input"]
output_arrays = ["MobilenetV1/Predictions/Softmax"]
-converter = tf.contrib.lite.TocoConverter.from_frozen_graph(
+converter = tf.contrib.lite.TFLiteConverter.from_frozen_graph(
graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
@@ -113,25 +110,26 @@ FlatBuffer.
```python
import tensorflow as tf
-converter = tf.contrib.lite.TocoConverter.from_saved_model(saved_model_dir)
+converter = tf.contrib.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
```
For more complex SavedModels, the optional parameters that can be passed into
-`TocoConverter.from_saved_model()` are `input_arrays`, `input_shapes`,
+`TFLiteConverter.from_saved_model()` are `input_arrays`, `input_shapes`,
`output_arrays`, `tag_set` and `signature_key`. Details of each parameter are
-available by running `help(tf.contrib.lite.TocoConverter)`.
+available by running `help(tf.contrib.lite.TFLiteConverter)`.
### Exporting a tf.keras File <a name="basic-keras-file"></a>
The following example shows how to convert a `tf.keras` model into a TensorFlow
-Lite FlatBuffer.
+Lite FlatBuffer. This example requires
+[`h5py`](http://docs.h5py.org/en/latest/build.html) to be installed.
```python
import tensorflow as tf
-converter = tf.contrib.lite.TocoConverter.from_keras_model_file("keras_model.h5")
+converter = tf.contrib.lite.TFLiteConverter.from_keras_model_file("keras_model.h5")
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
```
@@ -163,7 +161,7 @@ keras_file = "keras_model.h5"
tf.keras.models.save_model(model, keras_file)
# Convert to TensorFlow Lite model.
-converter = tf.contrib.lite.TocoConverter.from_keras_model_file(keras_file)
+converter = tf.contrib.lite.TFLiteConverter.from_keras_model_file(keras_file)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
```
@@ -173,7 +171,7 @@ open("converted_model.tflite", "wb").write(tflite_model)
For models where the default value of the attributes is not sufficient, the
attribute's values should be set before calling `convert()`. In order to call
any constants use `tf.contrib.lite.constants.<CONSTANT_NAME>` as seen below with
-`QUANTIZED_UINT8`. Run `help(tf.contrib.lite.TocoConverter)` in the Python
+`QUANTIZED_UINT8`. Run `help(tf.contrib.lite.TFLiteConverter)` in the Python
terminal for detailed documentation on the attributes.
Although the examples are demonstrated on GraphDefs containing only constants.
@@ -193,7 +191,7 @@ val = img + const
out = tf.fake_quant_with_min_max_args(val, min=0., max=1., name="output")
with tf.Session() as sess:
- converter = tf.contrib.lite.TocoConverter.from_session(sess, [img], [out])
+ converter = tf.contrib.lite.TFLiteConverter.from_session(sess, [img], [out])
converter.inference_type = tf.contrib.lite.constants.QUANTIZED_UINT8
input_arrays = converter.get_input_arrays()
converter.quantized_input_stats = {input_arrays[0] : (0., 1.)} # mean, std_dev
@@ -250,7 +248,7 @@ val = img + const
out = tf.identity(val, name="out")
with tf.Session() as sess:
- converter = tf.contrib.lite.TocoConverter.from_session(sess, [img], [out])
+ converter = tf.contrib.lite.TFLiteConverter.from_session(sess, [img], [out])
tflite_model = converter.convert()
# Load TFLite model and allocate tensors.
@@ -262,13 +260,20 @@ interpreter.allocate_tensors()
### Build from source code <a name="latest-package"></a>
-In order to run the latest version of the TOCO Python API, clone the TensorFlow
-repository, configure the installation, and build and install the pip package.
-Detailed instructions are available
-[here](https://www.tensorflow.org/install/source).
+In order to run the latest version of the TensorFlow Lite Converter Python API,
+either install the nightly build with
+[pip](https://www.tensorflow.org/install/pip) (recommended) or
+[Docker](https://www.tensorflow.org/install/docker), or
+[build the pip package from source](https://www.tensorflow.org/install/source).
+
+### Converting models in TensorFlow 1.9 to TensorFlow 1.11 <a name="#pre-tensorflow-1.11"></a>
+
+To convert TensorFlow models to TensorFlow Lite in TensorFlow 1.9 through
+TensorFlow 1.11, use `TocoConverter`. `TocoConverter` is semantically
+identical to `TFLiteConverter`.
-### Converting models prior to TensorFlow 1.9. <a name="pre-tensorflow-1.9"></a>
+### Converting models prior to TensorFlow 1.9 <a name="pre-tensorflow-1.9"></a>
-To use TOCO in TensorFlow 1.7 and TensorFlow 1.8, use the `toco_convert`
-function. Run `help(tf.contrib.lite.toco_convert)` to get details about accepted
-parameters.
+To convert TensorFlow models to TensorFlow Lite in TensorFlow 1.7 and TensorFlow
+1.8, use the `toco_convert` function. Run `help(tf.contrib.lite.toco_convert)`
+to get details about accepted parameters.