# TOCO: TensorFlow Lite Optimizing Converter

The TensorFlow Lite Optimizing Converter converts TensorFlow graphs into
TensorFlow Lite graphs. It also supports some additional usages, which are
detailed in the usage documentation below.

## Usage documentation

Usage information is given in these documents:

*   [Command-line glossary](g3doc/cmdline_reference.md)
*   [Command-line examples](g3doc/cmdline_examples.md)
*   [Python API examples](g3doc/python_api.md)

## Where the converter fits in the TensorFlow landscape

Once an application developer has a trained TensorFlow model, TOCO accepts
that model and generates a TensorFlow Lite
[FlatBuffer](https://google.github.io/flatbuffers/) file. TOCO currently supports
[SavedModels](https://www.tensorflow.org/guide/saved_model#using_savedmodel_with_estimators),
frozen graphs (models generated via
[freeze_graph.py](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py)),
and `tf.Keras` model files.  The TensorFlow Lite FlatBuffer file can then be
shipped to client devices, typically mobile devices, where the TensorFlow Lite
interpreter executes it on-device.  This flow is represented in the diagram
below.

![drawing](g3doc/toco_landscape.svg)
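
As a minimal sketch of this flow, the conversion can be driven from Python.
The example below assumes a TensorFlow version where the converter is exposed
as `tf.lite.TFLiteConverter` (contrib-era builds expose it under
`tf.contrib.lite` instead), and the SavedModel directory path is hypothetical;
see the Python API examples above for the authoritative usage.

```python
import tensorflow as tf

# Hypothetical path to a trained SavedModel.
saved_model_dir = "/tmp/my_saved_model"

# Convert the SavedModel into a TensorFlow Lite FlatBuffer.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()

# Write the FlatBuffer to disk; this file is what gets shipped to devices.
with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)

# On-device (or as a quick local sanity check), the TensorFlow Lite
# interpreter loads and runs the FlatBuffer.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details())
```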