author    Nupur Garg <nupurgarg@google.com>    2018-06-22 16:12:02 -0700
committer TensorFlower Gardener <gardener@tensorflow.org>    2018-06-22 16:14:54 -0700
commit    aa63c6b0209c1194a392e4327f6211c7c13f40dc (patch)
tree      4ef182b6607447fd19ed6e4625ab79a732fe4b23 /tensorflow/contrib/lite/toco/g3doc/cmdline_examples.md
parent    8e1a4a5b03357c0397c578611482c69065e7bdc6 (diff)
Update TOCO command line documentation.
PiperOrigin-RevId: 201756935
Diffstat (limited to 'tensorflow/contrib/lite/toco/g3doc/cmdline_examples.md')
-rw-r--r--  tensorflow/contrib/lite/toco/g3doc/cmdline_examples.md | 347
1 file changed, 141 insertions, 206 deletions
diff --git a/tensorflow/contrib/lite/toco/g3doc/cmdline_examples.md b/tensorflow/contrib/lite/toco/g3doc/cmdline_examples.md
index 8e93f02ef1..767cf48a9f 100644
--- a/tensorflow/contrib/lite/toco/g3doc/cmdline_examples.md
+++ b/tensorflow/contrib/lite/toco/g3doc/cmdline_examples.md
@@ -9,57 +9,49 @@ complemented by the following documents:
Table of contents:
-* [Convert a TensorFlow SavedModel to TensorFlow Lite](#savedmodel)
-* [Convert a TensorFlow GraphDef to TensorFlow Lite for float
- inference](#graphdef-float)
+* [Command-line tools](#tools)
+ * [Converting models prior to TensorFlow 1.9](#pre-tensorflow-1.9)
+* [Convert a TensorFlow GraphDef](#graphdef)
+* [Convert a TensorFlow SavedModel](#savedmodel)
* [Quantization](#quantization)
- * [Convert a TensorFlow GraphDef to TensorFlow Lite for quantized
- inference](#graphdef-quant)
+ * [Convert a TensorFlow GraphDef for quantized inference](#graphdef-quant)
* [Use "dummy-quantization" to try out quantized inference on a float
graph](#dummy-quant)
* [Specifying input and output arrays](#specifying-input-and-output-arrays)
- * [Multiple output arrays](#multiple-output-arrays)
* [Multiple input arrays](#multiple-input-arrays)
+ * [Multiple output arrays](#multiple-output-arrays)
* [Specifying subgraphs](#specifying-subgraphs)
-* [Other conversions supported by TOCO](#other-conversions)
- * [Optimize a TensorFlow GraphDef](#optimize-graphdef)
- * [Convert a TensorFlow Lite FlatBuffer back into TensorFlow GraphDef
- format](#to-graphdef)
-* [Logging](#logging)
- * [Graph "video" logging](#graph-video-logging)
* [Graph visualizations](#graph-visualizations)
* [Using --output_format=GRAPHVIZ_DOT](#using-output-formatgraphviz-dot)
* [Using --dump_graphviz](#using-dump-graphviz)
+ * [Graph "video" logging](#graph-video-logging)
* [Legend for the graph visualizations](#graphviz-legend)
-## Convert a TensorFlow SavedModel to TensorFlow Lite <a name="savedmodel"></a>
+## Command-line tools <a name="tools"></a>
-The follow example converts a basic TensorFlow SavedModel into a Tensorflow Lite
-FlatBuffer to perform floating-point inference.
+There are two approaches to running TOCO via command line.
-```
-bazel run --config=opt \
- third_party/tensorflow/contrib/lite/toco:toco -- \
- --savedmodel_directory=/tmp/saved_model \
- --output_file=/tmp/foo.tflite
-```
+* `tflite_convert`: Starting from TensorFlow 1.9, the command-line tool
+ `tflite_convert` will be installed as part of the Python package. All of the
+ examples below use `tflite_convert` for simplicity.
+ * Example: `tflite_convert --output_file=...`
+* `bazel`: In order to run the latest version of TOCO, [clone the TensorFlow
+ repository](https://www.tensorflow.org/install/install_sources#clone_the_tensorflow_repository)
+ and use `bazel`. This is the recommended approach for converting models that
+ utilize new features that were not supported by TOCO in TensorFlow 1.9.
+ * Example: `bazel run
+ //tensorflow/contrib/lite/python:tflite_convert --
+ --output_file=...`
-[SavedModel](https://www.tensorflow.org/programmers_guide/saved_model#using_savedmodel_with_estimators)
-has fewer required flags than frozen graphs (described [below](#graphdef-float))
-due to access to additional data contained within the SavedModel. The values for
-`--input_arrays` and `--output_arrays` are an aggregated, alphabetized list of
-the inputs and outputs in the
-[SignatureDefs](https://www.tensorflow.org/serving/signature_defs) within the
-[MetaGraphDef](https://www.tensorflow.org/programmers_guide/saved_model#apis_to_build_and_load_a_savedmodel)
-specified by `--savedmodel_tagset`. The value for `input_shapes` is
-automatically determined from the MetaGraphDef whenever possible. The default
-value for `--inference_type` for SavedModels is `FLOAT`.
+### Converting models prior to TensorFlow 1.9 <a name="pre-tensorflow-1.9"></a>
-There is currently no support for MetaGraphDefs without a SignatureDef or for
-MetaGraphDefs that use the [`assets/`
-directory](https://www.tensorflow.org/programmers_guide/saved_model#structure_of_a_savedmodel_directory).
+The recommended approach for using TOCO prior to TensorFlow 1.9 is the [Python
+API](python_api.md#pre-tensorflow-1.9). If a command-line tool is desired,
+TensorFlow 1.7 provided the `toco` command-line tool; enter `toco --help` in
+Terminal for details on the available flags. TensorFlow 1.8 provided no
+command-line tool.
-## Convert a TensorFlow GraphDef to TensorFlow Lite for float inference <a name="graphdef-float"></a>
+## Convert a TensorFlow GraphDef <a name="graphdef"></a>
The following example converts a basic TensorFlow GraphDef (frozen by
[freeze_graph.py](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py))
@@ -69,19 +61,43 @@ graphs contain the variables stored in Checkpoint files as Const ops.
```
curl https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_0.50_128_frozen.tgz \
| tar xzv -C /tmp
-bazel run --config=opt \
- //tensorflow/contrib/lite/toco:toco -- \
- --input_file=/tmp/mobilenet_v1_0.50_128/frozen_graph.pb \
+tflite_convert \
--output_file=/tmp/foo.tflite \
- --inference_type=FLOAT \
- --input_shape=1,128,128,3 \
- --input_array=input \
- --output_array=MobilenetV1/Predictions/Reshape_1
+ --graph_def_file=/tmp/mobilenet_v1_0.50_128/frozen_graph.pb \
+ --input_arrays=input \
+ --output_arrays=MobilenetV1/Predictions/Reshape_1
+```
+
+The value for `input_shapes` is automatically determined whenever possible.
+
+## Convert a TensorFlow SavedModel <a name="savedmodel"></a>
+
+The following example converts a basic TensorFlow SavedModel into a TensorFlow
+Lite
+FlatBuffer to perform floating-point inference.
+
+```
+tflite_convert \
+ --output_file=/tmp/foo.tflite \
+ --saved_model_dir=/tmp/saved_model
```
+[SavedModel](https://www.tensorflow.org/programmers_guide/saved_model#using_savedmodel_with_estimators)
+has fewer required flags than frozen graphs due to access to additional data
+contained within the SavedModel. The values for `--input_arrays` and
+`--output_arrays` are an aggregated, alphabetized list of the inputs and outputs
+in the [SignatureDefs](https://www.tensorflow.org/serving/signature_defs) within
+the
+[MetaGraphDef](https://www.tensorflow.org/programmers_guide/saved_model#apis_to_build_and_load_a_savedmodel)
+specified by `--saved_model_tag_set`. As with the GraphDef, the value for
+`input_shapes` is automatically determined whenever possible.
+
+There is currently no support for MetaGraphDefs without a SignatureDef or for
+MetaGraphDefs that use the [`assets/`
+directory](https://www.tensorflow.org/programmers_guide/saved_model#structure_of_a_savedmodel_directory).
+
## Quantization
-### Convert a TensorFlow GraphDef to TensorFlow Lite for quantized inference <a name="graphdef-quant"></a>
+### Convert a TensorFlow GraphDef for quantized inference <a name="graphdef-quant"></a>
TOCO is compatible with fixed point quantization models described
[here](https://www.tensorflow.org/performance/quantization). These are float
@@ -95,18 +111,14 @@ The following command generates a quantized TensorFlow Lite FlatBuffer from a
"quantized" TensorFlow GraphDef.
```
-bazel run --config=opt \
- //tensorflow/contrib/lite/toco:toco -- \
- --input_file=/tmp/some_quantized_graph.pb \
+tflite_convert \
--output_file=/tmp/foo.tflite \
- --input_format=TENSORFLOW_GRAPHDEF \
- --output_format=TFLITE \
+ --graph_def_file=/tmp/some_quantized_graph.pb \
--inference_type=QUANTIZED_UINT8 \
- --input_shape=1,128,128,3 \
- --input_array=input \
- --output_array=MobilenetV1/Predictions/Reshape_1 \
- --mean_value=128 \
- --std_value=127
+ --input_arrays=input \
+ --output_arrays=MobilenetV1/Predictions/Reshape_1 \
+ --mean_values=128 \
+ --std_dev_values=127
```
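The flags `--mean_values` and `--std_dev_values` describe how the quantized
uint8 values relate to real numbers; the convention, as I understand it, is
`real_value = (quantized_value - mean_value) / std_dev_value`. A quick sketch of
what `--mean_values=128 --std_dev_values=127` implies for the uint8 range:

```shell
# Assumed convention: real_value = (quantized_value - mean_value) / std_dev_value.
# With mean_values=128 and std_dev_values=127, uint8 [0, 255] maps roughly to [-1, 1].
mean=128
std=127
for q in 0 128 255; do
  awk -v q="$q" -v m="$mean" -v s="$std" \
    'BEGIN { printf "%3d -> %.4f\n", q, (q - m) / s }'
done
# Prints:
#   0 -> -1.0079
# 128 -> 0.0000
# 255 -> 1.0000
```

This is why 128/127 is a natural choice for a graph whose float activations are
roughly centered in [-1, 1].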
### Use \"dummy-quantization\" to try out quantized inference on a float graph <a name="dummy-quant"></a>
@@ -124,45 +136,20 @@ a reasonable guess is that most activation ranges should be contained in [0, 6].
```
curl https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_0.50_128_frozen.tgz \
| tar xzv -C /tmp
-bazel run --config=opt \
- //tensorflow/contrib/lite/toco:toco -- \
- --input_file=/tmp/mobilenet_v1_0.50_128/frozen_graph.pb \
+tflite_convert \
--output_file=/tmp/foo.cc \
- --input_format=TENSORFLOW_GRAPHDEF \
- --output_format=TFLITE \
+ --graph_def_file=/tmp/mobilenet_v1_0.50_128/frozen_graph.pb \
--inference_type=QUANTIZED_UINT8 \
- --input_shape=1,128,128,3 \
- --input_array=input \
- --output_array=MobilenetV1/Predictions/Reshape_1 \
+ --input_arrays=input \
+ --output_arrays=MobilenetV1/Predictions/Reshape_1 \
--default_ranges_min=0 \
--default_ranges_max=6 \
- --mean_value=127.5 \
- --std_value=127.5
+ --mean_values=128 \
+ --std_dev_values=127
```
## Specifying input and output arrays
-### Multiple output arrays
-
-The flag `output_arrays` takes in a comma-separated list of output arrays as
-seen in the example below. This is useful for models or subgraphs with multiple
-outputs.
-
-```
-curl https://storage.googleapis.com/download.tensorflow.org/models/inception_v1_2016_08_28_frozen.pb.tar.gz \
- | tar xzv -C /tmp
-bazel run --config=opt \
- //tensorflow/contrib/lite/toco:toco -- \
- --input_file=/tmp/inception_v1_2016_08_28_frozen.pb \
- --output_file=/tmp/foo.tflite \
- --input_format=TENSORFLOW_GRAPHDEF \
- --output_format=TFLITE \
- --inference_type=FLOAT \
- --input_shape=1,224,224,3 \
- --input_array=input \
- --output_arrays=InceptionV1/InceptionV1/Mixed_3b/Branch_1/Conv2d_0a_1x1/Relu,InceptionV1/InceptionV1/Mixed_3b/Branch_2/Conv2d_0a_1x1/Relu
-```
-
### Multiple input arrays
The flag `input_arrays` takes in a comma-separated list of input arrays as seen
@@ -172,21 +159,33 @@ inputs.
```
curl https://storage.googleapis.com/download.tensorflow.org/models/inception_v1_2016_08_28_frozen.pb.tar.gz \
| tar xzv -C /tmp
-bazel run --config=opt \
- //tensorflow/contrib/lite/toco:toco -- \
- --input_file=/tmp/inception_v1_2016_08_28_frozen.pb \
+tflite_convert \
+ --graph_def_file=/tmp/inception_v1_2016_08_28_frozen.pb \
--output_file=/tmp/foo.tflite \
- --input_format=TENSORFLOW_GRAPHDEF \
- --output_format=TFLITE \
- --inference_type=FLOAT \
--input_shapes=1,28,28,96:1,28,28,16:1,28,28,192:1,28,28,64 \
--input_arrays=InceptionV1/InceptionV1/Mixed_3b/Branch_1/Conv2d_0a_1x1/Relu,InceptionV1/InceptionV1/Mixed_3b/Branch_2/Conv2d_0a_1x1/Relu,InceptionV1/InceptionV1/Mixed_3b/Branch_3/MaxPool_0a_3x3/MaxPool,InceptionV1/InceptionV1/Mixed_3b/Branch_0/Conv2d_0a_1x1/Relu \
- --output_array=InceptionV1/Logits/Predictions/Reshape_1
+ --output_arrays=InceptionV1/Logits/Predictions/Reshape_1
```
Note that `input_shapes` is provided as a colon-separated list. Each input shape
corresponds to the input array at the same position in the respective list.
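The positional pairing can be sketched as follows (the array names here are
placeholders for illustration, not names from the model above):

```shell
# Pair each comma-separated input array name with the shape at the same
# position in the colon-separated --input_shapes value.
shapes="1,28,28,96:1,28,28,16:1,28,28,192:1,28,28,64"
arrays="branch_1,branch_2,branch_3,branch_0"

i=1
for name in $(echo "$arrays" | tr ',' ' '); do
  shape=$(echo "$shapes" | cut -d':' -f"$i")
  echo "$name -> $shape"
  i=$((i + 1))
done
# Prints:
# branch_1 -> 1,28,28,96
# branch_2 -> 1,28,28,16
# branch_3 -> 1,28,28,192
# branch_0 -> 1,28,28,64
```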
+### Multiple output arrays
+
+The flag `output_arrays` takes in a comma-separated list of output arrays as
+seen in the example below. This is useful for models or subgraphs with multiple
+outputs.
+
+```
+curl https://storage.googleapis.com/download.tensorflow.org/models/inception_v1_2016_08_28_frozen.pb.tar.gz \
+ | tar xzv -C /tmp
+tflite_convert \
+ --graph_def_file=/tmp/inception_v1_2016_08_28_frozen.pb \
+ --output_file=/tmp/foo.tflite \
+ --input_arrays=input \
+ --output_arrays=InceptionV1/InceptionV1/Mixed_3b/Branch_1/Conv2d_0a_1x1/Relu,InceptionV1/InceptionV1/Mixed_3b/Branch_2/Conv2d_0a_1x1/Relu
+```
+
### Specifying subgraphs
Any array in the input file can be specified as an input or output array in
@@ -201,115 +200,57 @@ GraphDef.
```
curl https://storage.googleapis.com/download.tensorflow.org/models/inception_v1_2016_08_28_frozen.pb.tar.gz \
| tar xzv -C /tmp
-bazel run --config=opt \
- //tensorflow/contrib/lite/toco:toco -- \
- --input_file=/tmp/inception_v1_2016_08_28_frozen.pb \
+tflite_convert \
+ --graph_def_file=/tmp/inception_v1_2016_08_28_frozen.pb \
--output_file=/tmp/foo.pb \
- --input_format=TENSORFLOW_GRAPHDEF \
- --output_format=TENSORFLOW_GRAPHDEF \
--input_shapes=1,28,28,96:1,28,28,16:1,28,28,192:1,28,28,64 \
--input_arrays=InceptionV1/InceptionV1/Mixed_3b/Branch_1/Conv2d_0a_1x1/Relu,InceptionV1/InceptionV1/Mixed_3b/Branch_2/Conv2d_0a_1x1/Relu,InceptionV1/InceptionV1/Mixed_3b/Branch_3/MaxPool_0a_3x3/MaxPool,InceptionV1/InceptionV1/Mixed_3b/Branch_0/Conv2d_0a_1x1/Relu \
- --output_array=InceptionV1/InceptionV1/Mixed_3b/concat_v2
+ --output_arrays=InceptionV1/InceptionV1/Mixed_3b/concat_v2
```
-Note that the final representation of an on-device inference workload (say, in
-TensorFlow Lite FlatBuffers format) tends to have coarser granularity than the
-very fine granularity of the TensorFlow GraphDef representation. For example,
-while a fully-connected layer is typically represented as at least four separate
-ops in TensorFlow GraphDef (Reshape, MatMul, BiasAdd, Relu...), it is typically
-represented as a single "fused" op (FullyConnected) in the converter's optimized
-representation and in the final on-device representation (e.g. in TensorFlow
-Lite FlatBuffer format). As the level of granularity gets coarser, some
+Note that the final representation in TensorFlow Lite FlatBuffers tends to have
+coarser granularity than the very fine granularity of the TensorFlow GraphDef
+representation. For example, while a fully-connected layer is typically
+represented as at least four separate ops in TensorFlow GraphDef (Reshape,
+MatMul, BiasAdd, Relu...), it is typically represented as a single "fused" op
+(FullyConnected) in the converter's optimized representation and in the final
+on-device representation. As the level of granularity gets coarser, some
intermediate arrays (say, the array between the MatMul and the BiasAdd in the
-TensorFlow GraphDef) are dropped. When specifying intermediate arrays as
-`--input_arrays` / `--output_arrays`, it is desirable (and often required) to
-specify arrays that are meant to survive in the final form of the graph, after
-fusing. These are typically the outputs of activation functions (since
-everything in each layer until the activation function tends to get fused).
-
-## Other conversions supported by TOCO <a name="other-conversions"></a>
-
-The converter accepts both TENSORFLOW_GRAPHDEF and TFLITE file formats as both
-`--input_format` and `--output_format`. This means that conversion to and from
-any supported format is possible.
+TensorFlow GraphDef) are dropped.
-### Optimize a TensorFlow GraphDef <a name="optimize-graphdef"></a>
-
-Same-format "conversions" can be used to optimize and simplify a graph or be
-used to [get a subgraph](#specifying-subgraphs) of a graph. The flag
-`--inference_type` is not required because TensorFlow graphs, including those
-containing the
-[`FakeQuant*`](https://www.tensorflow.org/api_guides/python/array_ops#Fake_quantization)
-ops are always float graphs.
-
-```
-curl https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_0.50_128_frozen.tgz \
- | tar xzv -C /tmp
-bazel run --config=opt \
- //tensorflow/contrib/lite/toco:toco -- \
- --input_file=/tmp/mobilenet_v1_0.50_128/frozen_graph.pb \
- --output_file=/tmp/foo.pb \
- --input_format=TENSORFLOW_GRAPHDEF \
- --output_format=TENSORFLOW_GRAPHDEF \
- --input_shape=1,128,128,3 \
- --input_array=input \
- --output_array=MobilenetV1/Predictions/Reshape_1
-```
-
-### Convert a TensorFlow Lite FlatBuffer back into TensorFlow GraphDef format <a name="to-graphdef"></a>
-
-The converter supports file format conversions from TensorFlow Lite, back into
-TensorFlow GraphDef format.
-
-```
-bazel run --config=opt \
- //tensorflow/contrib/lite/toco:toco -- \
- --input_file=/tmp/foo.tflite \
- --output_file=/tmp/foo.pb \
- --input_format=TFLITE \
- --output_format=TENSORFLOW_GRAPHDEF \
- --input_shape=1,128,128,3 \
- --input_array=input \
- --output_array=MobilenetV1/Predictions/Reshape_1
-```
+When specifying intermediate arrays as `--input_arrays` and `--output_arrays`,
+it is desirable (and often required) to specify arrays that are meant to survive
+in the final form of the graph, after fusing. These are typically the outputs of
+activation functions (since everything in each layer until the activation
+function tends to get fused).
-### Graph "video" logging
-
-When `--dump_graphviz=` is used (see the section on [graph
-visualizations](#graph-visualizations)), one may additionally pass
-`--dump_graphviz_video`, which causes a graph visualization to be dumped after
-each individual graph transformation. This results in thousands of files.
-Typically, one would then bisect into these files to understand when a given
-change was introduced in the graph.
## Graph visualizations
TOCO can export a graph to the GraphViz Dot format for easy visualization via
-either the `--output_format` flag or the `--dump_graphviz` flag. The subsections
-below outline the use cases for each.
+either the `--output_format` flag or the `--dump_graphviz_dir` flag. The
+subsections below outline the use cases for each.
### Using `--output_format=GRAPHVIZ_DOT`
The first way to get a graphviz rendering is to pass `GRAPHVIZ_DOT` into
`--output_format`. This results in a plausible visualization of the graph. This
-reduces the requirements that normally exist during conversion between other
-input and output formats. For example, this may be useful if conversion from
-TENSORFLOW_GRAPHDEF to TFLITE is failing.
+reduces the requirements that exist during conversion between other input and
+output formats. This may be useful if conversion from TENSORFLOW_GRAPHDEF to
+TFLITE is failing.
```
curl https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_0.50_128_frozen.tgz \
| tar xzv -C /tmp
-bazel run --config=opt \
- //tensorflow/contrib/lite/toco:toco -- \
- --input_file=/tmp/mobilenet_v1_0.50_128/frozen_graph.pb \
+tflite_convert \
+ --graph_def_file=/tmp/mobilenet_v1_0.50_128/frozen_graph.pb \
--output_file=/tmp/foo.dot \
- --input_format=TENSORFLOW_GRAPHDEF \
--output_format=GRAPHVIZ_DOT \
--input_shape=1,128,128,3 \
- --input_array=input \
- --output_array=MobilenetV1/Predictions/Reshape_1
+ --input_arrays=input \
+ --output_arrays=MobilenetV1/Predictions/Reshape_1
```
The resulting `.dot` file can be rendered into a PDF as follows:
@@ -330,49 +271,35 @@ Example PDF files are viewable online in the next section.
### Using `--dump_graphviz`
-The second way to get a graphviz rendering is to pass the `--dump_graphviz=`
+The second way to get a graphviz rendering is to pass the `--dump_graphviz_dir`
flag, specifying a destination directory to dump GraphViz rendering to. Unlike
-the previous approach, this one allows you to keep your real command-line (with
-your real `--output_format` and other flags) unchanged, just appending a
-`--dump_graphviz=` flag to it. This provides a visualization of the actual graph
-during a specific conversion process.
+the previous approach, this one retains the original output format. This
+provides a visualization of the actual graph resulting from a specific
+conversion process.
```
curl https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_0.50_128_frozen.tgz \
| tar xzv -C /tmp
-bazel run --config=opt \
- //tensorflow/contrib/lite/toco:toco -- \
- --input_file=/tmp/mobilenet_v1_0.50_128/frozen_graph.pb \
+tflite_convert \
+ --graph_def_file=/tmp/mobilenet_v1_0.50_128/frozen_graph.pb \
--output_file=/tmp/foo.tflite \
- --input_format=TENSORFLOW_GRAPHDEF \
- --output_format=TFLITE \
- --inference_type=FLOAT \
- --input_shape=1,128,128,3 \
- --input_array=input \
- --output_array=MobilenetV1/Predictions/Reshape_1 \
- --dump_graphviz=/tmp
-```
-
-This generates a few files in the destination directory, here `/tmp`. The two
-most important files are:
-
-```
-/tmp/toco_AT_IMPORT.dot
-/tmp/toco_AFTER_TRANSFORMATIONS.dot
+ --input_arrays=input \
+ --output_arrays=MobilenetV1/Predictions/Reshape_1 \
+ --dump_graphviz_dir=/tmp
```
-`toco_AT_IMPORT.dot` represents the graph as it was imported from
-`--input_file`, before any transformation was applied to it (besides some
-transformations that are applied immediately while importing). This tends to be
-a complex visualization with limited information, but is useful especially in
-situations where a conversion command fails (this file is generated even if the
-conversion subsequently fails).
+This generates a few files in the destination directory. The two most important
+files are `toco_AT_IMPORT.dot` and `toco_AFTER_TRANSFORMATIONS.dot`.
+`toco_AT_IMPORT.dot` represents the graph as imported, with only the
+transformations applied at import time. This tends to be a complex
+visualization with limited information about each node, but it is useful in
+situations where a conversion command fails.
`toco_AFTER_TRANSFORMATIONS.dot` represents the graph after all transformations
-were applied to it, just before it was exported to the `--output_file`.
-Typically, this is a much smaller graph with more information about each node.
+were applied to it, just before it is exported. Typically, this is a much
+smaller graph with more information about each node.
-Again, these can be rendered to PDFs:
+As before, these can be rendered to PDFs:
```
dot -Tpdf -O /tmp/toco_*.dot
@@ -383,6 +310,14 @@ Sample output files can be seen here:
* [toco_AT_IMPORT.dot.pdf](https://storage.googleapis.com/download.tensorflow.org/example_images/toco_AT_IMPORT.dot.pdf)
* [toco_AFTER_TRANSFORMATIONS.dot.pdf](https://storage.googleapis.com/download.tensorflow.org/example_images/toco_AFTER_TRANSFORMATIONS.dot.pdf).
+### Graph "video" logging
+
+When `--dump_graphviz_dir` is used, one may additionally pass
+`--dump_graphviz_video`. This causes a graph visualization to be dumped after
+each individual graph transformation, resulting in thousands of files.
+Typically, one would then bisect into these files to understand when a given
+change was introduced in the graph.
+
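A minimal sketch of starting such a bisection, assuming the dumped filenames
sort in transformation order (the directory and filenames below are fabricated
for illustration):

```shell
# Create a fake series of dumped visualizations, then pick the midpoint file
# as the first candidate to inspect when bisecting.
dir=$(mktemp -d)
for i in 1 2 3 4 5 6 7 8; do
  touch "$dir/000${i}_transform.dot"
done

total=$(ls "$dir" | wc -l)
mid=$(( (total + 1) / 2 ))
echo "start bisecting at: $(ls "$dir" | sed -n "${mid}p")"
# Prints:
# start bisecting at: 0004_transform.dot
```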
### Legend for the graph visualizations <a name="graphviz-legend"></a>
* Operators are red square boxes with the following hues of red: