author		2018-01-12 09:57:25 -0800
committer	2018-01-12 10:04:00 -0800
commit		922e51978f40a3fc207c2ab0a5ed5964fdd0bba7 (patch)
tree		8e0846d5323e013d3373feec189ea43210d4789d /tensorflow/contrib/lite/README.md
parent		ecae1d72ec226b542e263222b92cae60f37c1e30 (diff)
Add python script that can visualize models by producing an HTML page.
PiperOrigin-RevId: 181756421
Diffstat (limited to 'tensorflow/contrib/lite/README.md')
-rw-r--r--	tensorflow/contrib/lite/README.md	8
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/tensorflow/contrib/lite/README.md b/tensorflow/contrib/lite/README.md
index 852284cbc7..55a524b207 100644
--- a/tensorflow/contrib/lite/README.md
+++ b/tensorflow/contrib/lite/README.md
@@ -188,7 +188,7 @@ bazel-bin/tensorflow/contrib/lite/toco/toco -- \
 Note, it is also possible to use the Tensorflow Optimizing Converter through
 protos either from Python or from the command line see the documentation
 [here](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/toco/python/toco_from_protos.py). A developer can then integrate the conversion step into their model design workflow to ensure that a model will be easily convertible to a mobile inference graph. For example,
-```
+```python
 import tensorflow as tf
 
 img = tf.placeholder(name="img", dtype=tf.float32, shape=(1, 64, 64, 3))
@@ -203,6 +203,12 @@ For detailed instructions on how to use the Tensorflow Optimizing Converter, ple
 You may refer to the [Ops compatibility guide](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/g3doc/tf_ops_compatibility.md) for troubleshooting help.
 If that doesn't help, please file an [issue](https://github.com/tensorflow/tensorflow/issues).
 
+If you would like to see a visual description of your TensorFlow Lite model after conversion, you can use tensorflow/contrib/lite/tools/visualize.py by running
+```sh
+bazel run tensorflow/contrib/lite/tools:visualize -- model.tflite model_viz.html
+```
+and then visualize the resulting HTML file in a browser.
+
 ## Step 3. Use the TensorFlow Lite model for inference in a mobile app
 
 After completion of Step 2 the developer should have a .lite model.
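The commit adds visualize.py, which renders a .tflite model as an HTML page. As a rough illustration of that general idea only (this is not the actual visualize.py, which reads the real FlatBuffer model format), here is a minimal self-contained Python sketch that turns a hypothetical list of operators into a simple HTML summary table; the `model_to_html` helper, the `(op, inputs, outputs)` tuple shape, and the example op data are all assumptions made for this sketch.

```python
import html

def model_to_html(model_name, ops):
    """Hypothetical helper: render a list of (op_name, inputs, outputs)
    tuples as a simple HTML table. This only illustrates the
    "visualize a model by producing an HTML page" approach and does
    not parse real .tflite files."""
    rows = []
    for op_name, inputs, outputs in ops:
        # Escape all user-supplied strings before embedding them in HTML.
        rows.append(
            "<tr><td>{}</td><td>{}</td><td>{}</td></tr>".format(
                html.escape(op_name),
                html.escape(", ".join(inputs)),
                html.escape(", ".join(outputs)),
            )
        )
    return (
        "<html><body><h1>{}</h1>"
        "<table><tr><th>Op</th><th>Inputs</th><th>Outputs</th></tr>"
        "{}</table></body></html>".format(
            html.escape(model_name), "".join(rows)
        )
    )

# Hypothetical example: a single ADD op, mirroring the README's img + const graph.
page = model_to_html("model.tflite", [("ADD", ["img", "const"], ["out"])])
```

The real tool is invoked through bazel as shown in the diff above; writing the result to a file (e.g. model_viz.html) and opening it in a browser corresponds to the workflow the README describes.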