path: root/tensorflow/contrib/lite/README.md
author     A. Unique TensorFlower <gardener@tensorflow.org>  2017-12-15 17:32:50 -0800
committer  TensorFlower Gardener <gardener@tensorflow.org>  2017-12-15 17:39:26 -0800
commit     9648f8040a559f6cf9bbe0501ba96f2b2c2864b1 (patch)
tree       57dc6e959e0a534622eaf392ee43b7691378b10e /tensorflow/contrib/lite/README.md
parent     5b5445b9a7aa2664a90c4fc946ecf268c971425b (diff)
Automated g4 rollback of changelist 179258973
PiperOrigin-RevId: 179260538
Diffstat (limited to 'tensorflow/contrib/lite/README.md')
-rw-r--r--  tensorflow/contrib/lite/README.md | 5
1 file changed, 0 insertions, 5 deletions
diff --git a/tensorflow/contrib/lite/README.md b/tensorflow/contrib/lite/README.md
index 2fb40070cb..fc9144d5fc 100644
--- a/tensorflow/contrib/lite/README.md
+++ b/tensorflow/contrib/lite/README.md
@@ -167,7 +167,6 @@ graphviz, or [in tensorboard](https://codelabs.developers.google.com/codelabs/te
This frozen GraphDef is now ready to be converted to the flatbuffer format (.lite) for use on Android or iOS. On Android, users have the flexibility to use either the float or quantized version of the frozen GraphDef, if available, using the Tensorflow Optimizing Converter tool.
Here is a sample command line to convert the frozen GraphDef to '.lite' format. The Tensorflow Optimizing Converter supports both float and quantized models; however, different configuration parameters are needed depending on whether a FLOAT or QUANTIZED mode is being used.
-(Here is a link to the pb [file](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_1.0_224_frozen.tgz)).
```
bazel build tensorflow/contrib/lite/toco:toco
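# Below is a minimal sketch of the FLOAT-mode conversion step that would follow
# the build above. The flag names and the MobileNet input/output tensor names
# are assumptions (not part of this diff); verify them against the TOCO
# documentation in your checkout before running.
bazel-bin/tensorflow/contrib/lite/toco/toco \
  --input_file=mobilenet_v1_1.0_224/frozen_graph.pb \
  --output_file=/tmp/mobilenet_v1_1.0_224.lite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --inference_type=FLOAT \
  --input_arrays=input \
  --output_arrays=MobilenetV1/Predictions/Reshape_1 \
  --input_shapes=1,224,224,3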
@@ -216,7 +215,3 @@ Note that you'd need to follow instructions for installing TensorFlow on Android
### For iOS
Follow the documentation [here](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/g3doc/ios.md) to integrate a TFLite model into your app.
-
-## Core ML support
-
-Core ML is a machine learning framework used across Apple products. In addition to using Tensorflow Lite models directly in their applications, developers have the option to convert their trained Tensorflow models to the [CoreML](https://developer.apple.com/machine-learning/) format for use on Apple devices. For information on how to use the converter please refer to the [Tensorflow-CoreML converter documentation](https://github.com/tf-coreml/tf-coreml).
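
For illustration, here is a minimal sketch of driving that converter from Python. The `tfcoreml` package name, the `convert` call, and the MobileNet tensor names are assumptions drawn from the tf-coreml project rather than from this diff; check its README for the exact API.

```python
import tfcoreml  # assumption: installed via `pip install tfcoreml`

# Convert a frozen TensorFlow GraphDef (.pb) into a Core ML model (.mlmodel).
# Tensor names below are illustrative MobileNet names, not guaranteed values.
tfcoreml.convert(
    tf_model_path='mobilenet_v1_1.0_224/frozen_graph.pb',
    mlmodel_path='mobilenet_v1_1.0_224.mlmodel',
    output_feature_names=['MobilenetV1/Predictions/Reshape_1:0'],
    input_name_shape_dict={'input:0': [1, 224, 224, 3]},
)
```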