path: root/tensorflow/docs_src/mobile/index.md
Diffstat (limited to 'tensorflow/docs_src/mobile/index.md')
-rw-r--r--  tensorflow/docs_src/mobile/index.md | 4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tensorflow/docs_src/mobile/index.md b/tensorflow/docs_src/mobile/index.md
index 06ad47bc62..a6f1422f6f 100644
--- a/tensorflow/docs_src/mobile/index.md
+++ b/tensorflow/docs_src/mobile/index.md
@@ -35,8 +35,8 @@ speech-driven interface, and many of these require on-device processing. Most of
the time a user isn’t giving commands, and so streaming audio continuously to a
remote server would be a waste of bandwidth, since it would mostly be silence or
background noises. To solve this problem it’s common to have a small neural
-network running on-device @{$tutorials/audio_recognition$listening out for a particular keyword}.
-Once that keyword has been spotted, the rest of the
+network running on-device @{$tutorials/audio_recognition$listening out for a
+particular keyword}. Once that keyword has been spotted, the rest of the
conversation can be transmitted over to the server for further processing if
more computing power is needed.
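
The paragraph being rewrapped above describes a gating pattern: a small on-device detector scores incoming audio, and only after the keyword fires is the audio stream forwarded to a server. As a rough illustration of that control flow (this is not the code from the `audio_recognition` tutorial; `keyword_score`, the threshold value, and the frame format are all hypothetical stand-ins):

```python
# Sketch of keyword-gated streaming: score each audio frame with a cheap
# on-device detector, and forward audio only once the keyword is spotted.
# keyword_score is a hypothetical stand-in for a tiny neural network.

from typing import Iterable, List

THRESHOLD = 0.8  # assumed detection threshold, chosen for illustration


def keyword_score(frame: List[float]) -> float:
    """Hypothetical on-device model; here just mean absolute amplitude."""
    return sum(abs(x) for x in frame) / len(frame)


def gate_stream(frames: Iterable[List[float]]) -> List[List[float]]:
    """Return the frames that would be streamed to the server:
    everything from the first frame whose score crosses THRESHOLD onward."""
    transmitted = []
    triggered = False
    for frame in frames:
        if not triggered and keyword_score(frame) >= THRESHOLD:
            triggered = True  # keyword spotted: start forwarding audio
        if triggered:
            transmitted.append(frame)
    return transmitted


# Two near-silent frames, one loud "keyword" frame, then follow-up speech:
# only the last two frames are transmitted, saving bandwidth on the silence.
stream = [[0.0, 0.1], [0.1, 0.0], [0.9, 1.0], [0.5, 0.6]]
sent = gate_stream(stream)
```

A real deployment would replace `keyword_score` with the trained keyword-spotting model and stream audio over the network instead of collecting it in a list, but the bandwidth-saving logic is the same: nothing leaves the device until the detector triggers.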