Diffstat (limited to 'tensorflow/g3doc/tutorials/image_recognition/index.md')
-rw-r--r-- tensorflow/g3doc/tutorials/image_recognition/index.md | 4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tensorflow/g3doc/tutorials/image_recognition/index.md b/tensorflow/g3doc/tutorials/image_recognition/index.md
index 1d20d7ddb3..d4fa5ba780 100644
--- a/tensorflow/g3doc/tutorials/image_recognition/index.md
+++ b/tensorflow/g3doc/tutorials/image_recognition/index.md
@@ -42,8 +42,8 @@ For example, here are the results from [AlexNet] classifying some images:
To compare models, we examine how often the model fails to predict the
correct answer as one of their top 5 guesses -- termed "top-5 error rate".
[AlexNet] achieved by setting a top-5 error rate of 15.3% on the 2012
-validation data set; [BN-Inception-v2] achieved 6.66%;
-[Inception-v3] reaches 3.46%.
+validation data set; [Inception (GoogLeNet)] achieved 6.67%;
+[BN-Inception-v2] achieved 4.9%; [Inception-v3] reaches 3.46%.
> How well do humans do on ImageNet Challenge? There's a [blog post] by
Andrej Karpathy who attempted to measure his own performance. He reached
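The metric the changed paragraph relies on, the "top-5 error rate", can be sketched in plain Python. This is an illustrative helper, not code from the TensorFlow tutorial; the function and variable names are hypothetical:

```python
import heapq

def top5_error_rate(scores, labels):
    """Fraction of examples whose true label is missing from the five
    highest-scoring classes -- the "top-5 error rate" used on ImageNet."""
    misses = 0
    for row, label in zip(scores, labels):
        # Class indices of the five largest scores in this row.
        top5 = heapq.nlargest(5, range(len(row)), key=row.__getitem__)
        if label not in top5:
            misses += 1
    return misses / len(labels)

# Toy check with 10 classes: scores rise with the class index,
# so classes 5..9 are always the top-5 guesses.
scores = [list(range(10))] * 4
print(top5_error_rate(scores, [9, 9, 9, 9]))  # → 0.0
print(top5_error_rate(scores, [0, 0, 0, 0]))  # → 1.0
```

A model "achieves 3.46%" under this metric when, on the held-out validation set, only 3.46% of images have their true label outside the model's five most confident guesses.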