From da6fde1b963b850c53f54210dc65c265f5d3cb3e Mon Sep 17 00:00:00 2001
From: "A. Unique TensorFlower"
Date: Tue, 22 Nov 2016 08:44:55 -0800
Subject: fixed typo tf.nn.(sparse_)softmax_cross_entropy_with_logits

Change: 139913751
---
 tensorflow/g3doc/tutorials/mnist/beginners/index.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tensorflow/g3doc/tutorials/mnist/beginners/index.md b/tensorflow/g3doc/tutorials/mnist/beginners/index.md
index 5d3d6d42e3..e5d3f28de6 100644
--- a/tensorflow/g3doc/tutorials/mnist/beginners/index.md
+++ b/tensorflow/g3doc/tutorials/mnist/beginners/index.md
@@ -343,13 +343,13 @@ each element of `y_` with the corresponding element of `tf.log(y)`. Then
 `reduction_indices=[1]` parameter. Finally, `tf.reduce_mean` computes the mean
 over all the examples in the batch.
 
-(Note that in the source code, we don't use this formulation, because it is
+Note that in the source code, we don't use this formulation, because it is
 numerically unstable. Instead, we apply
 `tf.nn.softmax_cross_entropy_with_logits` on the unnormalized logits (e.g., we
 call `softmax_cross_entropy_with_logits` on `tf.matmul(x, W) + b`), because this
 more numerically stable function internally computes the softmax activation. In
-your code, consider using tf.nn.(sparse_)softmax_cross_entropy_with_logits
-instead).
+your code, consider using `tf.nn.softmax_cross_entropy_with_logits`
+instead.
 
 Now that we know what we want our model to do, it's very easy to have
 TensorFlow train it to do so. Because TensorFlow knows the entire graph of your
--
cgit v1.2.3
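
For context, here is a minimal sketch of the two formulations the patched note contrasts, assuming the TensorFlow 1.x-era API that this MNIST beginners tutorial targets; the names `x`, `W`, `b`, and `y_` follow the surrounding tutorial text, and the placeholder shapes are an assumption taken from the standard MNIST setup.

```python
import tensorflow as tf

# MNIST setup as in the beginners tutorial: 784-pixel images, 10 classes.
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y_ = tf.placeholder(tf.float32, [None, 10])   # one-hot labels

logits = tf.matmul(x, W) + b                  # unnormalized logits

# Hand-rolled formulation from the tutorial prose: softmax, then log, then the
# cross-entropy sum. log(softmax(...)) can underflow for confident predictions,
# which is the numerical instability the note warns about.
y = tf.nn.softmax(logits)
cross_entropy_manual = tf.reduce_mean(
    -tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))

# Recommended formulation: pass the raw logits and let TensorFlow compute the
# softmax and cross-entropy together in one numerically stable op.
cross_entropy_stable = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))
```

The sparse variant mentioned in the removed line, `tf.nn.sparse_softmax_cross_entropy_with_logits`, serves the same purpose when the labels are integer class indices rather than one-hot vectors.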