| author | 2016-11-22 08:44:55 -0800 |
|---|---|
| committer | 2016-11-22 09:05:10 -0800 |
| commit | da6fde1b963b850c53f54210dc65c265f5d3cb3e (patch) |
| tree | 5ec0b52749a34ee0fb505afed09301df00e53df7 |
| parent | c9b6ce1d03905c635fbda6323fbd3b374a7c19ca (diff) |
fixed typo tf.nn.(sparse_)softmax_cross_entropy_with_logits
Change: 139913751
| -rw-r--r-- | tensorflow/g3doc/tutorials/mnist/beginners/index.md | 6 |
|---|---|---|

1 file changed, 3 insertions, 3 deletions
```diff
diff --git a/tensorflow/g3doc/tutorials/mnist/beginners/index.md b/tensorflow/g3doc/tutorials/mnist/beginners/index.md
index 5d3d6d42e3..e5d3f28de6 100644
--- a/tensorflow/g3doc/tutorials/mnist/beginners/index.md
+++ b/tensorflow/g3doc/tutorials/mnist/beginners/index.md
@@ -343,13 +343,13 @@ each element of `y_` with the corresponding element of `tf.log(y)`. Then
 `reduction_indices=[1]` parameter. Finally, `tf.reduce_mean` computes the mean
 over all the examples in the batch.
 
-(Note that in the source code, we don't use this formulation, because it is
+Note that in the source code, we don't use this formulation, because it is
 numerically unstable. Instead, we apply
 `tf.nn.softmax_cross_entropy_with_logits` on the unnormalized logits (e.g., we
 call `softmax_cross_entropy_with_logits` on `tf.matmul(x, W) + b`), because this
 more numerically stable function internally computes the softmax activation. In
-your code, consider using tf.nn.(sparse_)softmax_cross_entropy_with_logits
-instead).
+your code, consider using `tf.nn.softmax_cross_entropy_with_logits`
+instead.
 
 Now that we know what we want our model to do, it's very easy to have
 TensorFlow train it to do so. Because TensorFlow knows the entire graph of your
```
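The instability the patched tutorial text refers to can be illustrated without TensorFlow. The sketch below is a minimal pure-Python illustration (not TensorFlow's actual implementation): computing softmax explicitly and then taking its log overflows for large logits, while folding the log into the computation via the log-sum-exp trick, as the fused `softmax_cross_entropy_with_logits` op does internally, stays finite.

```python
import math

def naive_cross_entropy(logits, label):
    # Explicit softmax followed by log: math.exp overflows for large logits.
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    return -math.log(exps[label] / total)

def stable_cross_entropy(logits, label):
    # Fused formulation: -log softmax(z)_label = logsumexp(z) - z_label,
    # with the max subtracted inside exp so nothing overflows.
    m = max(logits)
    lse = m + math.log(sum(math.exp(z - m) for z in logits))
    return lse - logits[label]

logits = [1000.0, 2.0, -1.0]  # hypothetical unnormalized scores, i.e. x @ W + b
print(stable_cross_entropy(logits, 0))  # finite, close to 0.0
try:
    print(naive_cross_entropy(logits, 0))
except OverflowError:
    print("naive formulation overflowed")
```

The two functions agree wherever the naive one survives; the difference only shows up at the extremes, which is exactly why the tutorial steers readers to the fused op rather than composing `tf.nn.softmax` with `tf.log` by hand.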