path: root/tensorflow/g3doc/api_docs/python/nn.md
author A. Unique TensorFlower <gardener@tensorflow.org> 2016-09-21 13:36:30 -0800
committer TensorFlower Gardener <gardener@tensorflow.org> 2016-09-21 14:48:27 -0700
commit 5a080dcf939908018a69d666112c48d971b76e9d (patch)
tree 47170176b0a9dc4bb3c7b28be946920f9e71274f /tensorflow/g3doc/api_docs/python/nn.md
parent 6b2c906a1e71c2a7e0d46b39e4732ea89bd1d2de (diff)
Update generated Python Op docs.
Change: 133877144
Diffstat (limited to 'tensorflow/g3doc/api_docs/python/nn.md')
-rw-r--r-- tensorflow/g3doc/api_docs/python/nn.md | 18
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/tensorflow/g3doc/api_docs/python/nn.md b/tensorflow/g3doc/api_docs/python/nn.md
index f6873f51d2..078501cab1 100644
--- a/tensorflow/g3doc/api_docs/python/nn.md
+++ b/tensorflow/g3doc/api_docs/python/nn.md
@@ -1451,7 +1451,7 @@ equivalent formulation
### `tf.nn.softmax(logits, dim=-1, name=None)` {#softmax}
-Computes softmax activations.
+Computes log softmax activations.
For each batch `i` and class `j` we have
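For reference, the standard softmax definition behind this docstring is `softmax = exp(logits) / reduce_sum(exp(logits), dim)`. A minimal NumPy sketch of that formula (illustrative only, not part of the recorded diff; the max is subtracted first for numerical stability, which does not change the result):

```python
import numpy as np

def softmax(logits, dim=-1):
    # Subtracting the per-row max keeps exp() from overflowing;
    # the constant cancels in the ratio, so the output is unchanged.
    shifted = logits - np.max(logits, axis=dim, keepdims=True)
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=dim, keepdims=True)

probs = softmax(np.array([[1.0, 2.0, 3.0]]))
# Each row of `probs` is a probability distribution over classes.
```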
@@ -1485,7 +1485,7 @@ Computes log softmax activations.
For each batch `i` and class `j` we have
- logsoftmax = logits - log(reduce_sum(exp(logits), dim))
+ logsoftmax = logits - reduce_sum(exp(logits), dim)
##### Args:
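Note that the mathematically standard identity keeps the `log` of the sum, `logsoftmax = logits - log(reduce_sum(exp(logits), dim))`, which can be checked numerically with a small NumPy sketch (illustrative only, not part of the recorded diff):

```python
import numpy as np

def log_softmax(logits, dim=-1):
    # logsoftmax = logits - log(reduce_sum(exp(logits), dim)),
    # computed stably by shifting by the per-row max first.
    shifted = logits - np.max(logits, axis=dim, keepdims=True)
    return shifted - np.log(np.sum(np.exp(shifted), axis=dim, keepdims=True))

x = np.array([[1.0, 2.0, 3.0]])
# exp(log_softmax(x)) recovers the softmax probabilities, which sum to 1.
```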
@@ -1572,16 +1572,16 @@ output of `softmax`, as it will produce incorrect results.
A common use case is to have logits of shape `[batch_size, num_classes]` and
labels of shape `[batch_size]`. But higher dimensions are supported.
-##### Args:
-
+Args:
-* <b>`logits`</b>: Unscaled log probabilities of rank `r` and shape
+ logits: Unscaled log probabilities of rank `r` and shape
`[d_0, d_1, ..., d_{r-2}, num_classes]` and dtype `float32` or `float64`.
-* <b>`labels`</b>: `Tensor` of shape `[d_0, d_1, ..., d_{r-2}]` and dtype `int32` or
+ labels: `Tensor` of shape `[d_0, d_1, ..., d_{r-2}]` and dtype `int32` or
`int64`. Each entry in `labels` must be an index in `[0, num_classes)`.
- Other values will result in a loss of 0, but incorrect gradient
- computations.
-* <b>`name`</b>: A name for the operation (optional).
+ Other values will raise an exception when this op is run on CPU, and
return `NaN` for the corresponding loss and gradient rows
+ on GPU.
+ name: A name for the operation (optional).
##### Returns:
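The shape convention described above, `logits` of shape `[batch_size, num_classes]` and integer `labels` of shape `[batch_size]`, can be sketched in plain NumPy. This is a hypothetical helper for illustration, not TensorFlow's implementation; each per-example loss is the negative log-softmax probability at the label's index:

```python
import numpy as np

def sparse_softmax_cross_entropy(logits, labels):
    # logits: [batch_size, num_classes] floats.
    # labels: [batch_size] int class indices in [0, num_classes).
    # loss_i = -log_softmax(logits)[i, labels[i]]
    shifted = logits - np.max(logits, axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.sum(np.exp(shifted), axis=-1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels]

logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.3]])
labels = np.array([0, 1])
losses = sparse_softmax_cross_entropy(logits, labels)  # shape [2], nonnegative
```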