Diffstat (limited to 'tensorflow/docs_src/get_started/mnist/beginners.md')
-rw-r--r--  tensorflow/docs_src/get_started/mnist/beginners.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tensorflow/docs_src/get_started/mnist/beginners.md b/tensorflow/docs_src/get_started/mnist/beginners.md
index 193dd41b2a..38c467ddc3 100644
--- a/tensorflow/docs_src/get_started/mnist/beginners.md
+++ b/tensorflow/docs_src/get_started/mnist/beginners.md
@@ -180,11 +180,11 @@
 You can think of it as converting tallies of evidence into probabilities of our
 input being in each class. It's defined as:
 
-$$\text{softmax}(x) = \text{normalize}(\exp(x))$$
+$$\text{softmax}(evidence) = \text{normalize}(\exp(evidence))$$
 
 If you expand that equation out, you get:
 
-$$\text{softmax}(x)_i = \frac{\exp(x_i)}{\sum_j \exp(x_j)}$$
+$$\text{softmax}(evidence)_i = \frac{\exp(evidence_i)}{\sum_j \exp(evidence_j)}$$
 
 But it's often more helpful to think of softmax the first way: exponentiating
 its inputs and then normalizing them. The exponentiation means that one more
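For context on the change: the two definitions in the hunk are equivalent, and a minimal NumPy sketch (not part of this commit; the function and example values are illustrative) makes the "exponentiate, then normalize" reading concrete:

```python
import numpy as np

def softmax(evidence):
    # softmax(evidence) = normalize(exp(evidence)):
    # exponentiate each tally, then divide by the sum so the
    # outputs form a probability distribution (non-negative, sums to 1).
    # In practice, subtracting evidence.max() first is the usual trick
    # to avoid overflow; omitted here to mirror the doc's formula.
    exps = np.exp(evidence)
    return exps / exps.sum()

# Example: three evidence tallies become class probabilities.
print(softmax(np.array([2.0, 1.0, 0.1])))  # ~[0.659, 0.242, 0.099]
```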