 tensorflow/docs_src/programmers_guide/low_level_intro.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/tensorflow/docs_src/programmers_guide/low_level_intro.md b/tensorflow/docs_src/programmers_guide/low_level_intro.md
index a8cc0feae3..05709ad10a 100644
--- a/tensorflow/docs_src/programmers_guide/low_level_intro.md
+++ b/tensorflow/docs_src/programmers_guide/low_level_intro.md
@@ -312,7 +312,7 @@ the same input. @{tf.layers$Layers} are the preferred way to add trainable
parameters to a graph.
Layers package together both the variables and the operations that act
-on them, . For example a
+on them. For example a
[densely-connected layer](https://developers.google.com/machine-learning/glossary/#fully_connected_layer)
performs a weighted sum across all inputs
for each output and applies an optional
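
The context of this hunk describes a densely-connected layer: a weighted sum across all inputs for each output, plus an optional activation. A minimal sketch of that pattern in the TF 1.x API this guide targets follows; the input shape, unit count, and choice of `tf.nn.relu` are illustrative assumptions, not part of the patch.

```python
import tensorflow as tf

# Placeholder for a batch of 3-feature inputs; batch size left unspecified.
x = tf.placeholder(tf.float32, shape=[None, 3])

# A densely-connected layer: one weighted sum over all inputs per output
# unit, followed by an optional activation (here tf.nn.relu).
y = tf.layers.dense(x, units=2, activation=tf.nn.relu)

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)  # the layer creates variables that must be initialized
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))
```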
@@ -495,7 +495,7 @@ good. Here's what we got; your own output will almost certainly differ:
[ 0.10527515]]
```
-### loss
+### Loss
To optimize a model, you first need to define the loss. We'll use the mean
square error, a standard loss for regression problems.
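
As a concrete illustration of the mean square error this hunk's context introduces, here is a short sketch using `tf.losses.mean_squared_error` from the TF 1.x API; the target and prediction values are made-up examples.

```python
import tensorflow as tf

# Hypothetical targets and predictions for a regression problem.
y_true = tf.constant([[0.0], [-1.0], [-2.0], [-3.0]])
y_pred = tf.constant([[0.1], [-0.9], [-2.2], [-2.8]])

# Mean square error: the average of the squared differences.
loss = tf.losses.mean_squared_error(labels=y_true, predictions=y_pred)

with tf.Session() as sess:
    print(sess.run(loss))  # a single scalar, here 0.025
```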
@@ -521,7 +521,7 @@ TensorFlow provides
[**optimizers**](https://developers.google.com/machine-learning/glossary/#optimizer)
implementing standard optimization algorithms. These are implemented as
sub-classes of @{tf.train.Optimizer}. They incrementally change each
-variable in order to minimizethe loss. The simplest optimization algorithm is
+variable in order to minimize the loss. The simplest optimization algorithm is
[**gradient descent**](https://developers.google.com/machine-learning/glossary/#gradient_descent),
implemented by @{tf.train.GradientDescentOptimizer}. It modifies each
variable according to the magnitude of the derivative of loss with respect to
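
To make the corrected sentence concrete, here is a minimal sketch of `tf.train.GradientDescentOptimizer` incrementally changing a variable to minimize the loss; the toy variable, loss, learning rate, and step count are illustrative assumptions.

```python
import tensorflow as tf

# A toy variable and loss; gradient descent nudges the variable toward the
# minimum in proportion to the derivative of the loss with respect to it.
w = tf.Variable(5.0)
loss = tf.square(w - 2.0)

# Gradient descent with an illustrative learning rate of 0.1.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
train = optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(50):
        sess.run(train)  # each step moves w closer to the minimum at 2.0
    print(sess.run(w))   # approximately 2.0
```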