| field | value | date |
|---|---|---|
| author | Martin Wicke <wicke@google.com> | 2017-01-04 21:25:34 -0800 |
| committer | TensorFlower Gardener <gardener@tensorflow.org> | 2017-01-04 21:46:08 -0800 |
| commit | 333dc32ff79af21484695157f3d141dc776f7c02 | |
| tree | b379bcaa56bfa54d12ea839fb7e62ab163490743 /tensorflow/examples/udacity | |
| parent | d9541696b068cfcc1fab66b03d0b8d605b64f14d | |
Change arg order for {softmax,sparse_softmax,sigmoid}_cross_entropy_with_logits to be (labels, logits), and force use of named args to avoid accidents.
Change: 143629623
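In plain Python terms, the migration this commit enforces looks like the following sketch (TF 1.x-era graph API, as used in the notebooks below; the placeholder tensors are illustrative, not taken from the diff):

```python
import tensorflow as tf  # TF 1.x-era graph API, as in these notebooks

# Illustrative stand-ins for the notebooks' logits and one-hot labels.
logits = tf.constant([[2.0, 1.0, 0.1]])
tf_train_labels = tf.constant([[1.0, 0.0, 0.0]])

# Old call style (positional, in (logits, labels) order) -- disallowed
# after this change:
#   tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels)

# New call style: keyword arguments make the pairing explicit, so
# reordering the parameters in the signature cannot silently swap tensors.
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(
        labels=tf_train_labels, logits=logits))
```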
Diffstat (limited to 'tensorflow/examples/udacity')

| mode | file | lines changed |
|---|---|---|
| -rw-r--r-- | tensorflow/examples/udacity/2_fullyconnected.ipynb | 4 |
| -rw-r--r-- | tensorflow/examples/udacity/4_convolutions.ipynb | 2 |
| -rw-r--r-- | tensorflow/examples/udacity/6_lstm.ipynb | 2 |

3 files changed, 4 insertions, 4 deletions
```diff
diff --git a/tensorflow/examples/udacity/2_fullyconnected.ipynb b/tensorflow/examples/udacity/2_fullyconnected.ipynb
index 8a845171a4..a6a206307a 100644
--- a/tensorflow/examples/udacity/2_fullyconnected.ipynb
+++ b/tensorflow/examples/udacity/2_fullyconnected.ipynb
@@ -271,7 +271,7 @@
     "  # cross-entropy across all training examples: that's our loss.\n",
     "  logits = tf.matmul(tf_train_dataset, weights) + biases\n",
     "  loss = tf.reduce_mean(\n",
-    "    tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))\n",
+    "    tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))\n",
     "  \n",
     "  # Optimizer.\n",
     "  # We are going to find the minimum of this loss using gradient descent.\n",
@@ -448,7 +448,7 @@
     "  # Training computation.\n",
     "  logits = tf.matmul(tf_train_dataset, weights) + biases\n",
     "  loss = tf.reduce_mean(\n",
-    "    tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))\n",
+    "    tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))\n",
     "  \n",
     "  # Optimizer.\n",
     "  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n",
diff --git a/tensorflow/examples/udacity/4_convolutions.ipynb b/tensorflow/examples/udacity/4_convolutions.ipynb
index 464d2c836e..d607dddbb2 100644
--- a/tensorflow/examples/udacity/4_convolutions.ipynb
+++ b/tensorflow/examples/udacity/4_convolutions.ipynb
@@ -286,7 +286,7 @@
     "  # Training computation.\n",
     "  logits = model(tf_train_dataset)\n",
     "  loss = tf.reduce_mean(\n",
-    "    tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))\n",
+    "    tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))\n",
     "  \n",
     "  # Optimizer.\n",
     "  optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)\n",
diff --git a/tensorflow/examples/udacity/6_lstm.ipynb b/tensorflow/examples/udacity/6_lstm.ipynb
index 64e913acf8..7e78c5328f 100644
--- a/tensorflow/examples/udacity/6_lstm.ipynb
+++ b/tensorflow/examples/udacity/6_lstm.ipynb
@@ -576,7 +576,7 @@
     "  logits = tf.nn.xw_plus_b(tf.concat_v2(outputs, 0), w, b)\n",
     "  loss = tf.reduce_mean(\n",
     "    tf.nn.softmax_cross_entropy_with_logits(\n",
-    "      logits, tf.concat_v2(train_labels, 0)))\n",
+    "      labels=tf.concat_v2(train_labels, 0), logits=logits))\n",
     "\n",
     "  # Optimizer.\n",
     "  global_step = tf.Variable(0)\n",
```
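For reference, the "force use of named args" part of the commit message is typically implemented, in Python-2-compatible code where keyword-only `*` parameters are unavailable, with a sentinel first parameter. The sketch below illustrates that pattern; the helper name and signature are hypothetical, not the actual TensorFlow source:

```python
def _ensure_named_args(name, sentinel, labels, logits):
    # A positional first argument lands in the sentinel slot, which means
    # the caller used the old positional order -- fail loudly instead of
    # silently training against swapped tensors.
    if sentinel is not None:
        raise ValueError(
            "Only call `%s` with named arguments (labels=..., logits=...)" % name)
    if labels is None or logits is None:
        raise ValueError("Both `labels` and `logits` must be provided.")


def softmax_cross_entropy_with_logits(_sentinel=None, labels=None, logits=None):
    _ensure_named_args("softmax_cross_entropy_with_logits",
                       _sentinel, labels, logits)
    # ... actual cross-entropy computation elided ...
```

With this shape, `softmax_cross_entropy_with_logits(logits, labels)` raises immediately, which is why every call site in the notebooks above had to be rewritten with explicit keywords.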