path: root/tensorflow/g3doc/tutorials/mnist/mnist.py
author    Vijay Vasudevan <vrv@google.com>  2015-11-12 16:47:36 -0800
committer Vijay Vasudevan <vrv@google.com>  2015-11-12 16:47:36 -0800
commit d50565b35e886e7c3a201ea2f088790ed4b28de4 (patch)
tree   fa6bfce7311467e6c03ec314bb7947a49df7dd8c /tensorflow/g3doc/tutorials/mnist/mnist.py
parent 4dffee7f62d81ec9173aba1b0ef6b96e47f8037c (diff)
TensorFlow: Upstream changes from afternoon.
Changes:
- Ptrdiff -> DenseIndex change by @jiayq
- Fix to scoping the logging in logging.py by @dga
- Improvement to Conv2DBackpropFilter on CPU by Andy
- Remove lookup table wrappers for the time being (wasn't in our public API yet) by Yukata
- Add a check similar to numpy to make sure the user isn't in the tensorflow src directory by @vrv
- More changes for python 3 compat by @girving
- Make dropout preserve shape info from input (@mrry)
- Significant speed improvements by @zheng-xq to BFC allocator to bring it on par (CPU overhead-wise) with the region allocator. Make BFC allocator the default now that it's working well for a variety of models.
- Fix a bunch of typos reported by users (@vrv)
- Enable concat for bfloat16 on GPU by Ashish.

Base CL: 107733123
Diffstat (limited to 'tensorflow/g3doc/tutorials/mnist/mnist.py')
-rw-r--r--  tensorflow/g3doc/tutorials/mnist/mnist.py | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tensorflow/g3doc/tutorials/mnist/mnist.py b/tensorflow/g3doc/tutorials/mnist/mnist.py
index 64be52293a..925debac6e 100644
--- a/tensorflow/g3doc/tutorials/mnist/mnist.py
+++ b/tensorflow/g3doc/tutorials/mnist/mnist.py
@@ -91,7 +91,7 @@ def loss(logits, labels):
# be a 1.0 in the entry corresponding to the label).
batch_size = tf.size(labels)
labels = tf.expand_dims(labels, 1)
- indices = tf.expand_dims(tf.range(0, batch_size, 1), 1)
+ indices = tf.expand_dims(tf.range(batch_size), 1)
concated = tf.concat(1, [indices, labels])
onehot_labels = tf.sparse_to_dense(
concated, tf.pack([batch_size, NUM_CLASSES]), 1.0, 0.0)
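
The change drops the explicit `start` and `step` arguments to `tf.range`, relying on its defaults (`start=0`, `delta=1`), so the resulting indices are unchanged. The surrounding code builds a dense one-hot label matrix by scattering 1.0 at each `(row, label)` coordinate. As a rough illustration of what that block computes (not part of the original file, and using NumPy rather than the TensorFlow ops shown above), a minimal sketch:

```python
import numpy as np

NUM_CLASSES = 10  # assumed from the MNIST tutorial


def one_hot(labels):
    """Build a dense one-hot matrix, mirroring the sparse_to_dense call above."""
    batch_size = labels.shape[0]
    indices = np.arange(batch_size)            # analogous to tf.range(batch_size)
    onehot = np.zeros((batch_size, NUM_CLASSES), dtype=np.float32)
    onehot[indices, labels] = 1.0              # scatter 1.0 at (row, label)
    return onehot


# Example: labels [3, 0, 7] -> each row has a single 1.0 in column 3, 0, 7.
print(one_hot(np.array([3, 0, 7])))
```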