author:    Frank Chen <frankchn@google.com>  2017-07-13 14:51:47 -0700
committer: TensorFlower Gardener <gardener@tensorflow.org>  2017-07-13 14:55:38 -0700
commit:    a0ffaf3caa0234653035a692858606c7bdacd63b
tree:      6a6c1c220143e5fef04b834ff70064d34c3f6eec /tensorflow/compiler/tests/tensor_array_ops_test.py
parent:    8ad81fd88faa3facf206518064d421ad5ece4a5c
Merge changes from github.
END_PUBLIC
---
Commit fe5338177 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Go: Update generated wrapper functions for TensorFlow ops.
PiperOrigin-RevId: 161727345
---
Commit c65f69119 authored by Eugene Brevdo<ebrevdo@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Factor out DenseUpdate ops into dense_update_functor build dep.
Also add support for complex types.
PiperOrigin-RevId: 161726749
---
Commit 9a172989e authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Update ops-related pbtxt files.
PiperOrigin-RevId: 161726324
---
Commit fd5530d6e authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Add the bazel-toolchains repo to the workspace. This repo will be necessary for remote execution (specifically for cross-OS compilation).
PiperOrigin-RevId: 161719899
---
Commit 71c4ec8ed authored by Derek Murray<mrry@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Add a mechanism for switching between multiple iterators by feeding a handle.
With this change, you can do the following:
1. Fetch a string handle for any iterator, by evaluating the result of
`Iterator.string_handle()`.
2. Define an `Iterator` object based on a `tf.string` placeholder handle.
3. Feed the placeholder using an evaluated string handle to use a particular
iterator in a particular step.
Concretely, this allows you to define two iterators for a training dataset and
a test dataset, and choose which one to use on a per-run basis:
```python
train_iterator = tf.contrib.data.Dataset(...).make_one_shot_iterator()
train_iterator_handle = sess.run(train_iterator.string_handle())
test_iterator = tf.contrib.data.Dataset(...).make_one_shot_iterator()
test_iterator_handle = sess.run(test_iterator.string_handle())
handle = tf.placeholder(tf.string, shape=[])
iterator = tf.contrib.data.Iterator.from_string_handle(
    handle, train_iterator.output_types)
next_element = iterator.get_next()
loss = f(next_element)
train_loss = sess.run(loss, feed_dict={handle: train_iterator_handle})
test_loss = sess.run(loss, feed_dict={handle: test_iterator_handle})
```
PiperOrigin-RevId: 161719836
---
Commit 6d6dda807 authored by Kay Zhu<kayzhu@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[TF:XLA] Fix an issue where plugin/Executor backend is used by default when TF
is built from source with XLA support. See Github issue #11122.
The priority of the executor backend is set to be higher than the default (50)
and CPUs (<100), and is therefore selected as the default when tf.device is not
explicitly specified.
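The selection mechanism described above can be sketched in plain Python; the registry structure and names below are illustrative assumptions for exposition, not the actual XLA device registry API:

```python
# Hypothetical sketch of priority-based backend selection. Each backend
# registers with a numeric priority; when tf.device is not specified,
# the highest-priority backend is chosen as the default.
def pick_default_backend(backends):
    """backends: dict mapping backend name -> priority; highest wins."""
    return max(backends, key=backends.get)

# Before the fix: the plugin/Executor backend registered with a priority
# above the default (50) and CPU (<100), so it was silently selected.
registry = {"default": 50, "cpu": 99, "executor_plugin": 150}
assert pick_default_backend(registry) == "executor_plugin"

# Lowering the executor's priority restores CPU as the default choice.
registry["executor_plugin"] = 40
assert pick_default_backend(registry) == "cpu"
```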
PiperOrigin-RevId: 161717173
---
Commit 6b28eb084 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Rename HloLocation to HloPosition, to avoid ambiguity with MemoryLocation.
PiperOrigin-RevId: 161716528
---
Commit 8e7f57371 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Expose tf.contrib.nn.rank_sampled_softmax_loss.
PiperOrigin-RevId: 161716450
---
Commit e424d209a authored by Peter Hawkins<phawkins@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[TF:XLA] Use a more numerically accurate formulation of ResourceApplyRMSProp.
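The commit message does not spell out the reformulation, but the usual numerical concern in RMSProp is where epsilon enters the denominator. A plain-Python sketch of one RMSProp step for a scalar parameter (hyperparameter names and the epsilon placement are assumptions here, not the TF:XLA kernel):

```python
import math

def rmsprop_step(var, ms, grad, lr=0.01, rho=0.9, eps=1e-10):
    """One RMSProp update for a scalar parameter.

    Keeping epsilon inside the square root, sqrt(ms + eps), avoids the
    blow-up of grad / sqrt(ms) when the running average ms underflows
    toward zero; this is the kind of reformulation such a change
    typically targets (exact details are an assumption here).
    """
    ms = rho * ms + (1.0 - rho) * grad * grad
    var = var - lr * grad / math.sqrt(ms + eps)
    return var, ms

# With ms == 0 and a tiny gradient, the update stays finite.
var, ms = rmsprop_step(var=1.0, ms=0.0, grad=1e-20)
assert math.isfinite(var)
```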
PiperOrigin-RevId: 161706120
---
Commit 45a58d378 authored by Skye Wanderman-Milne<skyewm@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Introduce Python-only extensions to the C API
Implements an incomplete version of Operation._add_control_input()
using a new extension to make sure the plumbing works.
This also adds header guards to c_api_internal.h, which were missing. For some reason the missing guards caused problems in the cmake build even though there doesn't appear to be any #include cycles.
PiperOrigin-RevId: 161705859
---
Commit 4f5433634 authored by Jonathan Hseu<jhseu@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Rename TpuEstimator to TPUEstimator and TpuConfig to TPUConfig to follow PEP8
naming conventions.
PiperOrigin-RevId: 161704561
---
Commit 38180d7bb authored by Yun Peng<pcloudy@google.com>
Committed by gunan<gunan@google.com>:
Disable nn_test on Windows (#11445)
---
Commit e1de7a1b0 authored by Yun Peng<pcloudy@google.com>
Committed by gunan<gunan@google.com>:
Windows Bazel Build: Build TensorFlow with wrapper-less CROSSTOOL (#11454)
---
Commit c9d03a568 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Add tf.contrib.nn.rank_sampled_softmax_loss, a variant of tf.nn.sampled_softmax_loss that has been shown to improve rank loss. Paper: https://arxiv.org/abs/1707.03073
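As background for the sampled-loss family this op extends, here is a minimal plain-Python sketch of softmax cross-entropy restricted to a sampled subset of classes. The function name, argument names, and sampling scheme are simplified assumptions for illustration; see the paper and the real tf.nn.sampled_softmax_loss op for the actual formulation:

```python
import math

def sampled_softmax_loss(logits, true_class, sampled_classes):
    """Cross-entropy over {true_class} plus sampled negative classes.

    Instead of normalizing over the full vocabulary, the softmax is
    restricted to the true class and a small set of sampled negatives,
    which is what makes the loss cheap for large output spaces.
    """
    classes = [true_class] + [c for c in sampled_classes if c != true_class]
    subset = [logits[c] for c in classes]
    m = max(subset)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(x - m) for x in subset))
    return log_z - logits[true_class]  # -log softmax(true_class)

logits = [2.0, 0.5, -1.0, 0.0]
loss = sampled_softmax_loss(logits, true_class=0, sampled_classes=[2, 3])
assert loss >= 0.0
```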
PiperOrigin-RevId: 161702455
---
Commit 9aa0dcbf2 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Add shape check for MakeQuantileSummariesOp.
PiperOrigin-RevId: 161698801
---
Commit 9c4da4a24 authored by vhasanov<KyotoSunshine@users.noreply.github.com>
Committed by Frank Chen<frankchn@gmail.com>:
Deleted unnecessary repetition of the same text. (#11459)
The same text was repeated two times. I deleted the repetition.
---
Commit d1e3cadda authored by DimanNe<dimanne@gmail.com>
Committed by drpngx<drpngx@users.noreply.github.com>:
Fix linking options issued by bazel in order to make gradients register (#11449)
---
Commit 8605f7ab8 authored by Taehoon Lee<me@taehoonlee.com>
Committed by Frank Chen<frankchn@gmail.com>:
Fix typos (#11444)
---
Commit 7c1fe9068 authored by Karl Lessard<karllessard@users.noreply.github.com>
Committed by Frank Chen<frankchn@gmail.com>:
[Java] Add base classes and utilities for operation wrappers. (#11188)
* Add base classes and utilities for operation wrappers.
* Rename Input interface to Operand
* Introduce changes after code review
---
Commit 2195db6d8 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Remove unused flag: xla_hlo_graph_for_compute_constant
PiperOrigin-RevId: 161686867
---
Commit a72fc31bc authored by Martin Wicke<martin.wicke@gmail.com>
Committed by Martin Wicke<martin.wicke@gmail.com>:
Remove tabs. Unassign contrib/framework.
---
Commit 6e74bd65a authored by Martin Wicke<martin.wicke@gmail.com>
Committed by Martin Wicke<martin.wicke@gmail.com>:
Add CODEOWNERS
Added what we know about contrib mainly, and some well-separated components.
---
Commit de546d066 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
BUILD cleanup in tensorflow/compiler/...
PiperOrigin-RevId: 161679855
---
Commit 576c7b1ec authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
BEGIN_PUBLIC
Automated g4 rollback of changelist 161218103
PiperOrigin-RevId: 161868747
Diffstat (limited to 'tensorflow/compiler/tests/tensor_array_ops_test.py')
-rw-r--r--  tensorflow/compiler/tests/tensor_array_ops_test.py | 56
1 file changed, 30 insertions(+), 26 deletions(-)
diff --git a/tensorflow/compiler/tests/tensor_array_ops_test.py b/tensorflow/compiler/tests/tensor_array_ops_test.py
index b3067be51d..f277314352 100644
--- a/tensorflow/compiler/tests/tensor_array_ops_test.py
+++ b/tensorflow/compiler/tests/tensor_array_ops_test.py
@@ -139,7 +139,7 @@ class TensorArrayTest(xla_test.XLATestCase):
       ta = tensor_array_ops.TensorArray(
           dtype=tf_dtype, tensor_array_name="foo", size=3)

-      # Unpack a matrix into vectors
+      # Unpack a matrix into vectors.
       w1 = ta.unstack(convert([[1.0, 1.1], [2.0, 2.1], [3.0, 3.1]]))
       r0 = w1.read(0)
       r1 = w1.read(1)
@@ -180,7 +180,7 @@ class TensorArrayTest(xla_test.XLATestCase):
       convert = _make_converter(tf_dtype)

-      # Split an empty vector
+      # Split an empty vector.
       lengths = constant_op.constant([0, 0, 0])
       w0 = ta.split(convert([]), lengths=lengths)
       r0 = w0.read(0)
@@ -192,7 +192,7 @@ class TensorArrayTest(xla_test.XLATestCase):
       self.assertAllEqual(convert([]), d1)
       self.assertAllEqual(convert([]), d2)

-      # Split a vector
+      # Split a vector.
       ta = tensor_array_ops.TensorArray(
           dtype=tf_dtype, tensor_array_name="foo", size=3)
       lengths = constant_op.constant([1, 1, 1])
@@ -206,7 +206,7 @@ class TensorArrayTest(xla_test.XLATestCase):
       self.assertAllEqual(convert([2.0]), d1)
       self.assertAllEqual(convert([3.0]), d2)

-      # Split a matrix
+      # Split a matrix.
       ta = tensor_array_ops.TensorArray(
           dtype=tf_dtype, tensor_array_name="foo", size=3)
       lengths = constant_op.constant([1, 1, 1])
@@ -319,27 +319,31 @@ class TensorArrayTest(xla_test.XLATestCase):
       ta = tensor_array_ops.TensorArray(
           dtype=dtypes.float32, tensor_array_name="foo", size=3)

-      # Test writing the wrong datatype
+      # Test writing the wrong datatype.
       with self.assertRaisesOpError(
           "TensorArray dtype is float but op has dtype int32"):
         ta.write(-1, np.int32(7)).flow.eval()

   def testTensorArrayReadWrongIndexOrDataTypeFails(self):
-    with self.test_session(), self.test_scope():
-      ta = tensor_array_ops.TensorArray(
-          dtype=dtypes.float32, tensor_array_name="foo", size=3)
-
-      w0 = ta.write(0, [[4.0, 5.0]])
-
-      # Test reading wrong datatype
-      r0_bad = gen_data_flow_ops._tensor_array_read_v3(
-          handle=w0.handle, index=0, dtype=dtypes.float64, flow_in=w0.flow)
-      with self.assertRaisesOpError(
-          "TensorArray dtype is float but op has dtype double."):
-        r0_bad.eval()
-
-      # Test reading from a different index than the one we wrote to
-      w0.read(1)
+    # Find two different floating point types, create an array of
+    # the first type, but try to read the other type.
+    if len(self.float_types) > 1:
+      dtype1 = self.float_types[0]
+      dtype2 = self.float_types[1]
+      with self.test_session(), self.test_scope():
+        ta = tensor_array_ops.TensorArray(
+            dtype=dtype1, tensor_array_name="foo", size=3)
+
+        w0 = ta.write(0, [[4.0, 5.0]])
+
+        # Test reading wrong datatype.
+        r0_bad = gen_data_flow_ops._tensor_array_read_v3(
+            handle=w0.handle, index=0, dtype=dtype2, flow_in=w0.flow)
+        with self.assertRaisesOpError("TensorArray dtype is "):
+          r0_bad.eval()
+
+        # Test reading from a different index than the one we wrote to
+        w0.read(1)

   def testTensorArraySplitIncompatibleShapesFails(self):
     with self.test_session(), self.test_scope():
@@ -487,7 +491,7 @@ class TensorArrayTest(xla_test.XLATestCase):
       r0 = w1.read(0)
       s0 = w1.concat()

-      # Test gradient accumulation between read(0), pack(), and concat()
+      # Test gradient accumulation between read(0), pack(), and concat().
       with ops.control_dependencies([p0, r0, s0]):
         grad_r = gradients_impl.gradients(
             ys=[p0, r0, s0],
@@ -536,7 +540,7 @@ class TensorArrayTest(xla_test.XLATestCase):
       r0_1 = w.read(0)
       r1 = w.read(1)

-      # Test combined gradients + aggregation of read(0)
+      # Test combined gradients + aggregation of read(0).
       grad = gradients_impl.gradients(
           ys=[r0, r0_1, r1],
           xs=[value],
@@ -744,7 +748,7 @@ class TensorArrayTest(xla_test.XLATestCase):
       grad_b_t, = session.run([grad_b])
       self.assertAllEqual(grad_b_t, g0)

-      # Test gradients calculated jointly
+      # Test gradients calculated jointly.
       joint_grad_a_t, joint_grad_b_t = session.run([grad_a, grad_b])
       self.assertAllEqual(joint_grad_a_t, g0)
       self.assertAllEqual(joint_grad_b_t, g0)
@@ -877,7 +881,7 @@ class TensorArrayTest(xla_test.XLATestCase):
       x = constant_op.constant([2.0, 3.0])
       w = ta.unstack(x)
       r0 = w.read(0)
-      # calculate (dr0/dx0, dr0/dx1). since r0 = x0, gradients are (1, 0).
+      # Calculate (dr0/dx0, dr0/dx1). since r0 = x0, gradients are (1, 0).
       grad_r0 = gradients_impl.gradients(ys=[r0], xs=[x], grad_ys=[1.0])
       grad_r0_vals = session.run(grad_r0)[0]
       self.assertAllEqual(grad_r0_vals, [1.0, 0.0])
@@ -927,7 +931,7 @@ class TensorArrayTest(xla_test.XLATestCase):
       r0 = w.read(1)
       r1 = w.read(8)

-      # Test combined gradients + aggregation of read(0)
+      # Test combined gradients + aggregation of read(0).
       grad = gradients_impl.gradients(
           ys=[r0, r1], xs=[value], grad_ys=[[2.0, 3.0], [4.0, 5.0]])
       read_vals, grad_vals = session.run([[r0, r1], grad])
@@ -951,7 +955,7 @@ class TensorArrayTest(xla_test.XLATestCase):
       w = ta.unstack(values)
       g = w.gather(indices)

-      # Test combined gradients + aggregation of read(0)
+      # Test combined gradients + aggregation of read(0).
       grad = gradients_impl.gradients(
           ys=[g], xs=[values], grad_ys=[[[2.0, 3.0], [4.0, 5.0]]])
       g_vals, grad_vals = session.run([[g], grad])