author     Derek Murray <mrry@google.com>        2016-02-05 18:19:07 -0800
committer  Manjunath Kudlur <keveman@gmail.com>  2016-02-06 08:46:41 -0800
commit     3fa9676bbc41826689e9b0e11a45e3fbdceae258 (patch)
tree       dcb787db2d252ccced947e0d5cf2f7733c931668 /tensorflow/python/ops/gradients_test.py
parent     241698b6ba6cd9b13d606a9e4603baa4f33891f2 (diff)
Consolidate the device function and device string handling in `tf.device()`.
The effect of this CL is to treat `with tf.device(device_name):` as
supplying a device function that *merges* `device_name` into the
device of ops created in that scope. (Merging is defined by
`tensorflow.python.framework.device.merge_device()`: essentially, for
each field defined in `device_name`, the merge function sets the
corresponding field of an op's device if it has not already been
set.) This makes it possible
to compose device blocks that set different parts of a device, and use
device strings in composition with device functions.
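The merge semantics can be illustrated with a minimal, self-contained sketch. This is not the real implementation (which lives in `tensorflow.python.framework.device`); the helper names and the dict-of-fields representation below are assumptions made purely for illustration:

```python
# Illustrative sketch: a device spec is modeled as a plain dict of
# fields such as job, task, device_type, device_index.

def parse_device(spec):
    """Parse a string like '/job:ps/task:0/gpu:1' into a field dict."""
    fields = {}
    for part in spec.strip("/").split("/"):
        if not part:
            continue
        name, _, value = part.partition(":")
        if name in ("cpu", "gpu"):  # shorthand for device type + index
            fields["device_type"] = name.upper()
            fields["device_index"] = value
        else:
            fields[name] = value
    return fields

def merge_device(op_fields, scope_fields):
    """Set each field from the scope only if the op leaves it unset."""
    merged = dict(op_fields)
    for name, value in scope_fields.items():
        merged.setdefault(name, value)
    return merged

# Inner scopes are applied first, so an outer `/job:ps` scope fills in
# only the fields an inner `/gpu:0` scope left unset:
op = merge_device({}, parse_device("/gpu:0"))
op = merge_device(op, parse_device("/job:ps"))
# op == {"device_type": "GPU", "device_index": "0", "job": "ps"}
```

Because the merge only fills unset fields, an op that has already been pinned to a specific device index keeps it even inside a scope that names a different one.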
A secondary effect of this CL is that it causes `with
tf.device(None):` to interoperate properly with device functions. As
with other `tf.Graph` contexts, entering a `with tf.device(None):` now
has the effect of ignoring all currently set device functions in the
outer context.
This CL makes some breaking changes to corner cases in the
`tf.device()`, `tf.Graph`, `tf.Operation`, and `tf.Tensor` APIs:
* Within a `with tf.device(device_string):` scope, the given device
string will now be *merged* into the device for ops created in that
scope. See the implementation of
`tensorflow.python.framework.device.merge_device()` for details.
Previously, device strings were maintained in a single "default
device" field, rather than a stack, so device strings from outer
contexts would be completely ignored. To obtain the previous
behavior, use `with tf.device(None), tf.device(device_string):`
instead.
* Within a `with tf.Graph.device(None):` scope, no device functions
from the outer context will be executed.
Previously, the `None` applied only to the device string, and all
device functions would be applied unconditionally.
* The `tf.Graph.get_default_device()` method is removed, because it no
longer has a well-defined meaning. To create a no-op device scope,
you can simply use `with tf.device(""):`.
* The `tf.Operation.device` and `tf.Tensor.device` properties now
return an empty string when no device has been set for an op. This
makes it easier to write code like `with tf.device(op.device):`,
which is robust to `op` having or not having a device (in which case
the scope should be a no-op).
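Taken together, the scoping rules above can be sketched with a plain-Python stand-in for the graph's device stack. The class and method names here are illustrative only, not TensorFlow internals:

```python
class DeviceStack:
    """Toy model of the scoping rules: scopes merge field-by-field,
    a None scope resets everything outer, "" is a no-op, and an
    unplaced op reports the empty string as its device."""

    def __init__(self):
        self._scopes = []  # innermost scope last

    def push(self, device_name):
        self._scopes.append(device_name)

    def pop(self):
        self._scopes.pop()

    def device_for_new_op(self):
        fields = {}
        # Walk scopes innermost-first; each scope only fills fields
        # the inner scopes left unset. A None scope discards every
        # outer scope, and "" contributes nothing.
        for name in reversed(self._scopes):
            if name is None:
                break
            for part in filter(None, name.strip("/").split("/")):
                key, _, value = part.partition(":")
                fields.setdefault(key, value)
        if not fields:
            return ""  # no device set: the device property is ""
        return "".join("/%s:%s" % (k, v) for k, v in sorted(fields.items()))

# Composition: the outer job scope and the inner gpu scope both apply.
stack = DeviceStack()
stack.push("/job:ps")
stack.push("/gpu:0")
# stack.device_for_new_op() -> "/gpu:0/job:ps"
stack.push(None)
# stack.device_for_new_op() -> ""  (None ignores all outer scopes)
```

In this model, `with tf.device(None), tf.device(device_string):` reproduces the old non-merging behavior: the `None` entry hides the outer scopes, so only `device_string` applies.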
Change: 114003979
Diffstat (limited to 'tensorflow/python/ops/gradients_test.py')
-rw-r--r--  tensorflow/python/ops/gradients_test.py  |  6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/tensorflow/python/ops/gradients_test.py b/tensorflow/python/ops/gradients_test.py
index e5a828e7bb..e194dabee4 100644
--- a/tensorflow/python/ops/gradients_test.py
+++ b/tensorflow/python/ops/gradients_test.py
@@ -167,7 +167,7 @@ class GradientsTest(test_util.TensorFlowTestCase):
       with g.device("/gpu:0"):
         wx = math_ops.matmul(w, x)
       gw = gradients.gradients(wx, [w], colocate_gradients_with_ops=True)[0]
-      self.assertEquals("/gpu:0", gw.device)
+      self.assertDeviceEqual("/gpu:0", gw.device)

   def testColocateGradientsWithAggregation(self):
     with ops.Graph().as_default() as g:
@@ -180,9 +180,9 @@ class GradientsTest(test_util.TensorFlowTestCase):
       with g.device("/gpu:0"):
         z = wx + wy
       gw1 = gradients.gradients(z, [w], colocate_gradients_with_ops=True)[0]
-      self.assertEquals("/gpu:1", gw1.device)
+      self.assertDeviceEqual("/gpu:1", gw1.device)
       gw2 = gradients.gradients(z, [w], colocate_gradients_with_ops=False)[0]
-      self.assertEquals(None, gw2.device)
+      self.assertDeviceEqual(None, gw2.device)

   def testBoundaryStop(self):
     # Test that we don't differentiate 'x'. The gradient function for 'x' is