path: root/tensorflow/python/training
author	Vinu Rajashekhar <vinuraja@google.com>	2017-04-10 16:19:27 -0800
committer	TensorFlower Gardener <gardener@tensorflow.org>	2017-04-10 17:35:22 -0700
commit	d34eec7ec3b604b9f34dc0fc262577719514fcde (patch)
tree	326c84471daf2cea901c85a24575ae7f4c73a833 /tensorflow/python/training
parent	72c023d3967a3218cd3d830ce6e57f7c4d87a18c (diff)
Does a deep copy of the tensors output from GraphRunner::Run(...)
Currently, GraphRunner::Run(...) outputs tensors produced by running the Executor on the graph, but those tensors are backed by the allocator of the device created for that Run(...) call, and that allocator may be deleted along with the device. The deep copy transfers ownership of the output buffers to the global static cpu_allocator().

Previously this was not an issue because the allocator was always the global cpu_allocator(); a recent change added the option to tie allocations to a memory-limited, per-session allocator.

Change: 152756520
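For illustration, a minimal C++ sketch (not the code from this commit) of re-homing an executor-produced tensor onto the process-wide cpu_allocator(), so the copy outlives the device's allocator. The helper name CopyToCpuAllocator is hypothetical, and the byte-wise copy assumes a POD dtype.

	#include <cstring>

	#include "tensorflow/core/framework/allocator.h"
	#include "tensorflow/core/framework/tensor.h"

	// Hypothetical helper: allocate a fresh buffer from the global CPU
	// allocator and copy the tensor's bytes into it, so the result no
	// longer depends on the device allocator that produced the original.
	tensorflow::Tensor CopyToCpuAllocator(const tensorflow::Tensor& device_owned) {
	  tensorflow::Tensor copy(tensorflow::cpu_allocator(), device_owned.dtype(),
	                          device_owned.shape());
	  if (device_owned.NumElements() > 0) {
	    tensorflow::StringPiece src = device_owned.tensor_data();
	    tensorflow::StringPiece dst = copy.tensor_data();
	    // Byte-wise copy; valid for plain-old-data dtypes only.
	    std::memcpy(const_cast<char*>(dst.data()), src.data(), src.size());
	  }
	  return copy;
	}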
Diffstat (limited to 'tensorflow/python/training')
0 files changed, 0 insertions, 0 deletions