| author | 2017-03-29 04:35:29 -0800 |
|---|---|
| committer | 2017-03-29 05:52:19 -0700 |
| commit | 3088d3664a99e7cb81ee190f4d65f4bd10407f42 (patch) |
| tree | 388d58803020b99dba01f4eb09ed7a43303f4cfd /tensorflow/compiler/xla/service/executable.cc |
| parent | 0a5652254eee640c1f400fc76dcae394bd9206a0 (diff) |
[XLA] Move kPad from GpuElementalIrEmitter::MakeElementGenerator to ElementalIrEmitter::MakeElementGenerator
There is nothing GPU-specific in GpuElementalIrEmitter::MakeElementGenerator
for kPad. Move it into the base implementation so that all subclasses inherit
it.
Change: 151564674
Diffstat (limited to 'tensorflow/compiler/xla/service/executable.cc')
-rw-r--r-- | tensorflow/compiler/xla/service/executable.cc | 3 |
1 file changed, 1 insertion, 2 deletions
diff --git a/tensorflow/compiler/xla/service/executable.cc b/tensorflow/compiler/xla/service/executable.cc
index 6a5e904f17..ef973676ea 100644
--- a/tensorflow/compiler/xla/service/executable.cc
+++ b/tensorflow/compiler/xla/service/executable.cc
@@ -40,8 +40,7 @@ Executable::ExecuteOnStreams(
   std::vector<perftools::gputools::DeviceMemoryBase> return_values(
       run_options.size());
-  for (tensorflow::gtl::ArraySlice<const ExecutableRunOptions>::size_type i = 0;
-       i < run_options.size(); ++i) {
+  for (size_t i = 0; i < run_options.size(); ++i) {
     // We cannot BlockHostUntilDone() on the already-launched executions in case
     // of error, since if the executions communicate, the initially launched
     // executions may never complete if not all executions are running.