path: root/tensorflow/core/common_runtime/copy_tensor.cc
* Support nested variants in CopyHostToDevice and CopyDeviceToHost. (Saurabh Saxena, 2018-09-27)
  PiperOrigin-RevId: 214853860
* [tf.data] Add `tf.contrib.data.Optional` support to `Structure`. (Derek Murray, 2018-09-23)
  This change switches `tf.contrib.data.Optional` to use a `Structure` class to represent the structure of its value, instead of `output_types`, `output_shapes`, and `output_classes` properties. It adds support for nesting `Optional` objects and representing their structure.
  This change also makes a modification to the `Structure` class: `Structure.is_compatible_with(x)` now takes another `Structure` as the `x` argument, instead of a value. This makes it easier to work with nested structures (where we might not have a value readily available), and better matches the interface of other `is_compatible_with()` methods (e.g. in `tf.TensorShape` and `tf.DType`).
  Finally, in the process of making this change, I observed possible crash-failures when a DT_VARIANT tensor containing another DT_VARIANT tensor is copied between CPU and GPU. This change "fixes" the immediate problem by raising an UnimplementedError, but more work will be necessary to support the full range of use cases (see the sketch below).
  PiperOrigin-RevId: 214198993
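Below is a minimal, hypothetical guard of the kind described in the last paragraph (the helper name CheckNoNestedVariant and its placement are invented for illustration; only the Tensor/Variant/errors calls come from the TensorFlow framework headers): it detects a DT_VARIANT tensor whose elements themselves wrap DT_VARIANT tensors and returns an Unimplemented status instead of crashing.

    #include "tensorflow/core/framework/tensor.h"
    #include "tensorflow/core/framework/variant.h"
    #include "tensorflow/core/lib/core/errors.h"
    #include "tensorflow/core/lib/core/status.h"

    namespace tensorflow {

    // Hypothetical sketch, not the actual copy_tensor.cc code.
    Status CheckNoNestedVariant(const Tensor& input) {
      if (input.dtype() != DT_VARIANT) return Status::OK();
      const auto flat = input.flat<Variant>();
      for (int64 i = 0; i < flat.size(); ++i) {
        // A Variant element may itself hold a Tensor; if that Tensor is again
        // a DT_VARIANT, this copy path cannot handle it.
        const Tensor* nested = flat(i).get<Tensor>();
        if (nested != nullptr && nested->dtype() == DT_VARIANT) {
          return errors::Unimplemented(
              "Copying a DT_VARIANT tensor that contains another DT_VARIANT "
              "tensor between CPU and GPU is not supported.");
        }
      }
      return Status::OK();
    }

    }  // namespace tensorflow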
* [TF] Variant improvements. (Eugene Brevdo, 2018-09-11)
  1. Change Variant Decode to accept VariantTensorData (non-ref). This should allow some optimization in the future. In the meantime it means removing the variant.h include from tensor.h, since variant_encode_decode.h now relies on tensor.h and variant.h now relies on that. It also means we found a number of places where tensor.proto.h, variant.h, and mutex.h were being imported through tensor.h (along with several other headers); those places now import them directly in order to compile. (A hedged sketch of the new Decode shape follows this entry.)
  2. Move the Variant registry to use TypeIndex instead of a TypeName string; this should speed up registry lookups.
  PiperOrigin-RevId: 212478896
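A minimal sketch of the by-value Decode idea, assuming only the shapes shown below (MyVariantValue is illustrative, not TensorFlow code): taking VariantTensorData by value lets a caller that no longer needs the serialized data move it in, rather than forcing a copy through a const reference.

    #include <utility>

    #include "tensorflow/core/framework/variant_tensor_data.h"

    // Illustrative value type; the point is the parameter style of Decode.
    struct MyVariantValue {
      tensorflow::VariantTensorData data;

      // Old style (for contrast): a const reference always copies the payload.
      //   bool Decode(const tensorflow::VariantTensorData& d) { data = d; return true; }

      // New style: a by-value parameter can be moved from, so the caller pays
      // for a move instead of a copy when it hands over ownership.
      bool Decode(tensorflow::VariantTensorData d) {
        data = std::move(d);
        return true;
      }
    };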
* [tf.data] Add support for copying `Optional` variants to/from GPU. (Derek Murray, 2018-08-05)
  PiperOrigin-RevId: 207490563
* Add GPUOptions::num_dev_to_dev_copy_streams to allow creation of more than one device-to-device copy stream per GPU device. (A. Unique TensorFlower, 2018-06-28)
  This is an experimental feature that will have no effect unless copy operations explicitly request a stream other than 0, which currently does not occur anywhere in a standard build. Eventually it may be of benefit in the presence of multiple bi-directional concurrent data copies. (A hedged configuration sketch follows this entry.)
  PiperOrigin-RevId: 202354513
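A hedged configuration sketch: the field name below is taken from the commit title, but whether it sits directly on GPUOptions or under a nested experimental message should be confirmed in tensorflow/core/protobuf/config.proto; treat the setter call as an assumption.

    #include "tensorflow/core/protobuf/config.pb.h"

    // Assumed field location (directly on GPUOptions, as the commit title says).
    tensorflow::ConfigProto MakeConfigWithExtraCopyStreams() {
      tensorflow::ConfigProto config;
      // Request two device-to-device copy streams per GPU. Per the commit
      // message, this has no effect unless a copy explicitly asks for a stream
      // index other than 0.
      config.mutable_gpu_options()->set_num_dev_to_dev_copy_streams(2);
      return config;
    }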
* Cleaning up tracing code. (A. Unique TensorFlower, 2018-04-30)
  PiperOrigin-RevId: 194768567
* Remove nonfunctional and accidentally pessimizing value category casts. (A. Unique TensorFlower, 2017-11-20)
  A conditional expression has only ONE fixed value category. Because the two operands in the code as written have different value categories, the result is in fact a prvalue, i.e. a copy. This seems unintended, and we should simply preserve the existing lvalue. If we do want to allow moving, we need multiple statements (see the standalone illustration after this entry):
  if (num == 1) { f(std::move(copier)); } else { f(copier); }
  PiperOrigin-RevId: 176414503
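A standalone illustration of the language rule behind this commit (simplified types, not the original copy_tensor.cc code): when the two operands of ?: have different value categories, the whole expression becomes a prvalue, so the lvalue operand is copied into a temporary even though the callee only needs a reference.

    #include <cstdio>
    #include <utility>

    struct Copier {
      Copier() = default;
      Copier(const Copier&) { std::puts("copy"); }
      Copier(Copier&&) noexcept { std::puts("move"); }
    };

    // Takes its argument by const reference, like a callback parameter would.
    void f(const Copier&) {}

    int main() {
      Copier copier;
      int num = 2;

      // Binds the lvalue directly to the reference: no temporary, prints nothing.
      f(copier);

      // Mixed value categories: the conditional expression is a prvalue, so a
      // temporary Copier is materialized. With num != 1 this prints "copy";
      // the cast pessimizes rather than helps, and the fix is simply f(copier).
      f(num == 1 ? std::move(copier) : copier);

      return 0;
    }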
* Support non-scalar variant device copy. (Eugene Brevdo, 2017-11-13)
  PiperOrigin-RevId: 175546097
* Internal Variant API allowing registering Variants to be copied from/to GPU. (Eugene Brevdo, 2017-10-03)
  Adds a test in variant_op_copy_test. Modifies the base GPUDevice to use this registry if it sees a singleton variant. Modifies the rendezvous manager to do the same. (A hypothetical sketch of the registration pattern follows this entry.)
  PiperOrigin-RevId: 170908757
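A self-contained, hypothetical sketch of the registration pattern this commit describes (every name below is illustrative; it is not the real TensorFlow variant registry API): copy functions are registered per variant type name and copy direction, and the device or rendezvous code looks one up before attempting the copy.

    #include <functional>
    #include <map>
    #include <string>
    #include <utility>

    enum class CopyDirection { kHostToDevice, kDeviceToHost };

    struct FakeVariant { std::string type_name; /* payload elided */ };

    // A copier returns true on success; real code would be asynchronous and
    // Status-returning.
    using VariantCopyFn = std::function<bool(const FakeVariant& from, FakeVariant* to)>;

    // Registry keyed by (variant type name, copy direction).
    class VariantCopyRegistry {
     public:
      static VariantCopyRegistry* Global() {
        static auto* r = new VariantCopyRegistry;
        return r;
      }
      void Register(const std::string& type_name, CopyDirection dir, VariantCopyFn fn) {
        fns_[{type_name, dir}] = std::move(fn);
      }
      // Returns nullptr when no copier was registered for (type, direction);
      // the caller can then fail the copy with a clear error.
      const VariantCopyFn* Lookup(const std::string& type_name, CopyDirection dir) const {
        auto it = fns_.find({type_name, dir});
        return it == fns_.end() ? nullptr : &it->second;
      }

     private:
      std::map<std::pair<std::string, CopyDirection>, VariantCopyFn> fns_;
    };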
* Fix some ClangTidy warnings in third_party/tensorflow/core/common_runtime. (Derek Murray, 2017-04-21)
  Change: 153861629
* Add fallback path for device->device copies that copies via the host (see the sketch below). (A. Unique TensorFlower, 2016-10-14)
  Change: 136164533
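A hypothetical, self-contained sketch of the fallback idea (all types and function names below are stand-ins, not the real copy_tensor.cc API): when no direct device-to-device copier is available, stage the tensor through host memory by chaining a device-to-host copy with a host-to-device copy.

    #include <functional>

    struct FakeTensor {};        // stand-in for tensorflow::Tensor
    using FakeStatus = int;      // stand-in for tensorflow::Status; 0 == OK
    using DoneCallback = std::function<void(FakeStatus)>;

    // Assume these two paths already exist (trivial stubs here).
    inline void CopyDeviceToHost(const FakeTensor&, FakeTensor*, DoneCallback done) { done(0); }
    inline void CopyHostToDevice(const FakeTensor&, FakeTensor*, DoneCallback done) { done(0); }

    // Fallback device->device copy: device -> host staging buffer -> device.
    inline void CopyDeviceToDeviceViaHost(const FakeTensor& src, FakeTensor* dst,
                                          DoneCallback done) {
      // The staging buffer must outlive both asynchronous copies.
      auto* host_tmp = new FakeTensor;
      CopyDeviceToHost(src, host_tmp, [host_tmp, dst, done](FakeStatus s) {
        if (s != 0) {  // first hop failed: clean up and report the error
          delete host_tmp;
          done(s);
          return;
        }
        CopyHostToDevice(*host_tmp, dst, [host_tmp, done](FakeStatus s2) {
          delete host_tmp;
          done(s2);
        });
      });
    }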
* A few micro-optimizations / cleanup in tensorflow/core. (A. Unique TensorFlower, 2016-10-07)
  Change: 135518672
* Fix ~63 ClangTidy performance findings in TensorFlow. (Vincent Vanhoucke, 2016-08-31)
  Change: 131891101
* A series of changes to significantly reduce the number of allocations done by models distributed across many devices. (A. Unique TensorFlower, 2016-06-27)
  A small microbenchmark model that runs two banks (A and B) of 30 nodes with a 30x30 full shuffle between them, where each of the nodes in A and in B run with one node on each of the 30 devices (so 30*29+30+30, or ~930 separate RPCs), was showing ~111,000 allocations per iteration of the graph. With the changes here, this is now down to ~64,300 allocations per iteration. Changes include (a standalone illustration of two of these idioms follows this entry):
  o DeviceContext::CopyDeviceTensorToCPU and related helper routines: use StringPiece instead of const string& for the tensor name (avoids creating a string in some cases where the caller only has a StringPiece available).
  o Change some Rendezvous and BaseRemoteRendezvous interfaces to take a 'const Rendezvous::ParsedKey& key', rather than 'const string& key'. In many cases, the callers were already having to parse the key into a ParsedKey, so we were doing the parsing multiple times at different levels as we processed receiving or sending of a tensor. This reduces the number of times that we parse a key as it flows from a Send node through to a Recv node on another worker.
  o Changed Rendezvous::ParsedKey so that it makes a copy of the underlying full key, and then uses StringPiece objects to point into this copy for the src_device, dst_device, and edge_name pieces. This turns 3 string allocations into 1 per Rendezvous::ParseKey call.
  o Added a new StringPiece Rendezvous::ParsedKey::FullKey() accessor to return a StringPiece for the underlying full key, and used that in a few places (mostly logging) where that is useful.
  o In many places, used std::move(function_variable) when assigning to an instance variable. This eliminates a very large number of excess std::function allocations/initializations (~56,000 of the baseline allocations were related to std::function setup or cloning; this is now down to ~11,000 after this CL).
  o In the RPC-based remote workers (StubbyRemoteWorker and GrpcRemoteWorker), changed the code path in RecvTensorAsync to avoid creating a std::function with 6 arguments unless necessary. Three cases are now handled separately: (a) we're not logging, and we didn't make a copy of the request that we need to free: just use the passed-in 'StatusCallback done' object directly, without creating a wrapper std::function object at all; (b) we're not logging, but we made a copy of the request that we need to free: create a simple wrapper std::function that invokes the passed-in 'done' callback and then frees the req_copy request copy object; (c) we're logging: create the std::function object with all the necessary state to log when the recv has finished.
  o Changed DeviceMgr::LookupDevice to take a StringPiece rather than a const string&, and changed the hash table to use StringPiece keys. This allows clients that only have a StringPiece device name in hand to avoid a string creation to look up the Device* object.
  o Changed ExecutorState to use a specialized TaggedNodeReadyQueue that internally uses a gtl::InlinedVector<TaggedNode, 16>, rather than a std::deque<TaggedNode>, for keeping track of nodes ready to execute. This is faster because it avoids allocations entirely if the ready-node queue doesn't get bigger than 16, and inlined vectors are generally faster than std::deque, at a minor risk of using more memory if this queue grows to very large numbers of ready nodes (mostly imaginable only in pathological graphs).
  o In ExecutorState::Process, allocated a single ExecutorState::AsyncState object to keep track of all the state we need to preserve for an asynchronously executed node, rather than keeping this state implicitly via a very large number of arguments to a lambda function.
  o Added a new std::atomic<bool> status_is_ok_ in BaseRemoteRendezvous. This allows us to avoid acquiring the lock when we just want to check whether the status is non-OK in BaseRemoteRendezvous::Send and BaseRemoteRendezvous::ValidateDevices.
  o In GraphMgr::RunAllDone, changed the assignment of args.runner to avoid one extra level of std::function indirection (binding the function directly to the ThreadPool::Schedule routine, rather than creating an intermediate lambda that invokes it inside the body of the lambda).
  o Added a freelist of RpcRecvTensorCall objects in third_party/tensorflow/core/distributed_runtime/rpc/rpc_rendezvous_mgr.cc.
  o Changed third_party/tensorflow/core/framework/rendezvous.cc to keep the hashtable of Item* objects keyed by uint64 (hash of the tensor name), rather than the full-string tensor name. Collisions in the 64-bit hash space should basically never happen.
  o Sped up DeviceNameUtils::ParseFullName by optimizing for the common ordering of the /job, /replica, /task, /device parts. The parsing code was general enough to handle any order, but did so by comparing the prefixes 4, 3, 2, and 1 times, respectively, rather than 1, 1, 1, and 1 times.
  o Sped up DeviceNameUtils::SplitDeviceName to avoid extra string copies.
  Change: 125991891
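A standalone, hedged illustration of two of the idioms listed above (simplified names; std::string_view stands in for StringPiece): accepting a string view so callers holding only a slice of a larger key avoid building a temporary std::string, and moving a std::function into a member instead of cloning its internal state.

    #include <functional>
    #include <string_view>
    #include <utility>

    // Accepting a view: a caller that only holds a piece of a parsed rendezvous
    // key can pass it here without allocating a std::string first.
    bool LookupDevice(std::string_view device_name) {
      return !device_name.empty();  // real code would consult a device table
    }

    class RendezvousItem {
     public:
      // Take the callback by value and move it into the member: the caller's
      // temporary is moved, so the std::function's state is not copied.
      void SetDoneCallback(std::function<void()> done) { done_ = std::move(done); }

     private:
      std::function<void()> done_;
    };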
* Update copyright for 3p/tf/core. (A. Unique TensorFlower, 2016-06-02)
  Change: 123900938
* Removing initialization_done in copy_tensor.cc. (A. Unique TensorFlower, 2016-06-02)
  Change: 123860431
* Use cc_binary rather than cc_library to reduce the size of the native library in the APK from 5.5 MB to 3.2 MB (compressed). (A. Unique TensorFlower, 2016-01-29)
  Change: 113369407
* Unbreak license test. (Geoffrey Irving, 2016-01-24)
  Change: 112903292
* Disentangle the GPU code from the CPU code. This means a few things: (Josh Levenberg, 2016-01-22)
  * The "core_cpu_internal" build target no longer includes files from the common_runtime/gpu/ directory.
  * tensorflow/core internal targets can instead get access to those headers via the "gpu_runtime" target.
  * The class "CopyTensor" is introduced. It lives in common_runtime/ but supports registration of copy functions, so the "gpu_runtime" target can add a GPU->GPU copy ability if it is linked in. This registration should make it easier to add more device types in the future (a hypothetical registration sketch follows this entry).
  * The "core_cpu" and "core_cpu_internal" build targets no longer reference GPUUtil::CopyViaDMA; rendezvous_mgr uses CopyTensor instead. Also, the "copy_tensor" build target was not needed.
  Change: 112821119