path: root/tensorflow/c/eager
* When running a native/builtin op via eager C API, automatically fill in default attr values (Mingsheng Hong, 2018-10-05)
  Attr values that are not overridden (e.g. transpose_a in the matmul op) are now filled in automatically. This is required for backward compatibility (a binary built via an older version of TF should still run on a newer version of TF, where some ops may have added attrs). For non-eager graph building, the default attr values of graph ops are added by tensorflow::AddDefaultsToNodeDef(). We ran into this issue when running the same S4TF test cases via eager APIs -- some tests failed due to "missing attrs", but are fixed by this patch.
  PiperOrigin-RevId: 215927271
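  A minimal sketch of what this enables through the public eager C API, assuming `ctx`, `a`, and `b` are an already-created TFE_Context* and two TFE_TensorHandle*; the MatMul attrs transpose_a/transpose_b are deliberately never set, so the runtime now supplies their defaults (false):

    #include "tensorflow/c/eager/c_api.h"

    // Sketch: run MatMul via the eager C API without setting transpose_a or
    // transpose_b; with this change their default values are filled in.
    TFE_TensorHandle* MatMulWithDefaultAttrs(TFE_Context* ctx,
                                             TFE_TensorHandle* a,
                                             TFE_TensorHandle* b,
                                             TF_Status* status) {
      TFE_Op* matmul = TFE_NewOp(ctx, "MatMul", status);
      if (TF_GetCode(status) != TF_OK) return nullptr;
      TFE_OpAddInput(matmul, a, status);
      TFE_OpAddInput(matmul, b, status);
      TFE_OpSetAttrType(matmul, "T", TF_FLOAT);  // dtype attr still set explicitly
      // No TFE_OpSetAttrBool(matmul, "transpose_a", 0) etc. -- defaults apply.
      TFE_TensorHandle* retval = nullptr;
      int num_retvals = 1;
      TFE_Execute(matmul, &retval, &num_retvals, status);
      TFE_DeleteOp(matmul);
      return retval;
    }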
* Don't generate and then delete the backward function when it's not necessary (Akshay Modi, 2018-10-01)
  PiperOrigin-RevId: 215288224
* Minor speed improvements to defun. (Akshay Modi, 2018-10-01)
  - EncodeArg in C instead of python.
  - Also caches parsed device specs, and device spec hashes.
  - Adds a common way to register python types in C.
  - Fastpath canonicalize function inputs when no kwargs are passed.
  - Set the func name attr directly instead of creating an op to wrap it.
  - Rewrite IsAttrsHelper without caching.
  Before: entry { name: "MicroBenchmarks.benchmark_defun_matmul_2_by_2_CPU" iters: 30000 wall_time: 101.803263028 extras { key: "examples_per_sec" value { double_value: 9822.86785562 } } }
  After: entry { name: "MicroBenchmarks.benchmark_defun_matmul_2_by_2_CPU" iters: 30000 wall_time: 47.2899993261 extras { key: "examples_per_sec" value { double_value: 21146.1199884 } } }
  PiperOrigin-RevId: 215272962
* Clean-up of function.py. (Lasse Espeholt, 2018-09-24)
  PiperOrigin-RevId: 214232622
* Allow the tape tensor to have unknown shapes. (Akshay Modi, 2018-09-19)
  This is done by making the TapeTensor a template rather than a concrete struct.
  PiperOrigin-RevId: 213700425
* Num elements fastpath for eager tensors. (Akshay Modi, 2018-09-17)
  PiperOrigin-RevId: 213377426
* Remove some dead code after migration from python to C. (Akshay Modi, 2018-09-17)
  PiperOrigin-RevId: 213372027
* Added TFE_OpSetAttrTensor() to eager C API. (Mingsheng Hong, 2018-09-14)
  Also added some experimental C APIs to facilitate the use of eager C APIs in the S4TF compiler.
  PiperOrigin-RevId: 213041780
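  The commit message names TFE_OpSetAttrTensor() but not its signature; a sketch assuming it takes (op, attr name, TF_Tensor*, TF_Status*), used here to build a Const op whose "value" attribute is a tensor:

    #include "tensorflow/c/c_api.h"
    #include "tensorflow/c/eager/c_api.h"

    // Sketch: create a Const op with a tensor-valued "value" attribute.
    // The TFE_OpSetAttrTensor signature below is an assumption.
    TFE_TensorHandle* MakeConst(TFE_Context* ctx, TF_Tensor* t, TF_Status* status) {
      TFE_Op* const_op = TFE_NewOp(ctx, "Const", status);
      if (TF_GetCode(status) != TF_OK) return nullptr;
      TFE_OpSetAttrType(const_op, "dtype", TF_TensorType(t));
      TFE_OpSetAttrTensor(const_op, "value", t, status);
      TFE_TensorHandle* retval = nullptr;
      int num_retvals = 1;
      TFE_Execute(const_op, &retval, &num_retvals, status);
      TFE_DeleteOp(const_op);
      return retval;
    }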
* Allow creating a py EagerTensor that shares the underlying TensorHandle. (Akshay Modi, 2018-09-05)
  This is so that gradients with respect to scalars pass (see the test added in backprop_test.py).
  A micro benchmark just calling constant_op.constant slows down a bit - this is inevitable as we are creating a new python object.
  Before: walltime: ~1.47
  After: walltime: ~2.1
  Linear regression benchmark is pretty much unchanged.
  PiperOrigin-RevId: 211753801
* Added a new eager C API TFE_NewContextFromSession(), where TFE_NewContext will get an owned device mgr from the input session. (Mingsheng Hong, 2018-09-04)
  One use case is in S4TF: we run a graph session to enqueue a tensor into a fifo queue, and then call TFE_Execute() on a dequeue op over the same queue, as a way to transfer a tensor from TF to host (tensor transfer in the other direction also works). To make this work, we need TFE_Context and the TF_Session to use the same ResourceMgr object (attached to a Device, which is in turn owned by DeviceMgr), so that both can access the fifo queue resource op.
  PiperOrigin-RevId: 211471075
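  A sketch of the intended usage; the commit only names TFE_NewContextFromSession(), so the parameter list used here (options, session, status) is an assumption:

    #include "tensorflow/c/c_api.h"
    #include "tensorflow/c/eager/c_api.h"

    // Sketch: create an eager context that shares the device/resource manager
    // of an existing TF_Session, so eager ops and the session's graph can see
    // the same fifo queue resource. Parameter list is an assumption.
    TFE_Context* ContextSharingSessionDevices(TF_Session* session, TF_Status* status) {
      TFE_ContextOptions* opts = TFE_NewContextOptions();
      TFE_Context* ctx = TFE_NewContextFromSession(opts, session, status);
      TFE_DeleteContextOptions(opts);
      // A dequeue op executed with TFE_Execute() in `ctx` can now see a queue
      // that the session's graph enqueued into.
      return ctx;
    }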
* Skip zeros call if unrequired in backprop for SparseSoftmaxCrossEntropyWithLogits (Akshay Modi, 2018-08-30)
  See https://github.com/tensorflow/tensorflow/blob/065f9b833ffbb3b2f03d63febb186275674ba133/tensorflow/python/ops/nn_grad.py#L482
  Should help with #20218
  PiperOrigin-RevId: 210933185
* Merge branch 'master' into py37 (Ben, 2018-08-26)
* [C API/Eager]: Fix bug in TFE_OpSetAttrString. (Asim Shankar, 2018-08-16)
  TFE_OpSetAttrString was holding on to the 'value' pointer after it returned. This bug was introduced in commit 2b0805301e4531dd7c2ed677d932f6408675460e which caused TFE_OpSetAttrString to invoke
    AttrBuilder& AttrBuilder::Set(StringPiece attr_name, StringPiece&& value);
  instead of:
    AttrBuilder& AttrBuilder::Set(StringPiece attr_name, T&& value)
  (where the latter copies 'value' when T is a StringPiece or const char* and the former aliases the memory pointed to by the StringPiece).
  In this process, I realized that AttrBuilder::Set(StringPiece attr_name, StringPiece&& value) was never being invoked (other than in this buggy situation), so I removed it altogether.
  Without the changes to attr_builder.{h,cc}, the newly added test fails - complaining that "NHWC" is not a valid value for the "padding" attribute.
  PiperOrigin-RevId: 209017110
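  A simplified, hypothetical reconstruction of the overload-resolution pitfall described above (std::string_view stands in for StringPiece, and the class is a stand-in rather than the real AttrBuilder): the non-template rvalue overload wins for a StringPiece argument and only stores the view, aliasing caller memory, while the template converts to an owning string.

    #include <string>
    #include <string_view>
    #include <utility>

    // Hypothetical sketch of the two Set() overloads from the commit message.
    class AttrBuilderSketch {
     public:
      // Generic overload: makes an owning copy, so the caller's buffer may be
      // freed afterwards. This is the behavior callers expect.
      template <class T>
      AttrBuilderSketch& Set(std::string_view attr_name, T&& value) {
        stored_ = std::string(std::forward<T>(value));
        return *this;
      }

      // Problematic overload: preferred over the template for a string_view
      // rvalue, and it only keeps the view -- i.e. it aliases memory the
      // builder does not own. This is what made TFE_OpSetAttrString hold on
      // to 'value' after returning; the commit removes this overload.
      AttrBuilderSketch& Set(std::string_view attr_name, std::string_view&& value) {
        view_ = value;  // dangles once the caller's buffer goes away
        return *this;
      }

     private:
      std::string stored_;
      std::string_view view_;
    };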
* rename enable (bstriner, 2018-08-14)
* py37 (bstriner, 2018-08-14)
* py37 (Ben, 2018-08-13)
* Support keep alive so we can reclaim memory in the remote case. (Akshay Modi, 2018-08-08)
  PiperOrigin-RevId: 207971672
* Allows differentiating tfe.defun functions with loops in eager mode. (Alexandre Passos, 2018-08-08)
  Adopts a minimal sensible policy for step containers: starting a gradient tape creates a step container; inner tapes do nothing; popping out of the outermost tape will reset that step container. This should allow us to have reasonable behavior in the presence of step-container-scoped things for a while. Ideally we'll move away from them in favor of lists but the infrastructure isn't ready yet.
  PiperOrigin-RevId: 207911091
* Allow setting server_def directly on TFE_Context. (Akshay Modi, 2018-08-03)
  Any time that the server def is updated, the context is effectively "reset" by clearing all the caches.
  - Check that the FLR returned is not a nullptr instead of seg faulting.
  - Consolidate caches within the context object.
  PiperOrigin-RevId: 207308086
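  The commit does not show the C entry point; a sketch assuming a TFE_ContextSetServerDef()-style call taking the context, a keep-alive interval, and the serialized ServerDef bytes (the keep-alive parameter only appears in later revisions, so treat the exact parameter list as an assumption):

    #include <stddef.h>
    #include "tensorflow/c/eager/c_api.h"

    // Sketch: point an existing eager context at a new cluster by handing it
    // a serialized ServerDef; every update effectively resets the context's
    // caches. The signature is an assumption.
    void UpdateCluster(TFE_Context* ctx, const void* server_def_proto,
                       size_t proto_len, TF_Status* status) {
      TFE_ContextSetServerDef(ctx, /*keep_alive_secs=*/600, server_def_proto,
                              proto_len, status);
    }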
* Check if the handle is nullptr, and fail early instead of segfaulting. (Akshay Modi, 2018-08-02)
  PiperOrigin-RevId: 207176253
* Make TFE_DeleteContext not take a status, and allow TFE_DeleteTensorHandle to take a nullptr. (Akshay Modi, 2018-07-27)
  None of the other TFE_Delete* functions take a status, so this makes things a little more consistent.
  PiperOrigin-RevId: 206374382
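  Teardown after this change looks like the rest of the TFE_Delete* family; a small sketch (the handles are assumed to have been created earlier):

    #include "tensorflow/c/eager/c_api.h"

    // Sketch: TFE_DeleteContext no longer takes a TF_Status, and
    // TFE_DeleteTensorHandle tolerates a nullptr argument.
    void Cleanup(TFE_Context* ctx, TFE_TensorHandle* maybe_handle) {
      TFE_DeleteTensorHandle(maybe_handle);  // fine even if maybe_handle is nullptr
      TFE_DeleteContext(ctx);                // no TF_Status* argument anymore
    }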
* Add a method to check if a tensor handle is on the host CPU. (Akshay Modi, 2018-07-16)
  PiperOrigin-RevId: 204825266
* Skip calling back into python if only 1 gradient to aggregate (Akshay Modi, 2018-07-09)
  PiperOrigin-RevId: 203786896
* Support shapes for remote eager tensor handles. (Akshay Modi, 2018-06-28)
  Since we respond with the shape, all RPCs will happen synchronously (note that we may still hide the python overhead, since the op is still scheduled for execution via the eager executor).
  PiperOrigin-RevId: 202207324
* Allow dynamic specification of clusters for eager remote execution. (Akshay Modi, 2018-06-21)
  PiperOrigin-RevId: 201586130
* [eager]: Support string attributes where the value contains `\0`. (Asim Shankar, 2018-06-20)
  Apparently, some custom operations stuff non-printable characters in string valued attributes. This change also makes the eager C API consistent with the C API for graph construction (TF_SetAttrString and TF_SetAttrStringList).
  PiperOrigin-RevId: 201372089
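  To carry embedded `\0` bytes the value has to travel as a pointer plus an explicit length, mirroring graph-mode TF_SetAttrString; a sketch assuming TFE_OpSetAttrString now takes a length argument (the attribute name "some_attr" is only illustrative):

    #include "tensorflow/c/eager/c_api.h"

    // Sketch: set a string attribute whose value contains an embedded '\0'.
    // Assumes the eager setter takes (op, attr_name, value, length).
    void SetBinaryStringAttr(TFE_Op* op) {
      const char value[] = {'a', '\0', 'b'};  // binary data, not NUL-terminated text
      TFE_OpSetAttrString(op, "some_attr", value, sizeof(value));
    }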
* Allow setting server def on the eager context, and add the eager service to the grpc_tensorflow_server. (Akshay Modi, 2018-06-19)
  PiperOrigin-RevId: 201198350
* Allow silent copies during remote execution. (Akshay Modi, 2018-06-11)
  This is required to do anything useful from python.
  PiperOrigin-RevId: 200129777
* Add EagerTensor profiler and device shape utilities (Igor Ganichev, 2018-05-25)
  This change includes the following steps to make the EagerTensor profiler work:
  - Add a PaddedShapeFn to XlaDevice::Metadata. We need a backend-independent way to get a fully-padded shape and its layout on the device. This function is set during device construction. CPU and GPU devices effectively get an identity function since they neither change the layout nor pad. TPU gets the appropriate function.
  - Add TFE_TensorDebugInfo struct and C API methods for it. These methods are necessary to fetch the shape and layout from under the C API to the Python level. This can be a home for more debug information later. (A sketch follows this entry.)
  - Make EagerTensor weakly referenceable. This involves adding a pointer to the list of current weak references. This addition should have negligible overhead when the profiler is not used. The only operations on this field are setting it to null on construction and checking if it is null on destruction.
  - Add C++ functions callable from Python to register an instance of EagerTensorProfiler and retrieve debug information for a given EagerTensor. These functions are used in the new "inspect" module.
  - Finally, write the actual profiler.
  PiperOrigin-RevId: 198098380
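  A sketch of how the on-device (padded) shape might be queried through the new debug-info API; the accessor names used here (TFE_TensorHandleTensorDebugInfo, TFE_TensorDebugInfoOnDeviceNumDims, TFE_TensorDebugInfoOnDeviceDim, TFE_DeleteTensorDebugInfo) are assumptions about this API family rather than names confirmed by the commit message:

    #include <stdio.h>
    #include "tensorflow/c/eager/c_api.h"

    // Sketch: print the fully padded on-device shape of an eager tensor.
    // Accessor names are assumed, not confirmed by the commit message.
    void PrintOnDeviceShape(TFE_TensorHandle* h, TF_Status* status) {
      TFE_TensorDebugInfo* info = TFE_TensorHandleTensorDebugInfo(h, status);
      if (TF_GetCode(status) != TF_OK) return;
      int ndims = TFE_TensorDebugInfoOnDeviceNumDims(info);
      for (int i = 0; i < ndims; ++i) {
        printf("dim %d: %lld\n", i,
               (long long)TFE_TensorDebugInfoOnDeviceDim(info, i));
      }
      TFE_DeleteTensorDebugInfo(info);
    }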
* Remove _get_backward_fn and depend on _gradient_function directly. (Akshay Modi, 2018-05-24)
  (_magic_gradient_function was renamed to _gradient_function)
  Before: entry { name: "MicroBenchmarks.benchmark_tf_gradient_forward_identity" iters: 30000 wall_time: 5.88456789653 extras { key: "examples_per_sec" value { double_value: 169936.011885 } } }
  After: entry { name: "MicroBenchmarks.benchmark_tf_gradient_forward_identity" iters: 30000 wall_time: 5.04853725433 extras { key: "examples_per_sec" value { double_value: 198077.175551 } } }
  PiperOrigin-RevId: 197972668
* Fixes issue with gradient tape when asking for the gradient of an intermediate tensor. (Alexandre Passos, 2018-05-21)
  PiperOrigin-RevId: 197481473
* Move runtime.{h,cc,_test.cc} into core/common_runtime/eager as attr_builder (Akshay Modi, 2018-05-17)
  I'm not familiar with how the CMake build is set up, but from the description of the problem the dependency graph is coarser than Bazel's, so I think this should fix #18925.
  PiperOrigin-RevId: 197061764
* Allow for remote eager execution. (Akshay Modi, 2018-05-16)
  PiperOrigin-RevId: 196910675
* Do not differentiate integers in the eager backprop API. (Alexandre Passos, 2018-05-10)
  (with bugfix)
  PiperOrigin-RevId: 196184587
* Automated g4 rollback of changelist 195878952 (Asim Shankar, 2018-05-10)
  PiperOrigin-RevId: 196127751
* Do not differentiate integers in the eager API. (Alexandre Passos, 2018-05-08)
  This is similar to the change made in https://github.com/tensorflow/tensorflow/commit/f63750645826df65b05cad505546a86f0e347674 for backpropagation during graph construction via tf.gradients().
  PiperOrigin-RevId: 195878952
* Fixes to tape gradient for providing outputs and having multiple targets. (Alexandre Passos, 2018-04-30)
  PiperOrigin-RevId: 194796304
* Make TF functions work with _USE_C_SHAPES=True. (Skye Wanderman-Milne, 2018-04-24)
  It turns out regular functions need to manually copy handle data in addition to eager GraphModeFunctions, so I moved the C extensions to python_api.h from eager/c_api.h. This also cleans up function_test.py to assume the C API is enabled.
  PiperOrigin-RevId: 194158700
* Merge changes from github. (Yifei Feng, 2018-04-23)
  PiperOrigin-RevId: 194031845
* Move the guts of TFE_Execute into EagerExecute (Akshay Modi, 2018-04-20)
  PiperOrigin-RevId: 193728072
* Move the guts of TFE_Op into EagerOperation (Akshay Modi, 2018-04-20)
  PiperOrigin-RevId: 193698320
* Merged commit includes the following changes: (A. Unique TensorFlower, 2018-04-18)
  - 193422827 by yifeif: Fix buildifier error.
  - 193421691 by skyewm: Make GraphModeFunctions work with _USE_C_SHAPES=True. Tensor._handle_data is going away. This change adds special hooks for propagating the resource handle shape information through EagerTensors.
  - 193421473 by A. Unique TensorFlower: Register dynamic_stitch for DT_VARIANT type.
  - 193421175 by nolivia: disabling flaky tsan test
  - 193420117 by nolivia: disabling flaky test in tensorflow that has no apparent culprit
  PiperOrigin-RevId: 193422827
* Avoid ToString() in Eager's TFE_Execute. (Akshay Modi, 2018-04-17)
  Also use InlinedVector instead of std::vector for the non-async path.
  Before:
  Benchmark               Time(ns)  CPU(ns)  Iterations
  BM_Execute/0                1895     1898      360200  Execute
  BM_Execute/1                1193     1942      358322  ExecuteAsync
  BM_ExecuteFunction/0        5812     5825      100000  ExecuteFunction
  BM_ExecuteFunction/1        5015     5374      100000  ExecuteFunctionAsync
  After:
  Benchmark               Time(ns)  CPU(ns)  Iterations
  BM_Execute/0                1604     1607      428262  Execute
  BM_Execute/1                1150     1765      404821  ExecuteAsync
  BM_ExecuteFunction/0        5615     5626      100000  ExecuteFunction
  BM_ExecuteFunction/1        5111     5476      100000  ExecuteFunctionAsync
  PiperOrigin-RevId: 193218331
* eager: Tweak error message. (Asim Shankar, 2018-04-02)
  Motivated by https://stackoverflow.com/questions/49616532/a-tensorflow-eager-gpu-error/49617069
  PiperOrigin-RevId: 191334050
* Turns eager device placement on by default. (Alexandre Passos, 2018-03-29)
  Change the device policy to have silent copies, which are logged when RunMetadata tracking is enabled. In the process, changed TensorHandle to always keep its context around if it gets one. Changed TFE_TensorHandleResolve to, if necessary, copy to the CPU (since the user has no control as to whether this copy is needed by default).
  PiperOrigin-RevId: 190978086
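  The resolve behavior is visible from the C API; a brief sketch (error handling minimal) showing that TFE_TensorHandleResolve may now copy a non-CPU tensor to host memory before materializing it:

    #include "tensorflow/c/eager/c_api.h"

    // Sketch: materialize an eager tensor on the host. After this change the
    // resolve call copies device tensors to the CPU when necessary, so the
    // caller no longer has to place that copy explicitly.
    TF_Tensor* MaterializeOnHost(TFE_TensorHandle* h, TF_Status* status) {
      TF_Tensor* t = TFE_TensorHandleResolve(h, status);  // may copy GPU -> CPU
      return TF_GetCode(status) == TF_OK ? t : nullptr;
    }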
* Support structured source in GradientTape.gradient (Igor Ganichev, 2018-03-28)
  Before this change, it was easy to forget [] around the source tensor. This mistake led to GradientTape.gradient() returning a list of Nones. Nones normally tell the user that the source and the target are not connected via differentiable operations, which is not the source of the error in this case.
  Instead of adding a check that `sources` is a list of tensors, this CL adds the ability to handle a structured source (which includes a lone tensor), similarly to many existing TensorFlow APIs.
  Also, with Alex's help, it fixes a bug where repeated tensors in `sources` were not handled correctly.
  PiperOrigin-RevId: 190878583
* Move ExecuteNode and CopyToDevice_Internal (Alexandre Passos, 2018-03-28)
  PiperOrigin-RevId: 190775681
* Moves Execute() from c_api.cc (Alexandre Passos, 2018-03-27)
  PiperOrigin-RevId: 190681610
* Fix loop variable type and status propagation (A. Unique TensorFlower, 2018-03-25)
  PiperOrigin-RevId: 190308776
* Moves TensorHandleCopyToDevice to TensorHandle::CopyToDevice. (Alexandre Passos, 2018-03-25)
  PiperOrigin-RevId: 190291768