path: root/tensorflow/python/eager/tape.py
* Make defun work under distributed strategies. (Igor Ganichev, 2018-10-09)
  The core of the change is to have the gradient tape capture distributed
  variables instead of plain ResourceVariables. In other words, we move the
  distribution awareness from defun down to the tape and rely on distributed
  variable magic to provide us with the right variable at runtime. In tower
  context, we always watch the container (e.g. MirroredVariable). In cross
  tower context, we always watch all the components.

  PiperOrigin-RevId: 216430530
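
  A hedged sketch of the behavior described here, written with today's
  tf.distribute names rather than the 2018-era ones (strategy.run;
  tf.function is the modern spelling of defun); gradients are taken with
  respect to the MirroredVariable container, as the tape now records it:

    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()
    with strategy.scope():
      v = tf.Variable(2.0)  # a MirroredVariable under the strategy

    def step():
      with tf.GradientTape() as tape:
        # In replica ("tower") context the tape watches the container
        # (the MirroredVariable), not a single per-device component.
        loss = v * v
      return tape.gradient(loss, v)

    grads = strategy.run(step)  # per-replica gradients w.r.t. v
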
* Support not automatically watching (trainable) accessed variables in GradientTape. (Tom Hennigan, 2018-09-07)
  For more complex use cases this allows fine grained control over what is
  tracked by the tape.

  PiperOrigin-RevId: 211948236
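
  A minimal usage sketch of the option this change introduces, assuming the
  tf.GradientTape signature as it stands in TensorFlow 2.x:

    import tensorflow as tf

    v = tf.Variable(3.0)  # trainable, so it would normally be auto-watched

    with tf.GradientTape(watch_accessed_variables=False) as tape:
      tape.watch(v)  # with auto-watching off, only explicit watches record
      y = v * v

    print(tape.gradient(y, v))  # tf.Tensor(6.0, shape=(), dtype=float32)
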
* Only watch tensors on the current tape rather than all of them. (Tom Hennigan, 2018-09-01)
  This allows fine grained control over recording in some cases, for example
  the following where we want d2y but not d2z:

    x1 = tf.Variable(2.0, trainable=False)
    x2 = tf.Variable(2.0, trainable=False)
    with tf.GradientTape() as tape1:
      with tf.GradientTape() as tape2:
        tape1.watch(x1)
        tape2.watch([x1, x2])
        y = x1 ** 3
        z = x2 ** 2
      dy, dz = tape2.gradient([y, z], [x1, x2])
    d2y, d2z = tape1.gradient([dy, dz], [x1, x2])

    assert d2z is None

  PiperOrigin-RevId: 211206506
* Methods to stop and reset tf.GradientTape(). (Alexandre Passos, 2018-05-17)
  PiperOrigin-RevId: 196995160
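
  A short sketch of the two methods in today's tf.GradientTape API:
  stop_recording is a context manager that pauses the tape, and reset
  discards everything recorded so far:

    import tensorflow as tf

    x = tf.Variable(2.0)
    with tf.GradientTape() as tape:
      y = x * x
      with tape.stop_recording():
        aux = 10.0 * y     # not traced; gradients through aux are None
      tape.reset()         # drop y (and aux) from the tape entirely
      z = x * x * x        # only this computation is recorded now

    print(tape.gradient(z, x))  # 3 * x**2 = tf.Tensor(12.0, ...)
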
* Fix the threading model of gradient tapes. (Alexandre Passos, 2018-01-08)
  The set of tapes needs to be global to enable multithreaded programming
  (when it's natural for tensors to cross threads during reduction
  operations), but each thread still needs to be able to locally pause
  recording while it does gradient-related bookkeeping (like custom
  gradients or initialization).

  Also removes a mutex from the thread-local structure, since it's
  unnecessary: we're always holding the GIL while calling across the
  Python-C boundary unless we explicitly release it.

  PiperOrigin-RevId: 181246570
* Add persistent GradientTape support. (Igor Ganichev, 2017-11-22)
  Added two simple tests for persistent tapes and manually verified that
  calling "del" on a gradient tape releases all tensors.

  Also:
  - Add missing Py_DECREF to error case in MakeTensorIDList
  - Make a couple of error messages more descriptive

  PiperOrigin-RevId: 176718477
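
  A usage sketch with the current API; without persistent=True the second
  gradient() call below would raise, and del releases the tensors the tape
  holds, which is what the manual test above checks:

    import tensorflow as tf

    x = tf.constant(3.0)
    with tf.GradientTape(persistent=True) as tape:
      tape.watch(x)  # constants are not watched automatically
      y = x * x
      z = y * y

    print(tape.gradient(y, x))  # 2x   = 6.0
    print(tape.gradient(z, x))  # 4x^3 = 108.0, allowed only when persistent
    del tape                    # free the recorded tensors
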
* Tape stack in C++ instead of Python. (Alexandre Passos, 2017-11-14)
  PiperOrigin-RevId: 175704617
* Moves tape.watch_variable to C. Prequel to moving the tape stack to C. (Alexandre Passos, 2017-11-13)
  PiperOrigin-RevId: 175531148
* Improvement to benchmark. (Alexandre Passos, 2017-11-10)
  PiperOrigin-RevId: 175346269
* Moves imperative_grad to C. (Alexandre Passos, 2017-11-10)
  Neutral-to-positive on all benchmarks. Also reduces the overhead of
  should_record.

  PiperOrigin-RevId: 175057104
* Ports the eager gradient tape to C. (Alexandre Passos, 2017-10-13)
  The tape stack is still in Python, as is the backprop code.

  PiperOrigin-RevId: 172151189
* eager: Fix issue with custom_gradients and implicit_gradients. (Asim Shankar, 2017-10-13)
  While at it, clean up some dead code/comments in tape.py.

  PiperOrigin-RevId: 172143125
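
  For context, a sketch of the custom-gradient mechanism this fix touches,
  using the tf.custom_gradient decorator (essentially the stock log1pexp
  example from the TensorFlow documentation):

    import tensorflow as tf

    @tf.custom_gradient
    def log1pexp(x):
      e = tf.exp(x)
      def grad(upstream):
        # Numerically stable hand-written gradient of log(1 + e^x).
        return upstream * (1 - 1 / (1 + e))
      return tf.math.log(1 + e), grad

    x = tf.constant(100.0)
    with tf.GradientTape() as tape:
      tape.watch(x)
      y = log1pexp(x)
    print(tape.gradient(y, x))  # 1.0; the naive gradient would be NaN here
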
* Removing side outputs from tape code. (Alexandre Passos, 2017-10-09)
  They belong better in future function objects (simplifies the tape's move
  to C).

  PiperOrigin-RevId: 171603665
* Move EagerTensor from Python to C. (A. Unique TensorFlower, 2017-09-30)
  PiperOrigin-RevId: 170617321
* TF Eager: Avoid creating some unnecessary zeros during backprop. (A. Unique TensorFlower, 2017-09-18)
  PiperOrigin-RevId: 169195496
* Certain ops don't need eager gradients to keep their inputs/outputs alive. (Alexandre Passos, 2017-09-15)
  PiperOrigin-RevId: 168864350
* Eager gradient tape doesn't keep tensors alive. (Alexandre Passos, 2017-09-14)
  PiperOrigin-RevId: 168782341
* Resurrects autograd-free eager gradients. (Alexandre Passos, 2017-09-12)
  PiperOrigin-RevId: 168448557
* TFE: Improves the interfaces of tape.watch_variable() and implicit_grad(). (Ali Yahya, 2017-09-11)
  tape.watch_variable() replaces tape.watch() and is now called on
  ResourceVariable objects instead of their underlying handles.

  implicit_grad() now returns a list of (gradient, variable) pairs to be
  consistent with tf.Optimizer's interface.

  PiperOrigin-RevId: 168232055
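
  A sketch of why the pairing matters: apply_gradients consumes exactly a
  list of (gradient, variable) pairs. implicit_grad itself lived in the old
  contrib/eager namespace and is not assumed here; the tape-based equivalent
  producing the same structure is shown instead:

    import tensorflow as tf

    v = tf.Variable(1.0)
    opt = tf.keras.optimizers.SGD(learning_rate=0.1)

    with tf.GradientTape() as tape:
      loss = (v - 3.0) ** 2

    # The same (gradient, variable) structure implicit_grad() returns.
    grads_and_vars = [(tape.gradient(loss, v), v)]
    opt.apply_gradients(grads_and_vars)
    print(v.numpy())  # 1.4 after one SGD step: 1.0 - 0.1 * (-4.0)
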
* Adds tape.watch_variable(v) where v is any ResourceVariable. (Ali Yahya, 2017-08-30)
  PiperOrigin-RevId: 167074863
* Fixes eager higher-order gradients for ops whose gradient function uses their outputs. (Alexandre Passos, 2017-08-30)
  PiperOrigin-RevId: 167042517
* Fix bug with second derivatives in eager mode. (Alexandre Passos, 2017-08-29)
  Improper wrapping and unwrapping of tensors led to tracing being dropped.

  PiperOrigin-RevId: 166910119
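
  The class of computation this fix covers, sketched with the nested-tape
  idiom used for second derivatives in current TensorFlow:

    import tensorflow as tf

    x = tf.Variable(3.0)
    with tf.GradientTape() as outer:
      with tf.GradientTape() as inner:
        y = x * x * x
      dy = inner.gradient(y, x)   # 3x^2 = 27.0
    d2y = outer.gradient(dy, x)   # 6x   = 18.0
    print(dy.numpy(), d2y.numpy())
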
* ResourceVariables are compatible with implicit_grad. (Alexandre Passos, 2017-08-18)
  PiperOrigin-RevId: 165772481
* Makes tape.watch() work with ResourceVariables. (Ali Yahya, 2017-08-18)
  To this end, also adds a property, `device`, to TensorNode.

  PiperOrigin-RevId: 165726368
* Moves tensor_id() from tape.py to framework/ops.py; breaks dependency cycle in subsequent CLs. (Ali Yahya, 2017-08-17)
  PiperOrigin-RevId: 165632053
* Make HloAliasAnalysis updatable after changes to the HLO graph. (Mark Heffernan, 2017-08-10)
  As part of this change, make HloAliasAnalysis a thinner layer which
  basically only holds a map from HloValue to HloBuffer and vice versa.

  PiperOrigin-RevId: 164923041
* Experimental C and Python APIs to invoke TensorFlow kernels on concrete values. (Alexandre Passos, 2017-08-10)
  PiperOrigin-RevId: 164902588