Commit message | Author | Age

PiperOrigin-RevId: 216475683

PiperOrigin-RevId: 216471178

Support peephole and num_proj as well.
PiperOrigin-RevId: 216467578

No support in any of the backends, and not yet exposed through XlaBuilder.
PiperOrigin-RevId: 216465753

PiperOrigin-RevId: 216463491

PiperOrigin-RevId: 216463443

values.
PiperOrigin-RevId: 216461637

The CFG treats lambdas as ordinary expressions. The activity analysis ensures that variables masked by the lambda's arguments are not tracked.
Note: lambdas do not allow direct modification (we exclude indirect mutation via functions or methods).
PiperOrigin-RevId: 216456682
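The masking rule above can be illustrated with a minimal, hypothetical sketch built on Python's `ast` module (this is not the actual activity-analysis code; `lambda_free_vars` is an invented helper, and it only handles plain positional parameters):

```python
import ast

def lambda_free_vars(src):
    """Return names a lambda reads that are NOT masked by its own arguments.

    Sketch of the masking rule: names bound by the lambda's parameters
    must be excluded from activity tracking; only free variables remain.
    """
    lam = ast.parse(src, mode="eval").body
    assert isinstance(lam, ast.Lambda)
    params = {a.arg for a in lam.args.args}          # names the lambda binds
    loads = {n.id for n in ast.walk(lam.body)
             if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)}
    return loads - params                            # free variables only

print(sorted(lambda_free_vars("lambda x: x * scale + x")))  # ['scale']
```

Here `x` is masked by the lambda's own argument, so only `scale` would be tracked.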

PiperOrigin-RevId: 216455772

PiperOrigin-RevId: 216455250

PiperOrigin-RevId: 216453979

PiperOrigin-RevId: 216452496

PiperOrigin-RevId: 216452447

No functional change.
PiperOrigin-RevId: 216451881

PiperOrigin-RevId: 216451263

So that when resolving some global data, we don't have to worry whether
"Resolve" is going to mutate the real data.
PiperOrigin-RevId: 216448145

We have a 1-element thunk sequence if we're not copying. That's still two
thunks, and HLO profiling gets confused if it sees two thunks for the same
instruction and one of them claims to be the whole instruction.
PiperOrigin-RevId: 216448063

from proto and verifying it with HloVerifier.
PiperOrigin-RevId: 216447947

PiperOrigin-RevId: 216447412

PiperOrigin-RevId: 216446750

PiperOrigin-RevId: 216445964

PiperOrigin-RevId: 216443201

445998d7ac4e5d3c50411d377e3b50e960d2d6c2
PiperOrigin-RevId: 216442983

PiperOrigin-RevId: 216442906

PiperOrigin-RevId: 216442569

This avoids a copy.
PiperOrigin-RevId: 216437329

PiperOrigin-RevId: 216432358

The core of the change is to have the gradient tape capture
distributed variables instead of plain ResourceVariables.
In other words, we move the distribution awareness from defun
down to the tape and rely on distributed-variable magic to provide us
with the right variable at runtime.
In tower context, we always watch the container (e.g. MirroredVariable).
In cross-tower context, we always watch all the components.
PiperOrigin-RevId: 216430530

attributes, set the attributes of all the contained variables. This fixes a bug where tf.train.init_from_checkpoint doesn't overwrite the initialization values correctly for TPUMirroredVariable.
PiperOrigin-RevId: 216429476

PiperOrigin-RevId: 216425002

PiperOrigin-RevId: 216424512

PiperOrigin-RevId: 216422334

Otherwise we'd emit a CAS loop.
PiperOrigin-RevId: 216421161
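For readers unfamiliar with the trade-off above: a CAS loop emulates an atomic update by retrying compare-and-swap until no other thread has interfered, whereas a hardware atomic add does the same work in a single instruction. A toy sketch in Python (the `Cell` class and its lock are stand-ins for hardware atomicity, not anything from the codebase):

```python
import threading

class Cell:
    """Toy memory cell exposing only compare-and-swap (CAS)."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()  # stands in for hardware atomicity

    def load(self):
        return self._value

    def cas(self, expected, new):
        """Atomically set `new` iff the current value is `expected`."""
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def cas_add(cell, delta):
    """The CAS loop: retry until no other thread changed the value under us.
    An atomic-add instruction avoids this loop entirely, which is why
    emitting it directly is preferable when the update is a plain add."""
    while True:
        old = cell.load()
        if cell.cas(old, old + delta):
            return

cell = Cell()
threads = [threading.Thread(target=lambda: [cas_add(cell, 1) for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(cell.load())  # 4000
```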

PiperOrigin-RevId: 216419983

The existing code triggers parts of the TensorFlow runtime that may not have been fully
initialized at the time the parameters are evaluated. Lifting the parameters into a lambda
and invoking the lambda inside the test method achieves the proper order.
PiperOrigin-RevId: 216419757
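A minimal sketch of the lambda-lifting pattern described above (the names `runtime_value`, `INITIALIZED`, and `PARAMS` are invented for illustration; the real change involves TensorFlow's parameterized tests):

```python
INITIALIZED = False

def runtime_value():
    # Stands in for a parameter that needs a fully initialized runtime.
    if not INITIALIZED:
        raise RuntimeError("runtime not ready")
    return 42

# Evaluating runtime_value() eagerly at module-import time would raise;
# lifted into a lambda, evaluation is deferred until the test body runs.
PARAMS = [lambda: runtime_value()]

def test_param():
    value = PARAMS[0]()   # evaluated here, after initialization
    assert value == 42

INITIALIZED = True        # the "runtime" comes up before tests execute
test_param()
print("ok")               # ok
```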

PiperOrigin-RevId: 216419037

construct the state. This is part of a larger refactoring which removes the reliance on the deprecated Scope.created field.
PiperOrigin-RevId: 216418556

PiperOrigin-RevId: 216418324

PiperOrigin-RevId: 216416117

This simple implementation has a kernel that runs on every element of the updates
tensor, figures out the right index to perform the update at, and applies it with
an atomic operation.
Currently we emit a CAS for plain (i.e. non-add) updates, which is inefficient.
Also, TuplePointsToAnalysis doesn't know that it should alias the operand and
output buffers of a scatter, which would avoid a copy.
PiperOrigin-RevId: 216412467
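The per-update-element scheme described above can be sketched in plain Python (a sequential stand-in for the GPU kernel; `scatter_add` is an invented helper, and only 1-D operands with scalar indices are handled):

```python
def scatter_add(operand, indices, updates):
    """Sketch: one 'kernel invocation' per element of `updates`, each
    computing its target index and applying an add. Sequential here;
    the GPU version applies each add with an atomic operation instead."""
    out = list(operand)   # the copy that operand/output aliasing could avoid
    for i, upd in zip(indices, updates):
        out[i] += upd     # done atomically in the GPU kernel
    return out

# Two updates hit index 1, one hits index 3.
print(scatter_add([0.0] * 4, [1, 1, 3], [2.0, 3.0, 5.0]))
# [0.0, 5.0, 0.0, 5.0]
```

Duplicate indices are why atomicity matters: on a GPU the two updates to index 1 may run concurrently.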

PiperOrigin-RevId: 216412380

PiperOrigin-RevId: 216410913

PiperOrigin-RevId: 216400726

function calls.
E.g. register_kl calls would trigger such warnings. This spam was exacerbated
by the fact that it happens before logging is initialized, so it is dumped
prominently to STDERR. Worse yet, it also happened regardless of whether the
user imported any symbols from tf.distributions, as the relevant code is
executed when you import TensorFlow.
PiperOrigin-RevId: 216396036

PiperOrigin-RevId: 216395709

PiperOrigin-RevId: 216392908

PiperOrigin-RevId: 216392772

Specifically:
- renames from def_function
- returns an object with well-defined methods
- doesn't force-retrace twice
- uses the Python descriptor API (https://docs.python.org/3/howto/descriptor.html)
  to remove the need for a tf.method
PiperOrigin-RevId: 216388957
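The descriptor trick in the last bullet can be sketched as follows. This is a hypothetical, simplified illustration (the `traced_function` class and its per-instance cache are invented, not the actual `def_function` code): a non-data descriptor's `__get__` hands back a wrapper bound to the instance, so decorated methods work without any separate `tf.method` helper.

```python
import functools

class traced_function:
    """Decorator usable on functions and methods alike.

    __get__ (the descriptor protocol) returns a per-instance bound wrapper,
    cached so repeated attribute access doesn't re-wrap every time. In the
    real system the wrapper would also own the per-instance trace cache.
    """
    def __init__(self, fn):
        self._fn = fn
        self._bound = {}   # id(instance) -> bound wrapper

    def __call__(self, *args, **kwargs):
        # Plain (unbound) functions still work when decorated.
        return self._fn(*args, **kwargs)

    def __get__(self, obj, objtype=None):
        if obj is None:                       # accessed on the class
            return self
        key = id(obj)
        if key not in self._bound:
            self._bound[key] = functools.partial(self._fn, obj)
        return self._bound[key]

class Model:
    @traced_function
    def double(self, x):
        return 2 * x

m = Model()
print(m.double(3))                     # 6
print(m.double is m.double)            # True -- same cached wrapper
```

Caching on `id(obj)` keeps the sketch short but would pin instances in a real implementation; a `WeakKeyDictionary` would be the more careful choice.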

PiperOrigin-RevId: 216386450

PiperOrigin-RevId: 216385202