Change a couple of fscanf-style format strings to use the format macro
constants defined in <cinttypes>. This quashes -Wformat warnings.
PiperOrigin-RevId: 216545604
PiperOrigin-RevId: 216536298
// Replace operations of the form:
// x = stack((a_0, a_1, ..., a_{n-1}), axis=k)[:,...,i,...]
// with
// a_i
// when the strided slice index `i` is applied in the k'th axis.
//
// Similarly, replace operations of the form:
// x = stack((a_0, a_1, ..., a_{n-1}), axis=k)[:,...,i:i+1,...]
// with
// expand_dims(a_i, axis=k)
//
PiperOrigin-RevId: 216535346
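As a minimal illustration of the equivalence from the Python API side (the tensors a0, a1, a2 and the axis are only examples, not part of the change):

import tensorflow as tf

a0 = tf.constant([1.0, 2.0])
a1 = tf.constant([3.0, 4.0])
a2 = tf.constant([5.0, 6.0])

stacked = tf.stack([a0, a1, a2], axis=0)
# Taking a single index along the stacked axis yields the corresponding input:
x = stacked[1]      # same values as a1
# Keeping the sliced dimension (i:i+1) matches expand_dims on that input:
y = stacked[1:2]    # same values as tf.expand_dims(a1, axis=0)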
This is to match the existing behavior of tf.cond.
PiperOrigin-RevId: 216534084
PiperOrigin-RevId: 216533613
This change complements the existing `InstantiateOptions::executor_type`
option, which takes precedence over the attr if both are provided. It
enables the choice of executor to be separated from both the calling
op implementation and the function definition, which simplifies the
use of custom executors in operations that take a function as an attr
(e.g., `tf.data` and the functional control-flow ops).
PiperOrigin-RevId: 216532778
the calling graph.
This change makes a subtle difference to the behavior of existing
programs that create multiple iterators. Previously, one-shot
iterators would not inherit the graph seed, and so their values would
be non-deterministic (unless explicit seeds were set). After this
change, an iterator will inherit its seed from the outer
graph. Multiple one-shot iterators created from the same dataset will
inherit different seeds, matching the semantics of creating multiple
ops with the same graph seed.
PiperOrigin-RevId: 216532256
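A small example of the resulting behavior, using the TF 1.x graph-mode API (the seed value and dataset are illustrative):

import tensorflow as tf

tf.set_random_seed(42)  # graph-level seed

dataset = tf.data.Dataset.range(10).shuffle(buffer_size=10)

# Each one-shot iterator now inherits a seed derived from the graph seed, so
# both shuffles become deterministic, but they differ from each other,
# matching the semantics of creating two stateful ops under the same graph seed.
iterator_1 = dataset.make_one_shot_iterator()
iterator_2 = dataset.make_one_shot_iterator()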
Sometimes the actual number of outputs is dictated by one of the attributes of the NodeDef.
PiperOrigin-RevId: 216530696
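Split is one op where this happens (cited here only as an illustration): the number of output tensors of the generated NodeDef comes from its num_split attr rather than from the op registration alone.

import tensorflow as tf

x = tf.reshape(tf.range(12), [3, 4])
# The Split node's `num_split` attr (4 here) dictates how many outputs the
# NodeDef has; the op definition by itself does not fix that count.
parts = tf.split(x, num_or_size_splits=4, axis=1)
print(len(parts))  # 4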
reliance on importing tensorflow in the generated code.
PiperOrigin-RevId: 216528047
PiperOrigin-RevId: 216525613
RemoveInstructionAndUnusedOperands
If the caller explicitly asks to remove a side-effecting instruction
(e.g. an all-reduce), then we should respect it instead of silently ignoring
the request.
PiperOrigin-RevId: 216505133
PiperOrigin-RevId: 216500702
absl::flat_hash_set has better performance than std::unordered_set, which can improve overall compile time.
PiperOrigin-RevId: 216498767
Previously we emitted xla::Add, which isn't supported by some XLA backends
for PRED types.
PiperOrigin-RevId: 216497939
PiperOrigin-RevId: 216495091
PiperOrigin-RevId: 216483746
PiperOrigin-RevId: 216483744
PiperOrigin-RevId: 216479972
PiperOrigin-RevId: 216475683
PiperOrigin-RevId: 216471178
Support peephole and num_proj as well.
PiperOrigin-RevId: 216467578
No support in any of the backends, and not yet exposed through XlaBuilder.
PiperOrigin-RevId: 216465753
PiperOrigin-RevId: 216463491
PiperOrigin-RevId: 216463443
values.
PiperOrigin-RevId: 216461637
The CFG treats lambdas as ordinary expressions. The activity analysis ensures that variables masked by the lambda's arguments are not being tracked.
Note: lambdas do not allow direct modification (we exclude indirect mutation via functions or methods).
PiperOrigin-RevId: 216456682
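A tiny illustration of the masking case in plain Python (the function itself is hypothetical):

def scale(x):
  # Inside the lambda, `x` is the lambda's own parameter and masks the outer
  # `x`, so activity analysis must not attribute those reads to the enclosing
  # variable.
  double = lambda x: x * 2
  return double(x) + 1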
PiperOrigin-RevId: 216455772
PiperOrigin-RevId: 216455250
PiperOrigin-RevId: 216453979
PiperOrigin-RevId: 216452496
PiperOrigin-RevId: 216452447
No functional change.
PiperOrigin-RevId: 216451881
PiperOrigin-RevId: 216451263
So that when resolving some global data, we don't have to worry whether
"Resolve" is going to mutate the real data.
PiperOrigin-RevId: 216448145
We have a 1-element thunk sequence if we're not copying. That's still two
thunks, and HLO profiling gets confused if it sees two thunks for the same
instruction and one of them claims to be the whole instruction.
PiperOrigin-RevId: 216448063
from proto and verifying it with HloVerifier.
PiperOrigin-RevId: 216447947
PiperOrigin-RevId: 216447412
PiperOrigin-RevId: 216446750
PiperOrigin-RevId: 216445964
PiperOrigin-RevId: 216443201
445998d7ac4e5d3c50411d377e3b50e960d2d6c2
PiperOrigin-RevId: 216442983
PiperOrigin-RevId: 216442906
PiperOrigin-RevId: 216442569
This avoids a copy.
PiperOrigin-RevId: 216437329
PiperOrigin-RevId: 216432358
The core of the change is to have the gradient tape capture
distributed variables instead of plain ResourceVariables.
In other words, we move the distribution awareness from defun
down to tape and rely on distributed variable magic to provide us
with the right variable at runtime.
In tower context, we always watch the container (e.g. MirroredVariable).
In cross tower context, we always watch all the components.
PiperOrigin-RevId: 216430530
attributes, set the attributes of all the contained variables. This fixes a bug where tf.train.init_from_checkpoint did not correctly overwrite the initialization values for TPUMirroredVariable.
PiperOrigin-RevId: 216429476
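A minimal sketch of the call that was affected (the checkpoint path and scope names are hypothetical):

import tensorflow as tf

# init_from_checkpoint rewires the initializers of matching variables to load
# from the checkpoint. For a TPUMirroredVariable, that rewrite now has to reach
# every contained per-device variable, not just the container.
tf.train.init_from_checkpoint("/tmp/pretrained_ckpt",
                              {"dense/": "dense/"})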
PiperOrigin-RevId: 216425002
PiperOrigin-RevId: 216424512
PiperOrigin-RevId: 216422334