| Commit message | Author | Age |
|
This is to match the existing behavior of tf.cond.
PiperOrigin-RevId: 216534084
|
PiperOrigin-RevId: 216533613
|
the calling graph.
This change makes a subtle difference to the behavior of existing
programs that create multiple iterators. Previously, one-shot
iterators would not inherit the graph seed, and so their values would
be non-deterministic (unless explicit seeds were set). After this
change, an iterator will inherit its seed from the outer
graph. Multiple one-shot iterators created from the same dataset will
inherit different seeds, matching the semantics of creating multiple
ops with the same graph seed.
PiperOrigin-RevId: 216532256
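The seed semantics described above can be sketched in pure Python — a simplified model for illustration, not TensorFlow's actual implementation:

```python
# A simplified model (not TensorFlow's actual implementation) of the
# semantics described above: every op created under a graph-level seed
# inherits that seed, but each op also receives a distinct op-level
# seed, so multiple one-shot iterators are deterministic yet different.

class Graph:
    def __init__(self, seed=None):
        self.seed = seed      # graph-level seed (None = non-deterministic)
        self._op_counter = 0  # incremented for every op created

    def get_seed(self, op_seed=None):
        """Return the (graph_seed, op_seed) pair for a newly created op."""
        if op_seed is None:
            op_seed = self._op_counter
        self._op_counter += 1
        if self.seed is None:
            return None, None
        return self.seed, op_seed

g = Graph(seed=42)
print(g.get_seed())  # (42, 0) -- first iterator
print(g.get_seed())  # (42, 1) -- same graph seed, different op-level seed
```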
|
reliance on importing tensorflow in the generated code.
PiperOrigin-RevId: 216528047
|
PiperOrigin-RevId: 216500702
|
PiperOrigin-RevId: 216495091
|
PiperOrigin-RevId: 216483746
|
PiperOrigin-RevId: 216483744
|
The CFG treats lambdas as ordinary expressions. The activity analysis ensures that variables masked by the lambda's arguments are not tracked.
Note: lambdas do not allow direct modification (we exclude indirect mutation via functions or methods).
PiperOrigin-RevId: 216456682
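The masking rule can be illustrated with a small stdlib-only sketch; `free_names` is a hypothetical helper, not AutoGraph's real activity analysis, and it ignores nested lambdas and keyword-only arguments:

```python
import ast

# Hypothetical sketch of the masking rule above: collect the names a
# lambda reads, excluding names bound by its own arguments, so the
# enclosing analysis does not track the masked names.

def free_names(lambda_src):
    node = ast.parse(lambda_src, mode="eval").body
    assert isinstance(node, ast.Lambda)
    masked = {a.arg for a in node.args.args}  # names bound by the lambda
    return {
        n.id
        for n in ast.walk(node.body)
        if isinstance(n, ast.Name) and n.id not in masked
    }

print(free_names("lambda x: x + y"))     # {'y'}: x is masked, y is tracked
print(free_names("lambda a, b: a * b"))  # set(): everything is masked
```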
|
PiperOrigin-RevId: 216446750
|
PiperOrigin-RevId: 216442569
|
PiperOrigin-RevId: 216432358
|
The core of the change is to have the gradient tape capture
distributed variables instead of plain ResourceVariables.
In other words, we move the distribution awareness from defun
down to the tape and rely on distributed-variable magic to provide us
with the right variable at runtime.
In tower context, we always watch the container (e.g. MirroredVariable).
In cross-tower context, we always watch all the components.
PiperOrigin-RevId: 216430530
|
PiperOrigin-RevId: 216424512
|
The existing code triggers parts of the TensorFlow runtime that may not have been fully
initialized at the time the parameters are evaluated. Lifting the parameters into a lambda and invoking
the lambda inside the test method achieves the proper initialization order.
PiperOrigin-RevId: 216419757
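A minimal, self-contained illustration of the lifting pattern; all names here are hypothetical stand-ins, not the actual test code:

```python
# Hypothetical stand-in for the pattern above: evaluating the parameter
# eagerly at definition time would touch uninitialized runtime state,
# while lifting it into a lambda defers evaluation to the test body.

_runtime_ready = False  # stand-in for TensorFlow runtime state

def make_param():
    if not _runtime_ready:
        raise RuntimeError("runtime not initialized yet")
    return 7

# Eager version would fail right here, at module load:
#   PARAMS = [make_param()]

# Lifted version: nothing is evaluated until the lambda is invoked.
PARAMS = [lambda: make_param()]

def test_method():
    global _runtime_ready
    _runtime_ready = True            # runtime is ready inside the test
    return [p() for p in PARAMS]     # -> [7]
```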
|
construct the state. This is part of a larger refactoring which removes the reliance on the deprecated Scope.created field.
PiperOrigin-RevId: 216418556
|
PiperOrigin-RevId: 216412380
|
function calls.
E.g. register_kl calls would trigger such warnings. This spam was exacerbated
by the fact that it happened before logging was initialized, so it was dumped
prominently to STDERR. Worse yet, it happened whether or not the user
imported any symbols from tf.distributions, as the relevant code is
executed when TensorFlow is imported.
PiperOrigin-RevId: 216396036
|
PiperOrigin-RevId: 216395709
|
Specifically:
- renames from def_function
- returns an object with well-defined methods
- doesn't force-retrace twice
- uses the Python descriptor API ( https://docs.python.org/3/howto/descriptor.html )
to remove the need for a tf.method
PiperOrigin-RevId: 216388957
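The descriptor trick mentioned in the last bullet can be sketched as follows — a minimal stand-in, not the real def_function implementation:

```python
import functools

# Minimal stand-in (not the real def_function code) showing how the
# descriptor protocol binds a function wrapper to an instance: because
# the wrapper implements __get__, attribute access returns a bound
# callable, so no separate tf.method helper is needed.

class Function:
    def __init__(self, fn):
        self._fn = fn
        functools.update_wrapper(self, fn)

    def __call__(self, *args, **kwargs):
        return self._fn(*args, **kwargs)

    def __get__(self, instance, owner):
        if instance is None:
            return self  # accessed on the class: return the wrapper itself
        # Accessed on an instance: pre-bind it, like plain functions do.
        return functools.partial(self.__call__, instance)

def function(fn):
    return Function(fn)

class Model:
    @function
    def double(self, x):
        return 2 * x

m = Model()
print(m.double(3))  # 6 -- bound automatically via __get__
```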
|
_SUMMARY_WRITER_INIT_COLLECTION_NAME collections from the summaryV2
implementation, replacing them with global variables.
PiperOrigin-RevId: 216383152
|
PiperOrigin-RevId: 216381943
|
flow conversion.
PiperOrigin-RevId: 216370439
|
PiperOrigin-RevId: 216370329
|
PiperOrigin-RevId: 216370193
|
PiperOrigin-RevId: 216368178
|
TensorFlowTestCase.
PiperOrigin-RevId: 216363450
|
estimators. This is required for TF hub use cases where users might send in new feature columns to old model code. Implemented this support by making V2 feature columns support the V1 API. This is needed temporarily and will definitely be removed by TF 2.0, possibly earlier depending on what guarantees are provided by TF hub.
The only case we don't allow here is mixing V2 shared embedding columns with V1 feature columns. V2 shared feature columns depend on a SharedEmbeddingState manager that would have to be passed in to the various APIs, and there wasn't really a clean way to make that work.
Mixing V2 feature columns with V1 shared embedding columns is fine, though, as are all other combinations.
PiperOrigin-RevId: 216359041
|
PiperOrigin-RevId: 216323343
|
Previously, we were passing the first (graph-level) seed for both the
graph-level and op-level seeds when creating a C++ dataset. This
change passes the op-level seed to the appropriate point, and adds a test
for the behavior with graph-but-not-op-level seeds.
PiperOrigin-RevId: 216280641
|
PiperOrigin-RevId: 216280197
|
PiperOrigin-RevId: 216270497
|
PiperOrigin-RevId: 216260437
|
PiperOrigin-RevId: 216260216
|
PiperOrigin-RevId: 216256115
|
was created with an input_signature.
PiperOrigin-RevId: 216253122
|
This changes the behavior of randomness-introducing datasets (`tf.data.Dataset.shuffle()`, `tf.data.experimental.shuffle_and_repeat()`, and `tf.data.experimental.RandomDataset`). Previously, when you used the same `tf.data.Dataset` object multiple times in a pipeline (e.g. by zipping two datasets derived from the same randomness-introducing dataset) *and* you did not specify an explicit `seed`, the implementation would choose different non-deterministic seeds for each use of the `Dataset` object.
With this change, the seed will be chosen once per `Dataset` (technically, once per `Dataset`-`Graph` combination, due to the vagaries of capturing state in `Dataset.make_one_shot_iterator()`), which means that all uses of the same dataset object will observe the same sequence of values.
This change also revealed a small bug in how `Dataset.shuffle(..., reshuffle_each_iteration=False)` is serialized when an explicit seed is specified. The op-level seed was dropped, which could lead to non-deterministic behavior. This change fixes that issue by forwarding the op-level seed to the appropriate place.
PiperOrigin-RevId: 216248013
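A rough pure-Python analogy for the once-per-`Dataset` seed choice (a simplified model, not the tf.data implementation):

```python
import random

# Rough analogy (not the tf.data implementation) for the behavior
# change above: the seed is fixed once per dataset object, so every
# use of the same object observes the same sequence of values.

class ShuffledDataset:
    def __init__(self, data, seed=None):
        # Seed chosen once at construction time, not once per use.
        self._seed = seed if seed is not None else random.randrange(2**31)
        self._data = list(data)

    def __iter__(self):
        rng = random.Random(self._seed)  # fresh RNG, same fixed seed
        items = list(self._data)
        rng.shuffle(items)
        return iter(items)

ds = ShuffledDataset(range(10))
# E.g. zipping `ds` with itself now yields matching pairs:
assert list(ds) == list(ds)
```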
|
Doesn't attempt to deal with cases where we might have already generated
the FunctionDef for the parent function, as in that case we cannot easily
modify the forward pass.
PiperOrigin-RevId: 216243224
|
PiperOrigin-RevId: 216242862
|
benchmarking. At the moment, it returns a default config with only the Grappler dependency optimizer disabled. Many benchmarks wrap the subgraph they want to time in control_flow_ops.group() to avoid including the overhead of copying the output back to the Python client in the measurement. In the graph, this only adds a control dependency between the subgraph output and the fetch node, which in turn (often) causes the dependency optimizer to turn all nodes in the graph into no-ops.
PiperOrigin-RevId: 216242463
|
PiperOrigin-RevId: 216230391
|
PiperOrigin-RevId: 216217509
|
PiperOrigin-RevId: 216212953
|
existing namespace.
PiperOrigin-RevId: 216211286
|
PiperOrigin-RevId: 216211279
|
PiperOrigin-RevId: 216210141
|
shared resources are functionally very similar to global variables and are initialized at the same time, but since workers only wait for global variables to be initialized, there is a race condition in which the shared resource is sometimes not ready.
PiperOrigin-RevId: 216208679
|
Will be helpful for specifying serving signatures when exporting SavedModels.
PiperOrigin-RevId: 216207284
|
`MapAndBatchDataset` whose user-provided functions have the property that each output argument takes its value directly from an input argument (e.g. `lambda x, y: (y, x)`). This specialization can produce the result without having to schedule the function using the executor.
PiperOrigin-RevId: 216206232
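The short-circuit can be sketched as follows; the helper and the explicit index permutation are assumptions for the example, not the real kernel:

```python
# Illustrative sketch of the specialization above (names hypothetical,
# not the real C++ kernel). A function such as `lambda x, y: (y, x)`
# merely forwards input components, so the batch can be assembled by
# indexing instead of scheduling the user function on the executor.

def specialized_map_and_batch(batch, perm):
    """Fast path: every output is an input component selected by index."""
    return [tuple(elem[i] for i in perm) for elem in batch]

batch = [(1, 10), (2, 20), (3, 30)]
# lambda x, y: (y, x) corresponds to the index permutation (1, 0):
print(specialized_map_and_batch(batch, (1, 0)))  # [(10, 1), (20, 2), (30, 3)]
```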
|
PiperOrigin-RevId: 216203408
|