| Commit message (Collapse) | Author | Age |
|
|
|
|
|
| |
flow conversion.
PiperOrigin-RevId: 216370439
|
|
|
|
| |
PiperOrigin-RevId: 216370329
|
|
|
|
| |
PiperOrigin-RevId: 216370193
|
|
|
|
|
|
|
|
|
|
| |
Previously we pre-reserved the visit state based on the number of
instructions, but then indexed it with the instruction unique ID,
which can be larger than the instruction count. This resulted in some
very expensive re-allocations, which can be eliminated by reserving a
correctly sized buffer.
PiperOrigin-RevId: 216369849
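The reservation bug above can be illustrated with a small sketch. This is plain Python, not the actual XLA C++ code; `make_visit_state` and the instruction dicts are hypothetical names for illustration only:

```python
# Hypothetical sketch: a visit-state table indexed by instruction unique ID.
# Reserving only len(instructions) slots forces expensive regrowth whenever
# an ID exceeds the instruction count; sizing by the maximum ID avoids it.

def make_visit_state(instructions):
    # IDs can be sparse, so size the table by the largest ID,
    # not by the number of instructions.
    max_id = max(inst["unique_id"] for inst in instructions)
    return [None] * (max_id + 1)

instructions = [
    {"unique_id": 0, "op": "parameter"},
    {"unique_id": 7, "op": "add"},  # unique ID 7 exceeds instruction count 2
]
state = make_visit_state(instructions)
state[7] = "visited"  # in bounds, no reallocation needed
```

In the buggy version, a table of size `len(instructions)` would have to grow on the first access to index 7.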
|
|
|
|
| |
PiperOrigin-RevId: 216369081
|
|
|
|
| |
PiperOrigin-RevId: 216368178
|
|
|
|
| |
PiperOrigin-RevId: 216367867
|
|
|
|
|
|
| |
tensorflowtestcase.
PiperOrigin-RevId: 216363450
|
|
|
|
|
|
|
|
|
|
| |
estimators. This is required for TF hub use cases where users might send in new feature columns to old model code. Implemented this support by making V2 feature columns support the V1 API. This is needed temporarily and would definitely be removed by TF 2.0, possibly earlier depending on what guarantees are provided by TF hub.
The only case we don't allow here is mixing V2 shared embedding columns with V1 feature columns. V2 shared FC's depend on a SharedEmbeddingState manager that would have to be passed in to the various APIs, and there wasn't a very clean way to make that work.
Mixing V2 feature columns with V1 shared embedding columns is fine, though, as are all other combinations.
PiperOrigin-RevId: 216359041
|
|
|
|
|
|
|
| |
If the graph contains a non-constant array with strings, it fails because
the array's size can't be estimated.
PiperOrigin-RevId: 216356162
|
|
|
|
| |
PiperOrigin-RevId: 216354906
|
|
|
|
| |
PiperOrigin-RevId: 216350134
|
|
|
|
| |
PiperOrigin-RevId: 216323343
|
|
|
|
| |
PiperOrigin-RevId: 216315110
|
|
|
|
| |
PiperOrigin-RevId: 216309111
|
|
|
|
|
|
| |
function to utils; Refactor EstimateSize() from memory_optimizer.cc to utils; some small changes to improve readability
PiperOrigin-RevId: 216307257
|
|
|
|
| |
PiperOrigin-RevId: 216303340
|
|
|
|
| |
PiperOrigin-RevId: 216299809
|
|
|
|
|
|
|
|
| |
- This CL introduces an input/output alias config in the HLO module that any HLO pass can configure. Once the alias_config is set, each backend needs to follow the contract at execution time to make sure the input and output are indeed aliased.
- Copy insertion, buffer assignment, and alias analysis have been updated to correctly honor the config and avoid any possible liveness interference.
PiperOrigin-RevId: 216299501
|
|
|
|
|
|
|
| |
call for better xprof tracing. Also annotate synchronous op execution with the session-run id (or step_id) as metadata leveraging the support introduced in cl/215985561.
This should enable highlighting the duration of a Session::Run and all the ops that ran in it for visualizing latency regressions in the case of CPU inference.
PiperOrigin-RevId: 216284682
|
|
|
|
| |
PiperOrigin-RevId: 216280913
|
|
|
|
|
|
|
|
|
| |
Previously, we were passing the first (graph-level) seed for both the
graph-level and op-level seeds when creating a C++ dataset. This
change passes the op-level seed to the appropriate point, and adds a test
for the behavior with graph-but-not-op-level seeds.
PiperOrigin-RevId: 216280641
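The seed fix above can be sketched in plain Python. The combination below is purely illustrative, not TensorFlow's actual seed plumbing; `make_shuffle_rng` is a hypothetical name:

```python
import random

# Hypothetical sketch of the fix: thread the op-level seed through instead
# of passing the graph-level seed twice. The bug was equivalent to calling
# make_shuffle_rng(graph_seed, graph_seed) for every op.

def make_shuffle_rng(graph_seed, op_seed):
    # Combine the two seeds into one RNG seed (illustrative scheme only).
    return random.Random(graph_seed * 100003 + op_seed)

# With both seeds threaded through correctly, the same (graph, op) pair
# reproduces the same shuffle order.
a1 = make_shuffle_rng(42, 7).sample(range(10), 10)
a2 = make_shuffle_rng(42, 7).sample(range(10), 10)
assert a1 == a2  # deterministic given both seeds
```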
|
|
|
|
| |
PiperOrigin-RevId: 216280197
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
are made according to https://github.com/tensorflow/community/pull/16.
I am keeping a few symbols deprecated that are not mentioned in the doc:
tf.diag - it seems best to keep it next to tf.linalg.diag, so that the two are easy to compare when deciding which one to use. The plan is to rename tf.diag to tf.tensor_diag.
tf.is_nan - similar to tf.is_inf, tf.is_finite, and tf.is_numeric_tensor, which are all getting deprecated and replaced by symbols in tf.debugging.
tf.string_to_number - other string endpoints in the root namespace are getting deprecated, e.g. tf.substr and tf.string_join.
tf.dequantize - all quantization ops should be under tf.quantize. I probably missed this one.
tf.check_numerics - similar to other debugging ops that are getting moved to tf.debugging.
tf.squared_difference - moved to the tf.math namespace; not popular enough relative to other math ops such as tf.add to justify keeping an endpoint in root.
tf.decode_raw - similar to other ops such as tf.decode_csv that are getting moved to tf.io.decode_csv.
PiperOrigin-RevId: 216278010
|
|
|
|
| |
PiperOrigin-RevId: 216270497
|
|
|
|
| |
PiperOrigin-RevId: 216270385
|
|
|
|
| |
PiperOrigin-RevId: 216265275
|
|
|
|
| |
PiperOrigin-RevId: 216263039
|
|
|
|
| |
PiperOrigin-RevId: 216260575
|
|
|
|
| |
PiperOrigin-RevId: 216260437
|
|
|
|
| |
PiperOrigin-RevId: 216260216
|
|
|
|
|
|
|
| |
Use the ArgDef::type field when available for propagating
the output types from a given unsupported operator.
PiperOrigin-RevId: 216257741
|
|
|
|
| |
PiperOrigin-RevId: 216256115
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The calculation of a spatial coordinate in the kernel and activations is not
dependent on which part of the contracted dimension (input feature) we are in.
Rather than nesting the loops, the loops can be siblings:
- One loop over spatial dimensions
- One loop over the input feature group
This reduces the nesting depth, which makes the code a little more readable,
and might be slightly faster due to work that is invariant in the spatial
loop getting hoisted out.
PiperOrigin-RevId: 216255839
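The restructuring above can be sketched in plain Python (the real change is in an XLA emitter; function names and the `* 2` stride stand-in are illustrative only):

```python
# Before: the spatial coordinate math is recomputed inside the
# feature-group loop, even though it does not depend on the group.
def spatial_coords_nested(spatial_dims, feature_groups):
    out = []
    for g in range(feature_groups):
        for s in range(spatial_dims):
            out.append((g, s * 2))  # "* 2" stands in for the stride math
    return out

# After: the invariant coordinate math lives in its own (sibling) loop
# and is reused across feature groups.
def spatial_coords_sibling(spatial_dims, feature_groups):
    coords = [s * 2 for s in range(spatial_dims)]
    return [(g, c) for g in range(feature_groups) for c in coords]

assert spatial_coords_nested(3, 2) == spatial_coords_sibling(3, 2)
```

Both produce the same coordinates; the second form just avoids redoing the spatial calculation per feature group.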
|
|
|
|
|
|
|
|
| |
was created with an input_signature.
PiperOrigin-RevId: 216253122
|
|\
| |
| |
| | |
PiperOrigin-RevId: 216253115
|
| |
| |
| |
| | |
PiperOrigin-RevId: 216252980
|
| |
| |
| |
| |
| |
| | |
Add a variant of CustomCall which specifies arbitrary layout constraints on the operands and result. The existing non-layout-constrained CustomCall is changed to have no layout preference and can now be assigned arbitrary layouts by layout assignment.
PiperOrigin-RevId: 216249615
|
| |
| |
| |
| | |
PiperOrigin-RevId: 216248418
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This changes the behavior of randomness-introducing datasets (`tf.data.Dataset.shuffle()`, `tf.data.experimental.shuffle_and_repeat()`, and `tf.data.experimental.RandomDataset`). Previously, when you used the same `tf.data.Dataset` object multiple times in a pipeline (e.g. by zipping two datasets derived from the same randomness-introducing dataset) *and* you did not specify an explicit `seed`, the implementation would choose different non-deterministic seeds for each use of the `Dataset` object.
With this change, the seed will be chosen once per `Dataset` (technically, once per `Dataset`-`Graph` combination, due to the vagaries of capturing state in `Dataset.make_one_shot_iterator()`), which means that all uses of the same dataset object will observe the same sequence of values.
This change also revealed a small bug in how `Dataset.shuffle(..., reshuffle_each_iteration=False)` is serialized when an explicit seed is specified. The op-level seed was dropped, which could lead to non-deterministic behavior. This change fixes that issue by forwarding the op-level seed to the appropriate place.
PiperOrigin-RevId: 216248013
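The new once-per-`Dataset` seeding contract can be sketched with a toy class (plain Python, not `tf.data` internals; `ShuffledDataset` is a hypothetical name):

```python
import random

# Hypothetical sketch: when no explicit seed is given, pick one
# non-deterministic seed per dataset object at construction time, so every
# use of the same object observes the same shuffle order. Previously the
# equivalent of a fresh seed was drawn on each use.

class ShuffledDataset:
    def __init__(self, elements, seed=None):
        # Seed is fixed once here, not once per iterator.
        self._seed = seed if seed is not None else random.randrange(2**31)
        self._elements = list(elements)

    def make_iterator(self):
        rng = random.Random(self._seed)
        order = self._elements[:]
        rng.shuffle(order)
        return iter(order)

ds = ShuffledDataset(range(5))
# Two uses of the same object (e.g. both halves of a zip) now agree:
assert list(ds.make_iterator()) == list(ds.make_iterator())
```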
|
| |
| |
| |
| | |
PiperOrigin-RevId: 216247929
|
|\ \
| | |
| | |
| | | |
PiperOrigin-RevId: 216245934
|
|\ \ \
| | | |
| | | |
| | | | |
PiperOrigin-RevId: 216245301
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
Doesn't attempt to deal with cases where we might have already generated
the functiondef for the parent function as in that case we cannot easily
modify the forward pass.
PiperOrigin-RevId: 216243224
|
| | | |
| | | |
| | | |
| | | | |
PiperOrigin-RevId: 216242862
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
benchmarking. At the moment, it returns a default config with only Grappler dependency optimizer disabled. Many benchmarks wrap the subgraph they want to time in control_flow_ops.group() to avoid including the overhead of copying the output back to the Python client in the measurement. In the graph, this only adds a control dependency between the subgraph output and the fetch node, which in turn (often) causes the dependency optimizer to turn all nodes in the graph into no-ops.
PiperOrigin-RevId: 216242463
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
sqrt(v + epsilon**2) and changed flag name accordingly.
PiperOrigin-RevId: 216240045
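The message above is truncated, but the new denominator form it names, sqrt(v + epsilon**2), can be contrasted with the common alternative of adding epsilon outside the root. This is a generic sketch, not the actual optimizer code, and the old form shown is an assumption:

```python
import math

# Sketch of the two epsilon placements for a denominator built from a
# second-moment estimate v (function names are illustrative only).

def denom_outside(v, eps):
    return math.sqrt(v) + eps       # epsilon added outside the root

def denom_inside(v, eps):
    return math.sqrt(v + eps**2)    # epsilon folded inside, as in the change

# Both stay bounded away from zero as v -> 0; the inside form evaluates to
# (approximately) eps at v == 0.
assert math.isclose(denom_inside(0.0, 1e-3), 1e-3)
```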
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
mechanism, since the meta optimizer only checks if it has been cancelled before running each sub-optimizer. We can add cancellation to each sub-optimizer if necessary.
PiperOrigin-RevId: 216234262
|
| | | |
| | | |
| | | |
| | | | |
PiperOrigin-RevId: 216230391
|
|\ \ \ \
| | | | |
| | | | |
| | | | | |
PiperOrigin-RevId: 216228494
|