| Commit message (Collapse) | Author | Age |

PiperOrigin-RevId: 216453979

PiperOrigin-RevId: 216452496

PiperOrigin-RevId: 216443201

PiperOrigin-RevId: 216424512

PiperOrigin-RevId: 216410913

PiperOrigin-RevId: 216400726

PiperOrigin-RevId: 216395709

PiperOrigin-RevId: 216392772

PiperOrigin-RevId: 216381943

PiperOrigin-RevId: 216370193

PiperOrigin-RevId: 216369081

PiperOrigin-RevId: 216354906

PiperOrigin-RevId: 216309111

function to utils; Refactor EstimateSize() from memory_optimizer.cc to utils; some small changes for readability improvement
PiperOrigin-RevId: 216307257

PiperOrigin-RevId: 216299809

call for better xprof tracing. Also annotate synchronous op execution with the session-run id (or step_id) as metadata leveraging the support introduced in cl/215985561.
This should enable highlighting the duration of a Session::Run and all the ops that ran in it for visualizing latency regressions in the case of CPU inference.
PiperOrigin-RevId: 216284682
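The annotation scheme this entry describes can be sketched in plain Python. This is an illustration only: the actual change is inside TensorFlow's C++ executor, and every name below is hypothetical.

```python
# Sketch: tag each traced op with the enclosing session-run's step id so a
# trace viewer (e.g. xprof) can group all ops belonging to one Session::Run.
trace_events = []

def run_step(step_id, ops):
    """Execute ops synchronously, annotating each trace event with step_id."""
    for op_name in ops:
        # The step_id metadata lets the viewer highlight every op that ran
        # inside this step, making per-step latency regressions visible.
        trace_events.append({"op": op_name, "step_id": step_id})

run_step(1, ["MatMul", "Add"])
run_step(2, ["MatMul"])

# All events from step 1 can now be selected together.
step1 = [e["op"] for e in trace_events if e["step_id"] == 1]
```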

PiperOrigin-RevId: 216280913

Previously, we were passing the first (graph-level) seed for both the
graph-level and op-level seeds when creating a C++ dataset. This
change passes the op-level seed to the appropriate point, and adds a test
for the behavior with graph-but-not-op-level seeds.
PiperOrigin-RevId: 216280641
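The fix can be illustrated with a minimal sketch of the seed-resolution rule. The helper `resolve_seeds` is hypothetical, not the TensorFlow API; TensorFlow's real logic is more involved.

```python
# Sketch: decide which (graph_seed, op_seed) pair is passed to the C++ dataset.
def resolve_seeds(graph_seed, op_seed):
    if graph_seed is None and op_seed is None:
        return None, None   # fully non-deterministic
    if op_seed is None:
        op_seed = 0         # graph-but-not-op-level seed: op seed defaults
    if graph_seed is None:
        graph_seed = 0
    return graph_seed, op_seed

# The bug was equivalent to returning (graph_seed, graph_seed): the op-level
# seed was dropped, so a distinct op seed had no effect.
```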

PiperOrigin-RevId: 216280197

are made according to https://github.com/tensorflow/community/pull/16.
I am keeping a few symbols deprecated that are not mentioned in the doc:
tf.diag - it seems best to keep it next to tf.linalg.diag, so that the two are easy to compare and decide which one to use. The plan is to rename tf.diag to tf.tensor_diag.
tf.is_nan - similar to tf.is_inf, tf.is_finite, tf.is_numeric_tensor, which are all getting deprecated and replaced by symbols in tf.debugging.
tf.string_to_number - other string endpoints in the root namespace are getting deprecated, e.g. tf.substr, tf.string_join.
tf.dequantize - all quantization ops should be under tf.quantize. I probably missed this one.
tf.check_numerics - similar to other debugging ops that are getting moved to tf.debugging.
tf.squared_difference - moved to the tf.math namespace and not popular enough (compared to ops such as tf.add) to justify keeping an endpoint in the root.
tf.decode_raw - similar to other ops such as tf.decode_csv, which is getting moved to tf.io.decode_csv.
PiperOrigin-RevId: 216278010
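The mechanics of a deprecated endpoint that forwards to its new location can be sketched generically. This is not TensorFlow's actual deprecation decorator; `deprecated_alias` and the stand-in function are illustrative.

```python
# Sketch: keep an old root-namespace endpoint alive as a warning alias of
# the new, namespaced symbol.
import warnings

def deprecated_alias(new_name, fn):
    def wrapper(*args, **kwargs):
        warnings.warn(f"deprecated; use {new_name} instead",
                      DeprecationWarning, stacklevel=2)
        return fn(*args, **kwargs)
    return wrapper

def squared_difference(x, y):  # stand-in for the tf.math implementation
    return (x - y) ** 2

# Root endpoint kept as a deprecated alias of the tf.math version.
tf_squared_difference = deprecated_alias("tf.math.squared_difference",
                                         squared_difference)
```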

PiperOrigin-RevId: 216260575

PiperOrigin-RevId: 216256115

PiperOrigin-RevId: 216253115

This changes the behavior of randomness-introducing datasets (`tf.data.Dataset.shuffle()`, `tf.data.experimental.shuffle_and_repeat()`, and `tf.data.experimental.RandomDataset`). Previously, when you used the same `tf.data.Dataset` object multiple times in a pipeline (e.g. by zipping two datasets derived from the same randomness-introducing dataset) *and* you did not specify an explicit `seed`, the implementation would choose different non-deterministic seeds for each use of the `Dataset` object.
With this change, the seed will be chosen once per `Dataset` (technically, once per `Dataset`-`Graph` combination, due to the vagaries of capturing state in `Dataset.make_one_shot_iterator()`), which means that all uses of the same dataset object will observe the same sequence of values.
This change also revealed a small bug in how `Dataset.shuffle(..., reshuffle_each_iteration=False)` is serialized when an explicit seed is specified. The op-level seed was dropped, which could lead to non-deterministic behavior. This change fixes that issue by forwarding the op-level seed to the appropriate place.
PiperOrigin-RevId: 216248013
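A toy model of the new behavior: a randomness-introducing dataset picks its seed once at construction, so every use of the same object observes the same sequence. The class below is illustrative, not the tf.data API.

```python
# Sketch: seed chosen once per dataset object (previously: once per use).
import random

class ShuffledDataset:
    def __init__(self, data, seed=None):
        self._seed = seed if seed is not None else random.randrange(2**31)
        self._data = list(data)

    def __iter__(self):
        # Each iteration replays the same per-object seed.
        rng = random.Random(self._seed)
        order = list(self._data)
        rng.shuffle(order)
        return iter(order)

ds = ShuffledDataset(range(10))
# Zipping the dataset with itself now yields identical sequences on both sides.
pairs = list(zip(ds, ds))
```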

PiperOrigin-RevId: 216247929

Doesn't attempt to deal with cases where we might have already generated
the FunctionDef for the parent function, as in that case we cannot easily
modify the forward pass.
PiperOrigin-RevId: 216243224
mechanism, since the meta optimizer only checks if it has been cancelled before running each sub-optimizer. We can add cancellation to each sub-optimizer if necessary.
PiperOrigin-RevId: 216234262
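The coarse-grained cancellation described here can be sketched as a flag checked only between sub-optimizers, so a long-running sub-optimizer is never interrupted mid-flight. Names below are illustrative, not Grappler's API.

```python
# Sketch: the meta optimizer polls a cancellation flag once per sub-optimizer.
def run_meta_optimizer(sub_optimizers, is_cancelled):
    completed = []
    for opt in sub_optimizers:
        if is_cancelled():        # checked before each sub-optimizer only
            break
        completed.append(opt())
    return completed

# Cancel arrives before the third sub-optimizer would start.
calls = iter([False, False, True])
result = run_meta_optimizer(
    [lambda: "pruning", lambda: "layout", lambda: "memory"],
    lambda: next(calls))
```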

PiperOrigin-RevId: 216217887

PiperOrigin-RevId: 216217509

`MapAndBatchDataset` whose user-provided functions have the property that each output argument takes its value directly from an input argument (e.g. `lambda x, y: (y, x)`). This specialization can produce the result without having to schedule the function using the executor.
PiperOrigin-RevId: 216206232
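The short-circuit idea can be sketched in a few lines: when every output of the mapped function is just one of its inputs, the result is an index permutation and the function never needs to run. In the sketch, `indices` is assumed to have been derived by inspecting the function; the helper name is hypothetical.

```python
# Sketch: apply a pure argument-permutation "map" by index shuffling instead
# of invoking the user function through an executor.
def short_circuit_map(batch, indices):
    """Rearrange each element's components per `indices`."""
    return [tuple(elem[i] for i in indices) for elem in batch]

# Equivalent to mapping `lambda x, y: (y, x)` over every element.
swapped = short_circuit_map([(1, "a"), (2, "b")], indices=[1, 0])
```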

PiperOrigin-RevId: 216205396

PiperOrigin-RevId: 216201732

benchmarks.
original runtime: 4.83492736816 secs
w/ cache runtime: 2.19033999443 secs
PiperOrigin-RevId: 216195286

PiperOrigin-RevId: 216187878

PiperOrigin-RevId: 216000752

PiperOrigin-RevId: 215995215

PiperOrigin-RevId: 215989259

stateless_random_uniform now takes minval+maxval and handles ints,
and stateless_normal/stateless_truncated_normal take mean+stddev.
Additionally, all of the stateless functions now have proper doc
strings.
This is step one of moving stateless random numbers out of contrib.
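The "stateless" contract described above can be sketched in pure Python: the output is a deterministic function of (seed, shape), with distribution parameters applied on top. This mimics the signature shape only; it is not the TensorFlow kernel.

```python
# Sketch: stateless uniform values derived by hashing (seed, index); calling
# twice with the same seed gives identical results, and no hidden RNG state
# is consumed.
import hashlib

def stateless_random_uniform(shape, seed, minval=0.0, maxval=1.0):
    n = 1
    for d in shape:
        n *= d
    out = []
    for i in range(n):
        h = hashlib.sha256(f"{seed}:{i}".encode()).digest()
        u = int.from_bytes(h[:8], "big") / 2**64   # uniform in [0, 1)
        out.append(minval + (maxval - minval) * u)
    return out

a = stateless_random_uniform((2, 2), seed=(1, 2), minval=-1.0, maxval=1.0)
b = stateless_random_uniform((2, 2), seed=(1, 2), minval=-1.0, maxval=1.0)
```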

PiperOrigin-RevId: 215969360

`MapAndBatchDataset` whose user-provided functions have the property that each output argument takes its value directly from an input argument (e.g. `lambda x, y: (y, x)`). This specialization can produce the result without having to schedule the function using the executor.
PiperOrigin-RevId: 215957592
Enable GPU tests for cond_v2.
PiperOrigin-RevId: 215956220

PiperOrigin-RevId: 215947463

PiperOrigin-RevId: 215946205

PiperOrigin-RevId: 215935319

PiperOrigin-RevId: 215935319
attr values that are not overridden e.g. transpose_a in the matmul op).
This is required for backward compatibility (a binary built via an older version
of TF should still run on a newer version of TF, where some ops may have added
attrs).
For non-eager graph building, the default attr values of graph ops are added by
tensorflow::AddDefaultsToNodeDef().
We ran into this issue when running the same S4TF test cases via eager APIs --
some tests failed due to "missing attrs", but are fixed by this patch.
PiperOrigin-RevId: 215927271
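The default-filling step is analogous to what `tensorflow::AddDefaultsToNodeDef()` does for graph ops; a minimal sketch over plain dicts (data shapes illustrative):

```python
# Sketch: fill in any attr the caller left unset from the op definition's
# defaults, so older binaries keep working when newer op versions add attrs.
def add_default_attrs(node_attrs, op_def_defaults):
    filled = dict(op_def_defaults)   # start from the defaults...
    filled.update(node_attrs)        # ...and overlay explicitly set attrs
    return filled

# A binary built before `transpose_b` existed omits it; the newer runtime
# supplies the default so the op still executes.
attrs = add_default_attrs({"transpose_a": True},
                          {"transpose_a": False, "transpose_b": False})
```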
An environment variable (TF_EAGER_ENABLE_SMALL_TENSOR_CPU_PINNING) is provided to turn this off if necessary (it's on by default).
PiperOrigin-RevId: 215821915
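An on-by-default kill switch of this kind can be sketched as follows. Only the variable name comes from the entry above; the exact value semantics TensorFlow uses are an assumption here.

```python
# Sketch: feature stays on unless the variable is explicitly set to an
# "off" value (assumed to be "0"/"false" for this illustration).
import os

def small_tensor_cpu_pinning_enabled(environ=os.environ):
    val = environ.get("TF_EAGER_ENABLE_SMALL_TENSOR_CPU_PINNING", "1")
    return val.lower() not in ("0", "false")

on_default = small_tensor_cpu_pinning_enabled({})
off = small_tensor_cpu_pinning_enabled(
    {"TF_EAGER_ENABLE_SMALL_TENSOR_CPU_PINNING": "0"})
```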
was flawed. Added better test coverage.
Also added an extra test for a related symbolic shape inference operation that I first suspected to be broken.
PiperOrigin-RevId: 215812753

PiperOrigin-RevId: 215802845

(used to be a segfault)
PiperOrigin-RevId: 215791737