Commit message | Author | Age
was flawed. Added better test coverage.
Also added an extra test for a related symbolic shape inference operation that I first suspected to be broken.
PiperOrigin-RevId: 215812753
PiperOrigin-RevId: 215802845
(used to be a segfault)
PiperOrigin-RevId: 215791737
PiperOrigin-RevId: 215791283
PiperOrigin-RevId: 215788485
PiperOrigin-RevId: 215780734
"character" is treated:
* BYTE: Position & length refer to bytes in the string. (Default)
* UTF8: The string is interpreted as UTF-8 encoded Unicode code points, and position & length are treated relative to them.
RELNOTES: Add option to get substring using Unicode characters
PiperOrigin-RevId: 215773373
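The BYTE/UTF8 distinction above can be illustrated in plain Python (this is a sketch of the semantics, not the TensorFlow kernel; the released API exposes the unit as an attribute on the substring op):

```python
# Illustrative sketch of the two units for position & length, using "héllo";
# 'é' occupies two bytes in UTF-8 but is a single Unicode code point.
s = "héllo"

def substr_bytes(s, pos, length):
    # BYTE (default): position & length index raw bytes of the UTF-8 encoding.
    return s.encode("utf-8")[pos:pos + length]

def substr_utf8(s, pos, length):
    # UTF8: position & length index Unicode code points.
    return s[pos:pos + length]

print(substr_bytes(s, 1, 2))  # b'\xc3\xa9' -- the two bytes of 'é'
print(substr_utf8(s, 1, 2))   # 'él' -- two code points
```

Note how the same (position, length) pair selects different text depending on the unit, which is exactly why the option matters for non-ASCII input.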
Switch or Merge node.".
PiperOrigin-RevId: 215772272
in a lambda
UNLOCK_FUNCTION(ir->out_mu) annotates that the lock is held on entry.
try_lock() should not be called.
PiperOrigin-RevId: 215769341
Previously, we were returning an unknown shape from
`Dataset::output_shapes()` rather than the "most specific compatible shape"
between the two inputs. While this does not cause correctness problems
(since the unknown shape *is* compatible), we gain the ability to
raise errors earlier when more shape information is available.
PiperOrigin-RevId: 215764530
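The "most specific compatible shape" merge rule can be sketched as follows (a minimal Python model mirroring the behavior of shape merging such as `TensorShape.merge_with`, not the actual C++ shape-inference code):

```python
def merge_dims(d1, d2):
    # None means "unknown"; a known dim wins, and mismatched known dims are
    # incompatible -- this is where errors can now be raised earlier.
    if d1 is None:
        return d2
    if d2 is None or d1 == d2:
        return d1
    raise ValueError(f"incompatible dimensions: {d1} vs {d2}")

def most_specific_compatible(shape_a, shape_b):
    # A fully unknown shape (None) is compatible with everything, so the
    # other shape is already the most specific compatible result.
    if shape_a is None:
        return shape_b
    if shape_b is None:
        return shape_a
    if len(shape_a) != len(shape_b):
        raise ValueError("incompatible ranks")
    return [merge_dims(a, b) for a, b in zip(shape_a, shape_b)]

print(most_specific_compatible([None, 3], [2, None]))  # [2, 3]
```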
(This indirectly handles "Const" outputs automagically, since they are always unstacked.)
PiperOrigin-RevId: 215749824
During graph construction, the shape function for AssignAddVariableOp etc.
would raise an error when the value being "assign add"ed to the variable
has an incompatible shape.
With eager execution, no such validation was performed, which triggered
an assertion failure in Eigen:
https://github.com/eigenteam/eigen-git-mirror/blob/7d97e1cbbe4424fda39e31c88def7c0863897640/unsupported/Eigen/CXX11/src/Tensor/TensorEvaluator.h#L479
This change prevents that assertion failure.
PiperOrigin-RevId: 215749071
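The validation being added amounts to a shape-compatibility check before the kernel runs; a minimal Python sketch of that check (the function name is hypothetical, and the real check lives in the C++ op implementation):

```python
def validate_assign_add(var_shape, value_shape):
    """Raise ValueError unless value_shape is compatible with var_shape.

    Shapes are lists of dims; None means a statically unknown dim, which
    is compatible with any known dim.
    """
    def incompatible():
        raise ValueError(
            f"cannot assign-add value of shape {value_shape} "
            f"to variable of shape {var_shape}")

    if len(var_shape) != len(value_shape):
        incompatible()
    for v, u in zip(var_shape, value_shape):
        if v is not None and u is not None and v != u:
            incompatible()
    return True

validate_assign_add([2, None], [2, 3])  # compatible: passes
```

Raising here surfaces a clear error at the op boundary instead of letting Eigen hit an internal assertion.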
Avoids LOG(ERROR) spam when the Executor is unable to find a CPU kernel.
PiperOrigin-RevId: 215738481
Previously, if the rank of the input to this transformation was
statically unknown, we would erroneously report that the output is a
scalar, and violate downstream shape integrity checks. Instead, in
that case the output shape should be unknown.
PiperOrigin-RevId: 215683027
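The fixed shape-function logic can be sketched as below, assuming for illustration a transformation whose output is a scalar when the input rank is known (the commit does not name the transformation, so this is purely schematic):

```python
def inferred_output_shape(input_shape):
    # input_shape is a list of dims, or None when the rank is statically
    # unknown. Previously the unknown-rank case fell through to the scalar
    # branch; now it is reported as unknown.
    if input_shape is None:
        return None  # unknown rank -> unknown output shape (the fix)
    return []        # known rank -> scalar output, as before
```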
lookback for Identity op. This fixes many performance regressions.
PiperOrigin-RevId: 215662393
PiperOrigin-RevId: 215617800
In the process, properly place nodes on devices in the collective graph key
test.
PiperOrigin-RevId: 215616146
`set_stats_aggregator`. `tag` gets prepended to all the statistics recorded as summaries, and `counter_prefix` sets the prefix for the statistics recorded as counters.
Note: `counter` defaults to `\tensorflow`, and `tag` and `prefix` get associated with the dataset (not the stats_aggregator).
PiperOrigin-RevId: 215609159
PiperOrigin-RevId: 215607769
PiperOrigin-RevId: 215607038
PiperOrigin-RevId: 215595078
PiperOrigin-RevId: 215590440
and the rank derived from the permutation array is 0 or 1, the shape is ambiguous and cannot be determined at graph construction time. In this case, forward the shape of the input.
PiperOrigin-RevId: 215583050
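The forwarding behavior follows from the fact that transposing a rank-0 or rank-1 tensor is the identity, so the input shape is always a safe answer there. A minimal Python sketch of the shape rule (not the actual shape-function code):

```python
def transposed_shape(input_shape, perm):
    # For rank 0 or 1 the only valid permutation is the identity, so the
    # output shape equals the input shape and can simply be forwarded.
    if len(perm) <= 1:
        return input_shape
    # For higher ranks, permute the input dims as usual.
    return [input_shape[p] for p in perm]

print(transposed_shape([5], [0]))        # [5] -- forwarded unchanged
print(transposed_shape([2, 3], [1, 0]))  # [3, 2]
```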
PiperOrigin-RevId: 215579950
PiperOrigin-RevId: 215560522
PiperOrigin-RevId: 215501709
PiperOrigin-RevId: 215492782
PiperOrigin-RevId: 215448397
may be replaced by automatic shape inference in TF 2.0 (or before).
Add an output_shapes attr to the While op to allow output shapes to be different from the incoming loop_vars.
PiperOrigin-RevId: 215446737
StaticRegexReplace.
PiperOrigin-RevId: 215371291
PiperOrigin-RevId: 215338658
`tf.data.Dataset.with_options()` to make it possible to respectively represent, get, and set options, such as optimization configuration, of a tf.data input pipeline.
PiperOrigin-RevId: 215310764
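The represent/get/set pattern described here can be modeled with a toy sketch (this is NOT the tf.data implementation, and the option name used is hypothetical):

```python
class Options:
    """A bag of pipeline-level settings, e.g. optimization toggles."""
    def __init__(self):
        self.map_parallelization = None  # hypothetical example option

class Dataset:
    def __init__(self, options=None):
        self._options = options if options is not None else Options()

    def options(self):
        # Get the options associated with this pipeline.
        return self._options

    def with_options(self, options):
        # Return a new pipeline carrying the given options; the original
        # dataset object is left untouched (functional style).
        return Dataset(options)

opts = Options()
opts.map_parallelization = True
ds = Dataset().with_options(opts)
print(ds.options().map_parallelization)  # True
```

Returning a new dataset from `with_options` keeps pipelines immutable, matching how other tf.data transformations compose.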
coordination.
PiperOrigin-RevId: 215309735
to replace it.
This change prepares `tf.data` for TensorFlow 2.0, where `tf.contrib` will no longer exist. It retains the pre-existing endpoints in `tf.contrib.data` with deprecation warnings.
Note there are some exceptions to the move:
* Deprecated symbols in `tf.contrib.data` have not been moved to `tf.data.experimental`, because replacements already exist.
* `tf.contrib.data.LMDBDataset` has not been moved, because we plan to move it to a SIG-maintained repository.
* `tf.contrib.data.assert_element_shape()` has not yet been moved, because it depends on functionality in `tf.contrib`, and it will move in a later change.
* `tf.contrib.data.AUTOTUNE` has not yet been moved, because we have not yet determined how to `tf_export()` a Python integer.
* The stats-related API endpoints have not yet appeared in a released version of TensorFlow, so these are moved to `tf.data.experimental` without retaining an endpoint in `tf.contrib.data`.
In addition, this change includes some build rule and ApiDef refactoring:
* Some of the "//third_party/tensorflow/python:training" dependencies had to be split in order to avoid a circular dependency.
* The `tf.contrib.stateless` ops now have a private core library for the generated wrappers (and accordingly are hidden in their ApiDef) so that `tf.data.experimental.sample_from_datasets()` can depend on them.
PiperOrigin-RevId: 215304249
This cleanup will make the future CL implementing lazy compilation simpler.
Includes some supporting changes:
- Teach NewInternalScope to create a scope that doesn't do shape inference. We
need this because we don't have a ShapeRefiner that has been run over the
entire graph available in the build_xla_ops pass.
- Add a WithAssignedDevice modifier to tensorflow::Scope.
- Make cc_op_gen write out an Operation field for nodes which may not
necessarily have any outputs. We already did this in most cases, but we
weren't doing it for nodes that have possibly-empty list outputs.
- Minor change renaming ops/xla_jit_op.cc to ops/xla_jit_ops.cc, now that we
have more than one XLA JIT op.
PiperOrigin-RevId: 215293817
PiperOrigin-RevId: 215292521
PiperOrigin-RevId: 215291195
Prior to this change, the lowering pass assumed that the If op
functions would be available in the If op's graph. If the If op is
defined in a defun and then called via eager execution, the functions
will be in the eager context, but not in the defun's graph. This
change makes the lowering pass correctly use the function library
passed in by the caller via GraphOptimizationPassOptions.
PiperOrigin-RevId: 215271990
PiperOrigin-RevId: 215269882
PiperOrigin-RevId: 215263951
PiperOrigin-RevId: 215254762
PiperOrigin-RevId: 215248737
PiperOrigin-RevId: 215243030
(1) Skip UnaryOpComposition rewrite if the optimized graph needs to have a gradient registered for all nodes.
PiperOrigin-RevId: 215188461
PiperOrigin-RevId: 215161850