| Commit message | Author | Age |
PiperOrigin-RevId: 215331087
overridden at runtime, allowing dynamic switching between inference and training
modes. Not fully implemented yet.
PiperOrigin-RevId: 215325071
PiperOrigin-RevId: 215324035
PiperOrigin-RevId: 215313156
|
|\ \
| | |
| | |
| | | |
PiperOrigin-RevId: 215312707
|
| | |
| | |
| | |
| | |
| | |
| | | |
EffectiveOperandPrecisionIsOutputPrecision list.
PiperOrigin-RevId: 215311766
`tf.data.Dataset.with_options()` to make it possible, respectively, to represent, get, and set options of a tf.data input pipeline, such as its optimization configuration.
PiperOrigin-RevId: 215310764
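The pattern described above can be sketched in plain Python. This is a minimal stand-in, not the TensorFlow implementation; `PipelineOptions` and `Pipeline` are hypothetical names, and the real `tf.data` classes are far richer.

```python
# Minimal sketch of the options pattern: an options object, a getter, and a
# non-mutating with_options() setter. All names here are hypothetical.
class PipelineOptions:
    def __init__(self, optimize=False):
        self.optimize = optimize

class Pipeline:
    def __init__(self, elements, options=None):
        self._elements = list(elements)
        self._options = options if options is not None else PipelineOptions()

    def with_options(self, options):
        # Returns a new pipeline carrying the given options; like
        # tf.data.Dataset.with_options(), nothing is mutated in place.
        return Pipeline(self._elements, options)

    def options(self):
        return self._options

ds = Pipeline([1, 2, 3])
tuned = ds.with_options(PipelineOptions(optimize=True))
print(ds.options().optimize, tuned.options().optimize)  # False True
```

Returning a new pipeline rather than mutating in place keeps option changes local to the derived pipeline.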
PiperOrigin-RevId: 215310536
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
coordination.
PiperOrigin-RevId: 215309735
PiperOrigin-RevId: 215307701
to replace it.
This change prepares `tf.data` for TensorFlow 2.0, where `tf.contrib` will no longer exist. It retains the pre-existing endpoints in `tf.contrib.data` with deprecation warnings.
Note there are some exceptions to the move:
* Deprecated symbols in `tf.contrib.data` have not been moved to `tf.data.experimental`, because replacements already exist.
* `tf.contrib.data.LMDBDataset` has not been moved, because we plan to move it to a SIG-maintained repository.
* `tf.contrib.data.assert_element_shape()` has not yet been moved, because it depends on functionality in `tf.contrib`, and it will move in a later change.
* `tf.contrib.data.AUTOTUNE` has not yet been moved, because we have not yet determined how to `tf_export()` a Python integer.
* The stats-related API endpoints have not yet appeared in a released version of TensorFlow, so these are moved to `tf.data.experimental` without retaining an endpoint in `tf.contrib.data`.
In addition, this change includes some build rule and ApiDef refactoring:
* Some of the "//third_party/tensorflow/python:training" dependencies had to be split in order to avoid a circular dependency.
* The `tf.contrib.stateless` ops now have a private core library for the generated wrappers (and accordingly are hidden in their ApiDef) so that `tf.data.experimental.sample_from_datasets()` can depend on them.
PiperOrigin-RevId: 215304249
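Retaining the old `tf.contrib.data` endpoints with deprecation warnings follows a common forwarding pattern, sketched below in plain Python. The helper name `deprecated_alias` and the stand-in `sample_from_datasets` body are hypothetical, not the actual TensorFlow machinery.

```python
import warnings

def deprecated_alias(new_fn, old_name, new_name):
    # Wrap new_fn so the old endpoint still works but emits a
    # DeprecationWarning pointing at the new location. Hypothetical helper.
    def wrapper(*args, **kwargs):
        warnings.warn(
            f"{old_name} is deprecated; use {new_name} instead.",
            DeprecationWarning, stacklevel=2)
        return new_fn(*args, **kwargs)
    return wrapper

def sample_from_datasets(datasets):  # stand-in for the moved API
    return [elem for ds in datasets for elem in ds]

# The old endpoint forwards to the new one, with a warning.
contrib_sample_from_datasets = deprecated_alias(
    sample_from_datasets,
    "tf.contrib.data.sample_from_datasets",
    "tf.data.experimental.sample_from_datasets")

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = contrib_sample_from_datasets([[1, 2], [3]])
print(result, len(caught))  # [1, 2, 3] 1
```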
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
output depends on the updates across all mirrors. Before this change,
update() would return a Mirrored value where each component was an
update to a single mirror. This caused a problem: for reading purposes,
other DistributionStrategy methods would consider it okay to read any
single component, so if you did something like
session.run(strategy.update(...)), the update would only be performed on
one replica. The fix is to have the output be a Mirrored value that is
actually the identity operation returning the output on that device, but
with a control dependency ensuring that the update actually happens on
all the replicas. This fix was already present in
MirroredVariable._assign_func; this CL moves it into update() and
generalizes it to multiple return values.
To disable this new grouping behavior, you may now pass
"grouped=False" to update(). For example, some callers (like Optimizer)
perform a lot of updates and prefer to group all of them together at
once for performance reasons. In this case, we still want the caller to
execute the update on all replicas, so we return an unwrapped value
instead of a Mirrored value. This has the happy side effect of removing
a bunch of unwrap calls in client code, since unwrapping was the only
safe way to use the Mirrored value we used to return.
PiperOrigin-RevId: 215301909
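The bug and the fix can be illustrated with a plain-Python sketch, where "reading one component" is modeled as forcing one thunk. This is a hypothetical stand-in: the real DistributionStrategy code uses graph control dependencies, not Python closures, and all names below are invented.

```python
class MirroredVar:
    def __init__(self, n_replicas):
        self.values = [0] * n_replicas

def update_lazy(var, fn):
    # Pre-fix behavior: each component is a thunk that performs only its
    # own replica's update when read.
    return [lambda i=i: fn(var, i) for i in range(len(var.values))]

def update_grouped(var, fn):
    # Post-fix behavior: reading any component first forces every
    # replica's update (the "control dependency"), then returns that
    # replica's output.
    thunks = [lambda i=i: fn(var, i) for i in range(len(var.values))]
    def make(i):
        def read():
            results = [t() for t in thunks]  # run ALL replica updates
            return results[i]
        return read
    return [make(i) for i in range(len(var.values))]

def inc(var, i):
    var.values[i] += 1
    return var.values[i]

v1 = MirroredVar(2)
update_lazy(v1, inc)[0]()     # only replica 0 gets updated
v2 = MirroredVar(2)
update_grouped(v2, inc)[0]()  # reading one component updates all replicas
print(v1.values, v2.values)   # [1, 0] [1, 1]
```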
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
better handle small values.
PiperOrigin-RevId: 215299532
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
The previous version was hitting a very slow path in `GetNodeAttr()`, which is expensive when the named attr is not found. This change inlines the logic of finding the two relevant attrs inside `GetFunctionNameAttr()` and avoids constructing a status object with a serialized `NodeDef` when the attr can't be found.
PiperOrigin-RevId: 215298411
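The fast-path idea generalizes: probe the candidate attrs with cheap lookups instead of going through a generic helper that constructs an expensive error object (here, a status with a serialized `NodeDef`) on every miss. A plain-Python sketch follows; the attr names "f" and "func" are assumptions for illustration, not necessarily the attrs the real `GetFunctionNameAttr()` inspects.

```python
# Sketch: check the two relevant attrs directly; a miss costs nothing,
# unlike a generic lookup that builds a rich error message per failure.
def get_function_name_attr(node_attrs):
    for key in ("f", "func"):        # assumed attr names for this sketch
        value = node_attrs.get(key)  # cheap dict probe, no error object
        if value is not None:
            return value
    return None

print(get_function_name_attr({"func": "my_fn"}))  # my_fn
print(get_function_name_attr({"T": "float"}))     # None
```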
|
| | | |
| | | |
| | | |
| | | | |
PiperOrigin-RevId: 215297961
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
released under Apache 2.0.
PiperOrigin-RevId: 215296386
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
remotely.
PiperOrigin-RevId: 215295504
|
| | | |
| | | |
| | | |
| | | | |
PiperOrigin-RevId: 215294817
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
This cleanup will make the future CL implementing lazy compilation simpler.
Includes some supporting changes:
- Teach NewInternalScope to create a scope that doesn't do shape inference. We
need this because, in the build_xla_ops pass, we don't have a ShapeRefiner
that has been run over the entire graph.
- Add a WithAssignedDevice modifier to tensorflow::Scope.
- Make cc_op_gen write out an Operation field for nodes which may not
necessarily have any outputs. We already did this in most cases, but we
weren't doing it for nodes that have possibly-empty list outputs.
- Minor change renaming ops/xla_jit_op.cc to ops/xla_jit_ops.cc, now that we
have more than one XLA JIT op.
PiperOrigin-RevId: 215293817
|
| | | |
| | | |
| | | |
| | | | |
PiperOrigin-RevId: 215292521
|
| | | |
| | | |
| | | |
| | | | |
PiperOrigin-RevId: 215291195
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
Adam optimizer's variable update formula.
PiperOrigin-RevId: 215290881
|
| | | |
| | | |
| | | |
| | | | |
PiperOrigin-RevId: 215288224
|
| | | |
| | | |
| | | |
| | | | |
PiperOrigin-RevId: 215287936
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
They shouldn't help given the automatic control dependencies, and are tricky
to capture in the general case.
PiperOrigin-RevId: 215282837
|
| | | |
| | | |
| | | |
| | | | |
PiperOrigin-RevId: 215282721
|
| | | |
| | | |
| | | |
| | | | |
PiperOrigin-RevId: 215278033
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
This removes the transitive keras and scipy dependencies in TensorFlow.
PiperOrigin-RevId: 215277190
|
| | | |
| | | |
| | | |
| | | | |
PiperOrigin-RevId: 215276816
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
- EncodeArg in C instead of Python.
- Also caches parsed device specs and device spec hashes.
- Adds a common way to register Python types in C.
- Fast-paths canonicalizing function inputs when no kwargs are passed.
- Sets the func name attr directly instead of creating an op to wrap it.
- Rewrites IsAttrsHelper without caching.
Before:
entry {
  name: "MicroBenchmarks.benchmark_defun_matmul_2_by_2_CPU"
  iters: 30000
  wall_time: 101.803263028
  extras {
    key: "examples_per_sec"
    value {
      double_value: 9822.86785562
    }
  }
}
After:
entry {
  name: "MicroBenchmarks.benchmark_defun_matmul_2_by_2_CPU"
  iters: 30000
  wall_time: 47.2899993261
  extras {
    key: "examples_per_sec"
    value {
      double_value: 21146.1199884
    }
  }
}
PiperOrigin-RevId: 215272962
|
| | | |
| | | |
| | | |
| | | | |
PiperOrigin-RevId: 215272497
|
| | | |
| | | |
| | | |
| | | | |
PiperOrigin-RevId: 215272308
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
Prior to this change, the lowering pass assumed that the If op
functions would be available in the If op's graph. If the If op is
defined in a defun and then called via eager execution, the functions
will be in the eager context, but not in the defun's graph. This
change makes the lowering pass correctly use the function library
passed in by the caller via GraphOptimizationPassOptions.
PiperOrigin-RevId: 215271990
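The shape of the fix can be sketched in plain Python: resolve the If op's branch functions from the library the caller passes in (via GraphOptimizationPassOptions in the real pass) rather than assuming the graph's own library contains them. The function names, the dict-based "library", and `lower_if_op` itself are hypothetical stand-ins.

```python
# Sketch: prefer the caller-supplied function library; the graph's own
# library may lack the branch functions when the If op came from a defun
# invoked under eager execution.
def lower_if_op(if_node, graph_library, caller_library=None):
    lib = caller_library if caller_library is not None else graph_library
    then_fn = lib.get(if_node["then_branch"])
    else_fn = lib.get(if_node["else_branch"])
    if then_fn is None or else_fn is None:
        raise LookupError("branch function not found in function library")
    return then_fn, else_fn

node = {"then_branch": "t", "else_branch": "e"}
eager_lib = {"t": "then_impl", "e": "else_impl"}
# The graph's library is empty, but lowering still succeeds:
print(lower_if_op(node, graph_library={}, caller_library=eager_lib))
```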
|
| | | |
| | | |
| | | |
| | | | |
PiperOrigin-RevId: 215269882
|
| | | |
| | | |
| | | |
| | | | |
PiperOrigin-RevId: 215266415
|
| | | |
| | | |
| | | |
| | | | |
PiperOrigin-RevId: 215266241
|
| | | |
| | | |
| | | |
| | | | |
PiperOrigin-RevId: 215263951
|
| | | |
| | | |
| | | |
| | | | |
PiperOrigin-RevId: 215259803
PiperOrigin-RevId: 215258743
|
| | | | |
| | | | |
| | | | |
| | | | | |
PiperOrigin-RevId: 215255826
|
| | | | |
| | | | |
| | | | |
| | | | | |
PiperOrigin-RevId: 215254762
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
https://github.com/tensorflow/community/pull/16.
In addition to the changes in the doc, I made the following updates (these changes make sense to me, though they were not called out when the doc was compiled):
* deprecate saved_model.builder.SavedModelBuilder - replaced with saved_model.SavedModelBuilder
* deprecate python_io.tf_record_iterator - replaced with io.tf_record_iterator
* deprecate python_io.TFRecordWriter - replaced with io.TFRecordWriter
* move reduce_join to tf.string
PiperOrigin-RevId: 215253944
|
| | | | |
| | | | |
| | | | |
| | | | | |
PiperOrigin-RevId: 215252408
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
keras.SimpleRNNCell.
PiperOrigin-RevId: 215249611
|
| | | | |
| | | | |
| | | | |
| | | | | |
PiperOrigin-RevId: 215248985
|
| | | | |
| | | | |
| | | | |
| | | | | |
PiperOrigin-RevId: 215248737
|
| | | | |
| | | | |
| | | | |
| | | | | |
PiperOrigin-RevId: 215246174
|
| | | | |
| | | | |
| | | | |
| | | | | |
PiperOrigin-RevId: 215243030
yongtang:22115-tf.contrib.image.transform-float16-gpu
PiperOrigin-RevId: 215240869
|
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
PiperOrigin-RevId: 215239710