PiperOrigin-RevId: 216479972

Support peephole and num_proj as well.
PiperOrigin-RevId: 216467578

PiperOrigin-RevId: 216463491

PiperOrigin-RevId: 216451263

PiperOrigin-RevId: 216442569

PiperOrigin-RevId: 216395709

PiperOrigin-RevId: 216245301

benchmarking. At the moment, it returns a default config with only the Grappler dependency optimizer disabled. Many benchmarks wrap the subgraph they want to time in control_flow_ops.group() to avoid including the overhead of copying the output back to the Python client in the measurement. In the graph, this only adds a control dependency between the subgraph output and the fetch node, which in turn (often) causes the dependency optimizer to turn all nodes in the graph into no-ops.
PiperOrigin-RevId: 216242463

PiperOrigin-RevId: 216211279

The extra spaces were confusing bash's string line continuation from the backslash `\` on the previous line.
PiperOrigin-RevId: 215964853

PiperOrigin-RevId: 215950376

PiperOrigin-RevId: 215808649

PiperOrigin-RevId: 215793932

"character" is treated:
* BYTE: Position & length refer to bytes in the string. (Default)
* UTF8: The string is interpreted as UTF-8 encoded Unicode code points, and position & length are treated relative to them.
RELNOTES: Add option to get substring using Unicode characters
PiperOrigin-RevId: 215773373

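The byte-versus-code-point distinction above can be illustrated in plain Python. This is only a sketch of the two indexing semantics, not the TensorFlow op itself; the function names are made up:

```python
def substr_bytes(s: str, pos: int, length: int) -> str:
    # BYTE unit: position and length count raw UTF-8 bytes.
    return s.encode("utf-8")[pos:pos + length].decode("utf-8", errors="replace")

def substr_utf8(s: str, pos: int, length: int) -> str:
    # UTF8 unit: position and length count Unicode code points.
    return s[pos:pos + length]

s = "héllo"  # 'é' occupies two bytes in UTF-8, but is one code point
print(substr_utf8(s, 1, 2))   # 'él' — two code points
print(substr_bytes(s, 1, 2))  # 'é' — the two bytes encoding 'é'
```

The results diverge as soon as the string contains any multi-byte character, which is why the unit matters.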
We will need this for remote-build presubmits to pass.
PiperOrigin-RevId: 215760872

tf.gradients currently returns [None] when the gradient of unconnected variables is requested. This backward-compatible change adds the option to instead return zero tensors that match the dimensions of the input tensor.
PiperOrigin-RevId: 215725488

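The None-versus-zeros behavior described above can be sketched in plain Python. The function and flag names here are illustrative, not the actual tf.gradients signature:

```python
def gradient_or_zeros(grad, x, unconnected_zeros=False):
    # grad is None when x has no gradient path to the loss. With the
    # (hypothetically named) option enabled, return a zero tensor shaped
    # like x instead of None.
    if grad is None and unconnected_zeros:
        return [0.0] * len(x)
    return grad

x = [1.0, 2.0, 3.0]
print(gradient_or_zeros(None, x))                          # None
print(gradient_or_zeros(None, x, unconnected_zeros=True))  # [0.0, 0.0, 0.0]
print(gradient_or_zeros([0.5, 0.5, 0.5], x))               # [0.5, 0.5, 0.5]
```

Returning zeros lets downstream code treat connected and unconnected inputs uniformly, without None checks.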
PiperOrigin-RevId: 215710849

https://github.com/pypa/auditwheel/issues/102
PiperOrigin-RevId: 215685104

This is particularly important when using --run_under with parallel_gpu_execute, since the environment variables control the execution.
PiperOrigin-RevId: 215637931

`set_stats_aggregator`. `tag` would get prepended to all the statistics recorded as summaries, and `counter_prefix` would set the prefix for the statistics recorded as counters.
Note: `counter` defaults to `\tensorflow`, and `tag` and `prefix` get associated with the dataset (not the stats_aggregator).
PiperOrigin-RevId: 215609159

PiperOrigin-RevId: 215605865

https://github.com/pypa/auditwheel/issues/102
PiperOrigin-RevId: 215486669

There is no known conceptual reason we can't use XLA, but in practice we have some build issues that will need to be fixed.
PiperOrigin-RevId: 215484942

PiperOrigin-RevId: 215483141

PiperOrigin-RevId: 215479788

PiperOrigin-RevId: 215477724

PiperOrigin-RevId: 215447391

`tf.data.Dataset.with_options()` to make it possible to respectively represent, get, and set options, such as optimization configuration, of a tf.data input pipeline.
PiperOrigin-RevId: 215310764

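The represent/get/set pattern described above can be sketched in plain Python. The `Options` and `Pipeline` classes below are hypothetical stand-ins, not the actual tf.data API:

```python
import copy

class Options:
    # Hypothetical options bag for an input pipeline.
    def __init__(self, apply_optimizations=True):
        self.apply_optimizations = apply_optimizations

class Pipeline:
    def __init__(self, options=None):
        self._options = options or Options()

    def options(self):
        # "get": return the options currently attached to the pipeline.
        return self._options

    def with_options(self, options):
        # "set": return a new pipeline carrying the given options,
        # leaving the original pipeline untouched.
        new = copy.copy(self)
        new._options = options
        return new

p = Pipeline()
q = p.with_options(Options(apply_optimizations=False))
print(p.options().apply_optimizations)  # True
print(q.options().apply_optimizations)  # False
```

Returning a new object from `with_options` keeps pipelines immutable, which matches how dataset transformations compose.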
to replace it.
This change prepares `tf.data` for TensorFlow 2.0, where `tf.contrib` will no longer exist. It retains the pre-existing endpoints in `tf.contrib.data` with deprecation warnings.
Note there are some exceptions to the move:
* Deprecated symbols in `tf.contrib.data` have not been moved to `tf.data.experimental`, because replacements already exist.
* `tf.contrib.data.LMDBDataset` has not been moved, because we plan to move it to a SIG-maintained repository.
* `tf.contrib.data.assert_element_shape()` has not yet been moved, because it depends on functionality in `tf.contrib`; it will move in a later change.
* `tf.contrib.data.AUTOTUNE` has not yet been moved, because we have not yet determined how to `tf_export()` a Python integer.
* The stats-related API endpoints have not yet appeared in a released version of TensorFlow, so these are moved to `tf.data.experimental` without retaining an endpoint in `tf.contrib.data`.
In addition, this change includes some build rule and ApiDef refactoring:
* Some of the "//third_party/tensorflow/python:training" dependencies had to be split in order to avoid a circular dependency.
* The `tf.contrib.stateless` ops now have a private core library for the generated wrappers (and accordingly are hidden in their ApiDef) so that `tf.data.experimental.sample_from_datasets()` can depend on them.
PiperOrigin-RevId: 215304249

PiperOrigin-RevId: 215291195

PiperOrigin-RevId: 215282721

This removes the transitive keras and scipy dependencies in TensorFlow.
PiperOrigin-RevId: 215277190

https://github.com/tensorflow/community/pull/16.
In addition to the changes in the doc, I made the following updates (these changes make sense to me and I didn't notice them when compiling the doc):
* deprecate saved_model.builder.SavedModelBuilder - replaced with saved_model.SavedModelBuilder
* deprecate python_io.tf_record_iterator - replaced with io.tf_record_iterator
* deprecate python_io.TFRecordWriter - replaced with io.TFRecordWriter
* move reduce_join to tf.string
PiperOrigin-RevId: 215253944

keras.SimpleRNNCell.
PiperOrigin-RevId: 215249611

Fixes #21719
PiperOrigin-RevId: 215154273

PiperOrigin-RevId: 215120867

PiperOrigin-RevId: 215073584

Currently, we run tests on machines with GPUs based on the "gpu" tag, and the tests automatically adapt to whether a GPU is available. Creating two targets, one tagged with "gpu" and one not, will make us run the tests in both modes.
PiperOrigin-RevId: 215045035

PiperOrigin-RevId: 215005698

core (and added that as a base class for all the contrib tests). Also changed the assertDatasetsEqual functions so they are both graph- and eager-compatible (took the code from CSVDatasetTest).
PiperOrigin-RevId: 215004892

the duration of a single RunInternal() call from RunHandlerPool. It is used for running inter-op closures with a global scheduler, which will in the future improve both median and tail latency (for use-cases like CPU inference). In the case that global pools aren't used, this change should be a no-op.
PiperOrigin-RevId: 214992852

Without this, the default toolchain is used for a subset of the build and the tests do not actually run on GPUs.
This uncovered a setup problem in the Docker image that needed fixing.
PiperOrigin-RevId: 214987676

Building binaries with XLA support does not enable it by default; it simply makes it accessible via default binary builds.
PiperOrigin-RevId: 214942824