| Commit message | Author | Age |
`MapAndBatchDataset` whose user-provided functions have the property that each output argument takes its value directly from an input argument (e.g. `lambda x, y: (y, x)`). This specialization can produce the result without having to schedule the function using the executor.
PiperOrigin-RevId: 216206232
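The short-circuit condition above can be illustrated with a pure-Python sketch (the real specialization lives in the C++ kernel; the helper name and sentinel technique here are invented for illustration): a function qualifies only if every output is one of the inputs passed through unchanged, in which case the result is just a permutation of the inputs.

```python
import inspect

def output_to_input_mapping(fn):
    """Return the input-index permutation fn computes, or None if any
    output is not taken directly from an input argument."""
    params = list(inspect.signature(fn).parameters)
    sentinels = [object() for _ in params]  # unique markers, one per input
    out = fn(*sentinels)
    if not isinstance(out, tuple):
        out = (out,)
    mapping = []
    for value in out:
        for i, s in enumerate(sentinels):
            if value is s:          # output passed through unchanged
                mapping.append(i)
                break
        else:
            return None  # output is computed, so the executor is needed
    return mapping

# The swap function from the commit message short-circuits:
print(output_to_input_mapping(lambda x, y: (y, x)))  # [1, 0]
# A function that manufactures a new value does not:
print(output_to_input_mapping(lambda x: (x, 0)))     # None
```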
PiperOrigin-RevId: 216203408
PiperOrigin-RevId: 215969360
`MapAndBatchDataset` whose user-provided functions have the property that each output argument takes its value directly from an input argument (e.g. `lambda x, y: (y, x)`). This specialization can produce the result without having to schedule the function using the executor.
PiperOrigin-RevId: 215957592
PiperOrigin-RevId: 215808649
PiperOrigin-RevId: 215788485
This change splits up large test files into smaller ones, and re-enables tests that were disabled for obsolete reasons.
PiperOrigin-RevId: 215785396
(This indirectly handles "Const" outputs automagically, since they are always unstacked.)
PiperOrigin-RevId: 215749824
PiperOrigin-RevId: 215710849
Previously, if the rank of the input to this transformation was
statically unknown, we would erroneously report that the output is a
scalar, and violate downstream shape integrity checks. Instead, in
that case the output shape should be unknown.
PiperOrigin-RevId: 215683027
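The shape-inference rule being fixed can be sketched in pure Python. The commit does not name the transformation in this excerpt, so a batching-style transformation is assumed purely for illustration; the point is that a statically unknown rank must propagate as "unknown shape", not collapse to a scalar.

```python
def batched_output_shape(input_shape, batch_size):
    """Toy shape inference for a batching-style transformation.

    input_shape is a tuple of dimensions (None for an unknown dimension),
    or None when the rank itself is statically unknown. Before the fix,
    the unknown-rank case erroneously fell through to a scalar () shape.
    """
    if input_shape is None:
        return None  # unknown rank in => unknown shape out, not ()
    return (batch_size,) + tuple(input_shape)

print(batched_output_shape((224, 224, 3), 32))  # (32, 224, 224, 3)
print(batched_output_shape(None, 32))           # None
```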
PiperOrigin-RevId: 215637785
`set_stats_aggregator`. `tag` gets prepended to all the statistics recorded as summaries, and `counter_prefix` sets the prefix for the statistics recorded as counters.
Note: `counter_prefix` defaults to `\tensorflow`, and `tag` and `counter_prefix` get associated with the dataset (not the stats_aggregator).
PiperOrigin-RevId: 215609159
PiperOrigin-RevId: 215607171
PiperOrigin-RevId: 215593867
PiperOrigin-RevId: 215592456
PiperOrigin-RevId: 215503549
`make_one_shot_iterator`, which is to be deprecated in the future.
PiperOrigin-RevId: 215491729
PiperOrigin-RevId: 215473351
`tf.data.Dataset.with_options()` to make it possible to respectively represent, get, and set options, such as optimization configuration, of a tf.data input pipeline.
PiperOrigin-RevId: 215310764
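The represent/get/set pattern described above can be sketched without TensorFlow. The `Options` field and class names below are invented for illustration; the real API is `tf.data.Options`, `Dataset.options()`, and `Dataset.with_options()`.

```python
class Options:
    """Toy stand-in for a pipeline options object (field name invented)."""
    def __init__(self, map_fusion=None):
        self.map_fusion = map_fusion  # None means "not set"

class Pipeline:
    """Toy stand-in for a dataset carrying its options."""
    def __init__(self, options=None):
        self._options = options if options is not None else Options()

    def options(self):
        """Get the options currently attached to the pipeline."""
        return self._options

    def with_options(self, options):
        """Return a new pipeline with the given options attached."""
        return Pipeline(options)

p = Pipeline().with_options(Options(map_fusion=True))
print(p.options().map_fusion)        # True
print(Pipeline().options().map_fusion)  # None (nothing set)
```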
to replace it.
This change prepares `tf.data` for TensorFlow 2.0, where `tf.contrib` will no longer exist. It retains the pre-existing endpoints in `tf.contrib.data` with deprecation warnings.
Note there are some exceptions to the move:
* Deprecated symbols in `tf.contrib.data` have not been moved to `tf.data.experimental`, because replacements already exist.
* `tf.contrib.data.LMDBDataset` has not been moved, because we plan to move it to a SIG-maintained repository.
* `tf.contrib.data.assert_element_shape()` has not yet been moved, because it depends on functionality in `tf.contrib`, and it will move in a later change.
* `tf.contrib.data.AUTOTUNE` has not yet been moved, because we have not yet determined how to `tf_export()` a Python integer.
* The stats-related API endpoints have not yet appeared in a released version of TensorFlow, so these are moved to `tf.data.experimental` without retaining an endpoint in `tf.contrib.data`.
In addition, this change includes some build rule and ApiDef refactoring:
* Some of the "//third_party/tensorflow/python:training" dependencies had to be split in order to avoid a circular dependency.
* The `tf.contrib.stateless` ops now have a private core library for the generated wrappers (and accordingly are hidden in their ApiDef) so that `tf.data.experimental.sample_from_datasets()` can depend on them.
PiperOrigin-RevId: 215304249
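The "retain the old endpoint with a deprecation warning" mechanism can be sketched as a thin wrapper. This is a minimal stdlib illustration, not TensorFlow's actual `tf_export`/deprecation machinery, and the wrapped function body here is a placeholder.

```python
import warnings

def deprecated_alias(new_fn, old_name, new_name):
    """Keep an old endpoint callable while steering users to the new one."""
    def wrapper(*args, **kwargs):
        warnings.warn(f"{old_name} is deprecated; use {new_name} instead.",
                      DeprecationWarning, stacklevel=2)
        return new_fn(*args, **kwargs)
    return wrapper

def sample_from_datasets(datasets):  # placeholder body for illustration
    return f"sampling from {len(datasets)} datasets"

old_endpoint = deprecated_alias(
    sample_from_datasets,
    "tf.contrib.data.sample_from_datasets",
    "tf.data.experimental.sample_from_datasets")

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_endpoint([1, 2, 3])
print(result)              # sampling from 3 datasets
print(caught[0].category)  # <class 'DeprecationWarning'>
```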
PiperOrigin-RevId: 215073584
Currently, we run tests on machines with GPUs based on the "gpu" tag, and the
tests automatically adapt to whether a GPU is available. Creating two targets,
one tagged with "gpu" and one not, will make us run the tests in both modes.
PiperOrigin-RevId: 215045035
core (and added that as a base class for all the contrib tests). Also changed the assertDatasetsEqual functions so that they are both graph- and eager-compatible (took the code from CSVDatasetTest) :)
PiperOrigin-RevId: 215004892
(finite) dataset to a single element.
PiperOrigin-RevId: 214852364
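Reducing a finite dataset to a single element is an ordinary left fold. A pure-Python sketch over any iterable (the function name here is illustrative, not the exact TensorFlow endpoint):

```python
def reduce_dataset(dataset, initial_state, reduce_fn):
    """Fold a finite dataset (any iterable here) into one element by
    threading a state through reduce_fn for each input element."""
    state = initial_state
    for element in dataset:
        state = reduce_fn(state, element)
    return state

print(reduce_dataset(range(5), 0, lambda s, x: s + x))  # 10
print(reduce_dataset(["a", "b", "c"], "", lambda s, x: s + x))  # abc
```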
PiperOrigin-RevId: 214781794
PiperOrigin-RevId: 214495925
PiperOrigin-RevId: 214296771
This change switches `tf.contrib.data.Optional` to use a `Structure` class to represent
the structure of its value, instead of `output_types`, `output_shapes`, and `output_classes` properties. It adds support for nesting `Optional` objects and representing their structure.
This change also makes a modification to the `Structure` class: `Structure.is_compatible_with(x)` now takes another `Structure` as the `x` argument, instead of a value. This makes it easier to work with nested structures (where we might not have a value readily available), and better matches the interface of other `is_compatible_with()` methods (e.g. in `tf.TensorShape` and `tf.DType`).
Finally, in the process of making this change, I observed possible crash-failures when a DT_VARIANT tensor containing another DT_VARIANT tensor is copied between CPU and GPU. This change "fixes" the immediate problem by raising an UnimplementedError, but more work will be necessary to support the full range of use cases.
PiperOrigin-RevId: 214198993
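The interface change to `is_compatible_with()` can be sketched with a toy `Structure` holding a dtype and a shape. The class below is invented for illustration; note that the argument is another structure, not a value, mirroring `tf.TensorShape.is_compatible_with` and `tf.DType.is_compatible_with`.

```python
class TensorStructure:
    """Toy structure: a dtype string plus a shape tuple, where None
    marks a statically unknown dimension."""
    def __init__(self, dtype, shape):
        self.dtype, self.shape = dtype, tuple(shape)

    def is_compatible_with(self, other):
        """Compare against another structure (not a value): dtypes must
        match, ranks must match, and each known dimension must agree."""
        if self.dtype != other.dtype or len(self.shape) != len(other.shape):
            return False
        return all(a is None or b is None or a == b
                   for a, b in zip(self.shape, other.shape))

s = TensorStructure("float32", (None, 3))
print(s.is_compatible_with(TensorStructure("float32", (8, 3))))  # True
print(s.is_compatible_with(TensorStructure("int32", (8, 3))))    # False
print(s.is_compatible_with(TensorStructure("float32", (8, 4))))  # False
```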
PiperOrigin-RevId: 214173896
PiperOrigin-RevId: 214058098
MapDataset.
PiperOrigin-RevId: 213555982
drop_remainder)`, which can be used for combining elements of the input dataset into "windows". A window is itself a finite dataset and, among other things, can be used for generalized batching (see https://github.com/tensorflow/community/pull/5 for details).
PiperOrigin-RevId: 213360134
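The `window(size, shift, stride, drop_remainder)` signature can be sketched over plain Python sequences, with each yielded window represented as a list standing in for a small dataset:

```python
def window(dataset, size, shift=None, stride=1, drop_remainder=False):
    """Group elements of an iterable into windows of `size` elements,
    starting a new window every `shift` elements and taking every
    `stride`-th element within a window."""
    shift = shift if shift is not None else size
    elements = list(dataset)
    windows = []
    start = 0
    while start < len(elements):
        # A full window spans (size - 1) * stride + 1 source positions.
        w = elements[start:start + (size - 1) * stride + 1:stride]
        if w and (len(w) == size or not drop_remainder):
            windows.append(w)
        start += shift
    return windows

print(window(range(7), size=3))                      # [[0, 1, 2], [3, 4, 5], [6]]
print(window(range(7), size=3, drop_remainder=True)) # [[0, 1, 2], [3, 4, 5]]
```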
self.test_session() has been deprecated in 9962eb5e84b15e309410071b06c2ed2d6148ed44 as its name confuses readers of the test. Moving to cached_session() instead which is more explicit about:
* the fact that the session may be reused.
* the session is not closed even when doing a "with self.test_session()" statement.
PiperOrigin-RevId: 213326167
PiperOrigin-RevId: 212551965
Rework based on Mark's review
self.test_session() has been deprecated in 9962eb5e84b15e309410071b06c2ed2d6148ed44 as its name confuses readers of the test. Moving to cached_session() instead which is more explicit about:
* the fact that the session may be reused.
* the session is not closed even when doing a "with self.test_session()" statement.
PiperOrigin-RevId: 212336464
self.test_session() has been deprecated in 9962eb5e84b15e309410071b06c2ed2d6148ed44 as its name confuses readers of the test. Moving to cached_session() instead which is more explicit about:
* the fact that the session may be reused.
* the session is not closed even when doing a "with self.test_session()" statement.
PiperOrigin-RevId: 212336258
PiperOrigin-RevId: 212177437
PiperOrigin-RevId: 212054927
removing unused `graph_def_version` field
PiperOrigin-RevId: 212054031
The examples in interleave are quite helpful. I just added a reference to these examples.
`tf.data.Dataset.interleave`.
Unlike `tf.contrib.data.parallel_interleave`, whose parallelism is tied to the `cycle_length` argument, the newly introduced `num_parallel_calls` argument of `tf.data.Dataset.interleave` is decoupled from the `cycle_length` argument and identifies the degree of parallelism to use for fetching output elements.
PiperOrigin-RevId: 211886816
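The `cycle_length`/`block_length` semantics that `num_parallel_calls` must preserve can be sketched sequentially in pure Python. This sketch omits the parallel fetching entirely; it only shows the round-robin ordering contract, and its handling of iterator replacement is simplified relative to the real kernel.

```python
def interleave(dataset, map_fn, cycle_length, block_length=1):
    """Map each input element to a sub-dataset, keep cycle_length
    sub-iterators open, and pull block_length elements from each in
    round-robin order."""
    inputs = iter(dataset)
    open_iters = []

    def refill():
        while len(open_iters) < cycle_length:
            try:
                open_iters.append(iter(map_fn(next(inputs))))
            except StopIteration:
                return  # no more input elements to open

    out = []
    refill()
    while open_iters:
        i = 0
        while i < len(open_iters):
            block = []
            for _ in range(block_length):
                try:
                    block.append(next(open_iters[i]))
                except StopIteration:
                    break
            out.extend(block)
            if len(block) < block_length:
                del open_iters[i]  # exhausted: open the next input instead
                refill()
            else:
                i += 1
    return out

# Two sub-datasets of three copies each, visited round-robin:
print(interleave([1, 2], lambda x: [x] * 3, cycle_length=2))  # [1, 2, 1, 2, 1, 2]
```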
- Remove unnecessary use of test_session() in tests that run with eager
execution enabled.
- Use cached_session() instead of test_session()
(self.test_session() has been deprecated in
9962eb5e84b15e309410071b06c2ed2d6148ed44 as its name confuses readers of the
test. Moving to cached_session() instead which is more explicit about:
* the fact that the session may be reused.
* the session is not closed even when doing a "with self.test_session()"
statement.)
PiperOrigin-RevId: 211562969
`MapDataset`.
PiperOrigin-RevId: 211520001
input datasets are dictionaries with different sets of keys.
FIXES #20626
REL_NOTES: Bug fix in `tf.data.Dataset.concatenate()`: it now throws an error when trying to concatenate two datasets whose elements are dictionaries with different sets of keys.
PiperOrigin-RevId: 209845337
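The check being added amounts to comparing the key sets of the two datasets' element structures up front. A minimal sketch (the helper name is invented; the real check lives in the dataset structure-compatibility logic):

```python
def check_concatenatable(elem_a, elem_b):
    """Raise if two dict-structured dataset elements cannot be
    concatenated because their key sets differ (GitHub issue #20626)."""
    if isinstance(elem_a, dict) and isinstance(elem_b, dict):
        if set(elem_a) != set(elem_b):
            raise TypeError(
                "Cannot concatenate datasets with different keys: "
                f"{sorted(elem_a)} vs {sorted(elem_b)}")

check_concatenatable({"image": 1, "label": 2}, {"image": 3, "label": 4})  # ok
try:
    check_concatenatable({"image": 1}, {"image": 1, "label": 2})
except TypeError as e:
    print(e)  # Cannot concatenate datasets with different keys: ...
```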
self.test_session() has been deprecated in cl/208545396 as its behavior confuses readers of the test. Moving to self.session() instead.
PiperOrigin-RevId: 209696110
Previously, a function instantiation error (e.g. in `Dataset.map()`) would lead
to an error in each GetNext() call that attempted to use the function. Moving this
to iterator instantiation time has the benefit that the error will be reported
once when the initialization op is executed, which has a more helpful stack
trace, since it should not be conflated with other potential op failures.
PiperOrigin-RevId: 209633511
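The fail-fast idea can be sketched with a toy iterator: validate the user function once at construction, so a broken function produces one clear error there instead of an error inside every `get_next()` call. The class and its trivial validation are invented for illustration.

```python
class Iterator:
    """Toy iterator that instantiates its map function up front."""
    def __init__(self, elements, map_fn):
        self._instantiate(map_fn)  # fail fast, once, at creation time
        self._it = (map_fn(x) for x in elements)

    def _instantiate(self, map_fn):
        # Stand-in for real function instantiation (tracing, type checks).
        if not callable(map_fn):
            raise TypeError(f"map function is not callable: {map_fn!r}")

    def get_next(self):
        return next(self._it)

it = Iterator([1, 2, 3], lambda x: x * 10)
print(it.get_next())  # 10
try:
    Iterator([1, 2, 3], map_fn=None)  # error surfaces at construction
except TypeError as e:
    print(e)  # map function is not callable: None
```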