| Commit message | Author | Age |
PiperOrigin-RevId: 216443201
PiperOrigin-RevId: 216400726
PiperOrigin-RevId: 216395709
PiperOrigin-RevId: 216370193
are made according to https://github.com/tensorflow/community/pull/16.
I am keeping a few symbols deprecated that are not mentioned in the doc:
* tf.diag - it seems best to keep it next to tf.linalg.diag, so that the two are easy to compare when deciding which one to use. The plan is to rename tf.diag to tf.tensor_diag.
* tf.is_nan - similar to tf.is_inf, tf.is_finite, and tf.is_numeric_tensor, which are all getting deprecated and replaced by symbols in tf.debugging.
* tf.string_to_number - other string endpoints in the root namespace are getting deprecated, e.g. tf.substr and tf.string_join.
* tf.dequantize - all quantization ops should be under tf.quantize. I probably missed this one.
* tf.check_numerics - similar to other debugging ops that are getting moved to tf.debugging.
* tf.squared_difference - moved to the tf.math namespace, and not popular enough (compared with math ops such as tf.add) to justify keeping an endpoint in root.
* tf.decode_raw - similar to other ops, such as tf.decode_csv, which is getting moved to tf.io.decode_csv.
PiperOrigin-RevId: 216278010
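The endpoint moves above follow the usual deprecation pattern: the old root symbol warns and forwards to the new location. A minimal pure-Python sketch of that pattern (the decorator and the stand-in op below are illustrative, not TensorFlow's actual deprecation machinery):

```python
import functools
import warnings

def deprecated_endpoint(new_name):
    """Wrap a function so calling the old endpoint warns and forwards."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                "%s is deprecated; use %s instead" % (func.__name__, new_name),
                DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator

# Hypothetical stand-in for a math op whose root endpoint is deprecated.
def squared_difference(x, y):
    return (x - y) ** 2

old_squared_difference = deprecated_endpoint(
    "tf.math.squared_difference")(squared_difference)
```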
PiperOrigin-RevId: 216217509
PiperOrigin-RevId: 215995215
stateless_random_uniform now takes minval+maxval and handles ints,
and stateless_normal/stateless_truncated_normal take mean+stddev.
Additionally, all of the stateless functions now have proper doc
strings.
This is step one of moving stateless random numbers out of contrib.
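The stateless contract is that outputs are a pure function of the shape and seed. A pure-Python sketch of that contract, using SHA-256 as a stand-in for the counter-based kernels TensorFlow actually uses (names and signature here are illustrative):

```python
import hashlib

def stateless_random_uniform(shape, seed, minval=0, maxval=10):
    """Deterministic 'stateless' integer sampler: same seed -> same values.

    Sketch of the stateless contract only; the real kernels use
    counter-based RNGs, not SHA-256.
    """
    n = 1
    for dim in shape:
        n *= dim
    out = []
    for i in range(n):
        # Each output element is a pure function of (seed, index).
        digest = hashlib.sha256(
            ("%d:%d:%d" % (seed[0], seed[1], i)).encode())
        value = int.from_bytes(digest.digest()[:8], "big")
        out.append(minval + value % (maxval - minval))
    return out
```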
PiperOrigin-RevId: 215802845
PiperOrigin-RevId: 215788485
"character" is treated:
* BYTE: Position & length refer to bytes in the string. (Default)
* UTF8: The string is interpreted as UTF-8 encoded Unicode code points, and position & length are treated relative to them.
RELNOTES: Add option to get substring using Unicode characters
PiperOrigin-RevId: 215773373
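A pure-Python sketch of the BYTE vs. UTF8 unit semantics described above (this models the behavior only, not the tf.substr kernel):

```python
def substr(s, pos, length, unit="BYTE"):
    """Substring where `unit` controls how a 'character' is treated.

    BYTE: pos/length index bytes of the UTF-8 encoding (the default).
    UTF8: pos/length index Unicode code points.
    """
    if unit == "BYTE":
        data = s.encode("utf-8")
        return data[pos:pos + length].decode("utf-8", errors="replace")
    elif unit == "UTF8":
        return s[pos:pos + length]
    raise ValueError("unknown unit: %r" % unit)
```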
StaticRegexReplace.
PiperOrigin-RevId: 215371291
to replace it.
This change prepares `tf.data` for TensorFlow 2.0, where `tf.contrib` will no longer exist. It retains the pre-existing endpoints in `tf.contrib.data` with deprecation warnings.
Note there are some exceptions to the move:
* Deprecated symbols in `tf.contrib.data` have not been moved to `tf.data.experimental`, because replacements already exist.
* `tf.contrib.data.LMDBDataset` has not been moved, because we plan to move it to a SIG-maintained repository.
* `tf.contrib.data.assert_element_shape()` has not yet been moved, because it depends on functionality in `tf.contrib`, and it will move in a later change.
* `tf.contrib.data.AUTOTUNE` has not yet been moved, because we have not yet determined how to `tf_export()` a Python integer.
* The stats-related API endpoints have not yet appeared in a released version of TensorFlow, so these are moved to `tf.data.experimental` without retaining an endpoint in `tf.contrib.data`.
In addition, this change includes some build rule and ApiDef refactoring:
* Some of the "//third_party/tensorflow/python:training" dependencies had to be split in order to avoid a circular dependency.
* The `tf.contrib.stateless` ops now have a private core library for the generated wrappers (and accordingly are hidden in their ApiDef) so that `tf.data.experimental.sample_from_datasets()` can depend on them.
PiperOrigin-RevId: 215304249
NOTE: All ops and kernels previously defined in
tensorflow/contrib/data have had their names prefixed with
"Experimental" to indicate that they are not (yet) stable, and thus
not subject to backwards or forwards compatibility guarantees.
PiperOrigin-RevId: 214940819
PiperOrigin-RevId: 214940748
(finite) dataset to a single element.
PiperOrigin-RevId: 214852364
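The transformation folds a user-supplied function over the dataset to produce one element. A sketch of that semantics over a plain Python iterable (names are illustrative, not the tf.data API):

```python
def reduce_dataset(dataset, initial_state, reduce_func):
    """Reduce a finite dataset (here: any finite iterable) to one element."""
    state = initial_state
    for element in dataset:  # the dataset must be finite
        state = reduce_func(state, element)
    return state
```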
based on standard ranges.
PiperOrigin-RevId: 214796357
PiperOrigin-RevId: 214700693
according to https://github.com/tensorflow/community/pull/16.
PiperOrigin-RevId: 214680285
length" is defined:
* BYTE: The number of bytes in each string. (Default)
* UTF8: The number of UTF-8 encoded Unicode code points in each string.
RELNOTES: Add option to calculate string length in Unicode characters
PiperOrigin-RevId: 214478470
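A pure-Python sketch of the two length units described above (behavior only, not the TF kernel):

```python
def string_length(s, unit="BYTE"):
    """Length of a string, where `unit` selects what is counted.

    BYTE: number of bytes in the UTF-8 encoding (the default).
    UTF8: number of Unicode code points.
    """
    if unit == "BYTE":
        return len(s.encode("utf-8"))
    elif unit == "UTF8":
        return len(s)
    raise ValueError("unknown unit: %r" % unit)
```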
PiperOrigin-RevId: 214177065
PiperOrigin-RevId: 214173896
PiperOrigin-RevId: 213863392
standard python `print` function, and deprecates the old `tf.Print` operator (to be removed in v2.0).
It follows the design doc specified in https://github.com/tensorflow/community/pull/14 and additionally incorporates the community feedback and design review decisions.
This CL adds two new internal graph operators: a StringFormat operator that formats a template string with a list of input tensors to insert into the string and outputs a string scalar containing the result, and a PrintV2 operator that prints a string scalar to a specified output stream or logging level.
The formatting op is exposed at `tf.strings.Format`. A new python method is exposed at `tf.print` that takes a list of inputs that may be nested structures and may contain tensors, formats them nicely using the formatting op, and returns a PrintV2 operator that prints them. In Eager mode and inside defuns this PrintV2 operator will automatically be executed, but in graph mode it will need to be either added to `sess.run`, or used as a control dependency for other operators being executed.
As compared to the previous print function, the new print function:
- Has an API that more closely aligns with the standard python3 print
- Supports changing the print logging level/output stream
- Allows printing arbitrary (optionally nested) data structures, as opposed to just flat lists of tensors
- Supports printing sparse tensors
- Changes the printed tensor format to show a more meaningful summary (recursively printing the first and last elements of each tensor dimension, instead of just the first few elements of the tensor regardless of dimension).
PiperOrigin-RevId: 213709924
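The "more meaningful summary" in the last point keeps the first and last few entries per dimension and elides the middle. A pure-Python sketch over nested lists (illustrative only, not TensorFlow's formatter):

```python
def summarize(tensor, edge_items=3):
    """Recursively keep the first/last `edge_items` entries per dimension,
    eliding the middle with '...'.
    """
    if not isinstance(tensor, list):
        return str(tensor)
    if len(tensor) > 2 * edge_items:
        items = tensor[:edge_items] + ["..."] + tensor[-edge_items:]
    else:
        items = tensor
    return "[" + " ".join(summarize(i, edge_items) for i in items) + "]"
```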
drop_remainder)`, which can be used for combining elements of input dataset into "windows". A window
is itself a finite dataset and, among other things, can be used for generalized batching (see https://github.com/tensorflow/community/pull/5 for details).
PiperOrigin-RevId: 213360134
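A pure-Python sketch of the windowing semantics over a list (the real op yields nested datasets; here `shift` is assumed to default to `size`):

```python
def window(dataset, size, shift=None, stride=1, drop_remainder=False):
    """Combine elements of `dataset` into windows, each itself a finite
    sequence."""
    if shift is None:
        shift = size
    elements = list(dataset)
    windows = []
    start = 0
    while start < len(elements):
        w = elements[start:start + size * stride:stride]
        # Keep short trailing windows only when drop_remainder is False.
        if w and (len(w) == size or not drop_remainder):
            windows.append(w)
        start += shift
    return windows
```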
tf.decode_csv:
- Modify shape assertions so that both graph and eager accept rank 0 (scalar) and rank 1 tensors as `record_defaults`, and raise an error on other shapes.
- Make tests run in both graph and eager modes
Fixes #22030.
PiperOrigin-RevId: 212877058
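A sketch of the relaxed shape assertion, using list nesting depth to stand in for tensor rank (illustrative, not the actual check):

```python
def check_record_default(default):
    """Accept rank-0 (scalar) and rank-1 defaults; reject higher ranks."""
    rank = 0
    value = default
    while isinstance(value, list):
        rank += 1
        value = value[0] if value else None
    if rank > 1:
        raise ValueError(
            "record_defaults must be rank 0 or 1, got rank %d" % rank)
    return rank
```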
the start of the substring will be counted backwards from the end of the string.
RELNOTES: Support negative positions for tf.substr
PiperOrigin-RevId: 212720335
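A pure-Python sketch of the negative-position semantics (over code points; illustrative only):

```python
def substr_negative(s, pos, length):
    """Substring where a negative `pos` counts backwards from the end of
    the string."""
    if pos < 0:
        pos += len(s)
    return s[pos:pos + length]
```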
performance.
PiperOrigin-RevId: 212557406
Allow each Resource to manage multiple streams that share the same quantile config (number of quantiles and epsilon). Previously each resource managed only one stream, so we had to create as many resources as there are features, which is cumbersome when the input is high-dimensional: if 1000 features use 100 quantiles (which is hardcoded today), then 1000 resources are required. This CL makes the number of resources linear in the number of parameter servers: if 2 parameter servers are present, then only 2 resources are required, one for each PS.
Remove the timestamp token, as the ops are called once.
PiperOrigin-RevId: 212533735
`tf.data.Dataset.interleave`.
Unlike `tf.contrib.data.parallel_interleave`, whose parallelism is tied to the `cycle_length` argument, the newly introduced `num_parallel_calls` argument of `tf.data.Dataset.interleave` is decoupled from the `cycle_length` argument and identifies the degree of parallelism to use for fetching output elements.
PiperOrigin-RevId: 211886816
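A sequential pure-Python sketch of the interleave pattern driven by `cycle_length` (a block length of 1 is assumed; `num_parallel_calls`, which only controls how many output elements are fetched in parallel, is not modeled):

```python
def interleave(dataset, map_func, cycle_length):
    """Cycle through `cycle_length` iterators produced by `map_func`,
    taking one element from each in turn."""
    inputs = iter(dataset)
    active = []
    results = []
    while True:
        # Refill the cycle with fresh iterators from the input dataset.
        while len(active) < cycle_length:
            try:
                active.append(iter(map_func(next(inputs))))
            except StopIteration:
                break
        if not active:
            break
        next_active = []
        for it in active:
            try:
                results.append(next(it))
                next_active.append(it)  # keep iterators that still have data
            except StopIteration:
                pass
        active = next_active
    return results
```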
the regex pattern are fixed.
This allows the Op to perform the expensive regex compilation once upon creation instead of with each call to compute.
RELNOTES: Performance improvements for regex full match operations.
PiperOrigin-RevId: 211835278
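Hoisting the compilation into the op's constructor looks like this in a pure-Python sketch (class and method names are illustrative, not the TF kernel):

```python
import re

class StaticRegexFullMatch:
    """When the pattern is fixed, compile it once at construction and
    reuse it on every call, instead of recompiling per invocation."""

    def __init__(self, pattern):
        self._compiled = re.compile(pattern)  # expensive work done once

    def compute(self, inputs):
        return [self._compiled.fullmatch(s) is not None for s in inputs]
```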
RELNOTES: n/a
PiperOrigin-RevId: 211798892
RELNOTES: n/a
PiperOrigin-RevId: 211798876
Hopefully this makes the formatting of the end of https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/strided-slice not look messed up.
PiperOrigin-RevId: 211132208
PiperOrigin-RevId: 211082479
This closes one API hole between TensorList and TensorArray.
PiperOrigin-RevId: 210932049
`features` and `feature-values` count statistics.
PiperOrigin-RevId: 210828171
validates the true shape of the tensor at runtime.
PiperOrigin-RevId: 210570878
PiperOrigin-RevId: 210392464
python function.
PiperOrigin-RevId: 210180168
which will replace dataset.map(parsing_ops.parse_example(..)).
PiperOrigin-RevId: 209836033
PiperOrigin-RevId: 209627830
This operation computes:
`ref[i_1, ..., i_n, indices[i_1, ..., i_n, j]] = updates[i_1, ..., i_n, j]`
That is, it assumes that `ref`, `indices` and `updates` have a series of leading dimensions that are the same for all of them, and the updates are performed on the last dimension of indices.
PiperOrigin-RevId: 209566652
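A sketch of that computation over nested Python lists for the rank-2 case (the helper name is illustrative):

```python
def batch_scatter_update(ref, indices, updates):
    """In-place scatter on the last dimension:
        ref[i, indices[i, j]] = updates[i, j]
    for each shared leading index i.
    """
    for i, (idx_row, upd_row) in enumerate(zip(indices, updates)):
        for j, col in enumerate(idx_row):
            ref[i][col] = upd_row[j]
    return ref
```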