| Commit message | Author | Age |

in all TensorFlow core kernels.
PiperOrigin-RevId: 204782675
|
version.
PiperOrigin-RevId: 202683951
|
The most expensive part of this kernel is the index construction. The optimized implementation builds the new index matrix at most once, rather than performing up to 3 passes (adding a leading dimension, `SparseTensor::Concat()` and `Reshape()`), and adds a specialized codepath for the common case of stacking together rank-1 SparseTensors.
PiperOrigin-RevId: 202669432
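The single-pass index construction described above can be sketched in plain Python (a hypothetical illustration, not the actual TensorFlow C++ kernel): each rank-1 input contributes its row number as a new leading index coordinate, so the rank-2 stacked result is built directly, without the expand/`Concat()`/`Reshape()` passes.

```python
def stack_rank1_sparse(tensors):
    """Stack rank-1 sparse tensors, given as (indices, values, length)
    triples, into one rank-2 sparse tensor in a single pass.

    Returns (indices, values, dense_shape); each input becomes one row.
    """
    out_indices, out_values, max_len = [], [], 0
    for row, (indices, values, length) in enumerate(tensors):
        max_len = max(max_len, length)
        for col, val in zip(indices, values):
            # Single pass: prepend the row number to each index,
            # instead of expand -> Concat() -> Reshape().
            out_indices.append([row, col])
            out_values.append(val)
    return out_indices, out_values, [len(tensors), max_len]
```

For example, stacking a length-3 input with indices [0, 2] on top of a length-2 input with index [1] yields indices [[0, 0], [0, 2], [1, 1]] and dense_shape [2, 3].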
|
The optimized case was previously dead because of an off-by-one error (mea culpa).
PiperOrigin-RevId: 202312987
|
Previously, attempting to deserialize a tensor containing one or more SparseTensor objects would trigger an assertion failure, either in Eigen (because it attempted to manipulate an empty Eigen tensor) or the SparseTensor Reshape() code (which assumed all ranks were >= 1).
PiperOrigin-RevId: 202303856
|
The optimized case was previously dead because of two off-by-one errors (mea culpa).
PiperOrigin-RevId: 193314065
|
Since this op only performs a constant amount of work, and typically
executes in a few microseconds, it should be profitable to execute
this op inline, rather than scheduling it on a remote thread.
PiperOrigin-RevId: 186522885
|
PiperOrigin-RevId: 184347081
|
This avoids excessive copying in the common case where the
sparse-format output of a `tf.data.Dataset` pipeline or the input to a
`Dataset.map()` or `Dataset.filter()` transformation contains a single
`tf.SparseTensor`.
As I was refactoring to add the special case, I ended up removing the
template parameter for the output values' tensor DataType, and
switching the sole remaining code that depends on it to use a `switch`
on the `"dtype"` attr. This will reduce the binary size for this op.
PiperOrigin-RevId: 178404305
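The template-to-switch refactoring mentioned above can be illustrated in Python (purely a sketch; the decoder names and wire format here are made up, not the TensorFlow kernel API): instead of compiling one code path per value type via a template parameter, a single code path dispatches on a runtime dtype string.

```python
import struct

def decode_values(dtype, raw):
    """Decode a little-endian byte string into a list of numbers,
    dispatching on a runtime dtype string rather than generating a
    separate instantiation per type (illustrative only)."""
    decoders = {
        "float32": lambda b: list(struct.unpack("<%df" % (len(b) // 4), b)),
        "int32": lambda b: list(struct.unpack("<%di" % (len(b) // 4), b)),
    }
    if dtype not in decoders:
        raise ValueError("unsupported dtype: %r" % dtype)
    return decoders[dtype](raw)
```

Only one copy of the surrounding logic is emitted, which is the binary-size saving the commit describes.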
|
PiperOrigin-RevId: 177971801
|
DeserializeSparseMany and improving documentation.
PiperOrigin-RevId: 177241063
|
This changes DeserializeSparse to match the behavior of DeserializeSparseMany
and TakeManySparseFromTensorsMap, and thus makes `Dataset.batch()` on sparse
tensors match the existing behavior of `tf.train.batch()` and family.
The rationale for this change is that the source of many `tf.SparseTensor`
objects is `tf.parse[_single]_example()`, and that operation does not try
to ensure that consecutive `SparseTensor` objects parsed from the same
feature specification have the same `dense_shape`. As a result, the behavior
of existing ops that batch `SparseTensor` objects has been to silently pad
those objects to the bounding dense_shape, by taking the maximum over each
dimension size. While this does reduce our ability to make consistency checks
in the `SparseTensor`-handling code, pragmatically we never get consistently
shaped `SparseTensor`s in real programs, so this seems like a reasonable path
for usability.
PiperOrigin-RevId: 176697720
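The "bounding dense_shape" rule described above amounts to an elementwise maximum over the incoming shapes; a minimal sketch (illustrative, not the batching kernel itself):

```python
def bounding_dense_shape(shapes):
    """Elementwise maximum over same-rank dense_shapes: the shape to
    which batched SparseTensors are silently padded."""
    return [max(dims) for dims in zip(*shapes)]
```

For example, batching two sparse matrices with dense_shapes [2, 3] and [4, 1] pads both to [4, 3].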
|
PiperOrigin-RevId: 176444931
|
passed into tf.data transformations.
PiperOrigin-RevId: 175559045
|
The goal is to make kernels mostly independent of proto headers, which will let
us lock down our .so imports. This CL does not remove any actual headers, but
changes a bunch of files so that header removal is possible in a followup CL.
It also marks the headers that will be removed with
// TODO(b/62899350): Remove
RELNOTES: n/a
PiperOrigin-RevId: 160552878
|
The current behavior, which relies on a TensorShape to store the dense shape,
can lead to CHECK failures if a SparseTensor is created with a dense_shape that is
too large.
PiperOrigin-RevId: 158521473
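The hazard is that computing the element count of an oversized shape can overflow a signed 64-bit integer and trip a CHECK rather than returning an error. A hedged sketch of the overflow-safe validation in spirit (illustrative, not the actual TensorShape fix):

```python
INT64_MAX = 2**63 - 1

def checked_num_elements(dims):
    """Multiply dimension sizes, raising instead of overflowing when
    the element count would not fit in a signed 64-bit integer."""
    n = 1
    for d in dims:
        if d < 0:
            raise ValueError("negative dimension: %d" % d)
        if d != 0 and n > INT64_MAX // d:
            raise OverflowError("dense_shape is too large")
        n *= d
    return n
```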
|
PiperOrigin-RevId: 157724431
|
Change: 123900938
|
indices are invalid because they are out of bounds.
Change: 115522264
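A sketch of the kind of bounds validation implied above (names are illustrative; the real kernels report such failures through the op's error-status mechanism rather than exceptions):

```python
def validate_sparse_indices(indices, dense_shape):
    """Reject any index that falls outside dense_shape, instead of
    letting an out-of-bounds access fail later."""
    for pos, index in enumerate(indices):
        if len(index) != len(dense_shape):
            raise ValueError("index %d has wrong rank" % pos)
        for coord, dim in zip(index, dense_shape):
            if not 0 <= coord < dim:
                raise ValueError(
                    "index %d (%r) is out of bounds for shape %r"
                    % (pos, index, dense_shape))
```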
|
tensorflow/core/ files and build targets.
Change: 113075177
|
Change: 112920860
|
After this we can replace port.h with types.h.
Change: 112727463
|
|
Change 109695551
Update FAQ
Change 109694725
Add a gradient for resize_bilinear op.
Change 109694505
Don't mention variables module in docs
variables.Variable should be tf.Variable.
Change 109658848
Adding an option to create a new thread-pool for each session.
Change 109640570
Take the snapshot of stream-executor.
+ Expose an interface for scratch space allocation in the interface.
Change 109638559
Let image_summary accept uint8 input
This allows users to do their own normalization / scaling if the default
(very weird) behavior of image_summary is undesired.
This required a slight tweak to fake_input.cc to make polymorphically typed
fake inputs infer if their type attr is not set but has a default.
Unfortunately, adding a second valid type to image_summary *disables* automatic
implicit conversion from np.float64 to tf.float32, so this change is slightly
backwards incompatible.
Change 109636969
Add serialization operations for SparseTensor.
Change 109636644
Update generated Op docs.
Change 109634899
TensorFlow: add a markdown file for producing release notes for our
releases. Seed with 0.5.0 with a boring but accurate description.
Change 109634502
Let histogram_summary take any realnumbertype
It used to take only floats; now it understands ints.
Change 109634434
TensorFlow: update locations where we mention python 3 support, update
them to current truth.
Change 109632108
Move HSV <> RGB conversions, grayscale conversions, and adjust_* ops back to tensorflow
- make GPU-capable version of RGBToHSV and HSVToRGB, allows only float input/output
- change docs to reflect new size constraints
- change HSV format to be [0,1] for all components
- add automatic dtype conversion for all adjust_* and grayscale conversion ops
- fix up docs
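The [0, 1] HSV convention adopted above matches Python's standard colorsys module, which can serve as a reference point (this is the stdlib, not the TensorFlow op):

```python
import colorsys

# Pure red: hue 0, full saturation, full value, all components in [0, 1].
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
```

The result is (0.0, 1.0, 1.0), and colorsys.hsv_to_rgb round-trips it back to (1.0, 0.0, 0.0).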
Change 109631077
Improve optimizer exceptions
1. grads_and_vars is now a tuple, so must be wrapped when passed to format.
2. Use '%r' instead of '%s' for dtype formatting
Base CL: 109697989