path: root/tensorflow/core/kernels/serialize_sparse_op.cc
Commit message · Author · Date
* Use the safe sparse tensor API that returns errors rather than crashing in
  all TensorFlow core kernels. (A. Unique TensorFlower, 2018-07-16)
  PiperOrigin-RevId: 204782675
* Move `DeserializeSparseOp<string>` into its own file, mirroring the Variant
  version. (Derek Murray, 2018-06-29)
  PiperOrigin-RevId: 202683951
* [tf.data] Optimize the implementation of DeserializeSparseOp<Variant>.
  (Derek Murray, 2018-06-29)
  The most expensive part of this kernel is the index construction. The
  optimized implementation builds the new index matrix at most once, rather
  than performing up to 3 passes (adding a leading dimension,
  `SparseTensor::Concat()` and `Reshape()`), and adds a specialized codepath
  for the common case of stacking together rank-1 SparseTensors.
  PiperOrigin-RevId: 202669432
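  To make the single-pass index construction concrete, here is a minimal
  illustrative sketch in NumPy (the kernel itself is C++, and the function
  name below is invented): stacking n rank-1 SparseTensors amounts to writing
  each tensor's batch position into a new leading index column and copying its
  original indices into the second column, all in one pass.

      import numpy as np

      # Sketch only: stack rank-1 sparse vectors (indices_i, values_i, len_i)
      # into one rank-2 SparseTensor by building the index matrix once.
      def stack_rank1_sparse(indices_list, values_list, lengths):
          total_nnz = sum(len(ix) for ix in indices_list)
          new_indices = np.empty((total_nnz, 2), dtype=np.int64)
          row = 0
          for batch_pos, ix in enumerate(indices_list):
              n = len(ix)
              new_indices[row:row + n, 0] = batch_pos  # new leading dimension
              new_indices[row:row + n, 1] = ix         # original rank-1 indices
              row += n
          new_values = np.concatenate(values_list)
          new_shape = np.array([len(indices_list), max(lengths)],
                               dtype=np.int64)
          return new_indices, new_values, new_shape

      # Two sparse vectors of lengths 4 and 3:
      ix, vals, shape = stack_rank1_sparse(
          [np.array([0, 2]), np.array([1])],
          [np.array([1.0, 2.0]), np.array([3.0])],
          [4, 3])
      print(ix.tolist())     # [[0, 0], [0, 2], [1, 1]]
      print(shape.tolist())  # [2, 4]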
* Actually enable the n=1 special case in the DeserializeSparse op.
  (Derek Murray, 2018-06-27)
  The optimized case was previously dead because of an off-by-one error (mea
  culpa).
  PiperOrigin-RevId: 202312987
* [tf.data] Fix deserialization of scalar sparse tensors.
  (Derek Murray, 2018-06-27)
  Previously, attempting to deserialize a tensor containing one or more scalar
  SparseTensor objects would trigger an assertion failure, either in Eigen
  (because it attempted to manipulate an empty Eigen tensor) or in the
  SparseTensor `Reshape()` code (which assumed all ranks were >= 1).
  PiperOrigin-RevId: 202303856
* Enable the n=1 special case in the DeserializeSparse op.
  (Derek Murray, 2018-04-18)
  The optimized case was previously dead because of two off-by-one errors (mea
  culpa).
  PiperOrigin-RevId: 193314065
* Mark the `SerializeSparseOp<Variant>` kernel as inexpensive.
  (Derek Murray, 2018-02-21)
  Since this op only performs a constant amount of work, and typically
  executes in a few microseconds, it should be profitable to execute this op
  inline, rather than scheduling it on a remote thread.
  PiperOrigin-RevId: 186522885
* TF_CALL_ALL_TYPES should include variant. (Alexandre Passos, 2018-02-02)
  PiperOrigin-RevId: 184347081
* Add an n=1 special case to the DeserializeSparse op.
  (Derek Murray, 2017-12-08)
  This avoids excessive copying in the common case where the sparse-format
  output of a `tf.data.Dataset` pipeline, or the input to a `Dataset.map()` or
  `Dataset.filter()` transformation, contains a single `tf.SparseTensor`.

  As I was refactoring to add the special case, I ended up removing the
  template parameter for the output values' tensor DataType, and switching the
  sole remaining code that depends on it to use a `switch` on the `"dtype"`
  attr. This will reduce the binary size for this op.
  PiperOrigin-RevId: 178404305
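  A hedged illustration of the common case this commit optimizes, a tf.data
  element consisting of a single SparseTensor, written against the modern
  TF 2.x API: in the TF 1.x pipelines this commit targets, each such element
  was serialized and then rebuilt by the DeserializeSparse kernel, making the
  n=1 path the hot path.

      import tensorflow as tf

      # One SparseTensor per dataset element: the n=1 case.
      sp = tf.sparse.SparseTensor(indices=[[0], [3]], values=[1.0, 2.0],
                                  dense_shape=[5])
      ds = tf.data.Dataset.from_tensors(sp).repeat(4)
      ds = ds.map(lambda s: tf.SparseTensor(s.indices, s.values * 2.0,
                                            s.dense_shape))
      for element in ds:
          print(tf.sparse.to_dense(element).numpy())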
* Adding variant-based serialization and deserialization for sparse tensors.
  (Jiri Simsa, 2017-12-05)
  PiperOrigin-RevId: 177971801
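  A minimal usage sketch, assuming these kernels remain reachable through
  `tf.io.serialize_sparse` (with `out_type=tf.variant`) and the generated
  `tf.raw_ops.DeserializeSparse` binding in current TensorFlow; in practice
  tf.data invokes them internally rather than through user code:

      import tensorflow as tf

      sp = tf.sparse.SparseTensor(indices=[[0, 1], [2, 3]],
                                  values=[10.0, 20.0],
                                  dense_shape=[4, 5])
      # Variant-based serialization: a length-3 vector of variants
      # (indices, values, dense_shape).
      serialized = tf.io.serialize_sparse(sp, out_type=tf.variant)
      indices, values, shape = tf.raw_ops.DeserializeSparse(
          serialized_sparse=serialized, dtype=tf.float32)
      restored = tf.sparse.SparseTensor(indices, values, shape)
      print(tf.sparse.to_dense(restored).numpy())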
* Re-using (the more general) DeserializeSparse kernel to implement
  DeserializeSparseMany and improving documentation. (Jiri Simsa, 2017-11-28)
  PiperOrigin-RevId: 177241063
* [tf.data] Allow the DeserializeSparse op to accept inconsistent dense
  shapes. (Derek Murray, 2017-11-22)
  This changes DeserializeSparse to match the behavior of
  DeserializeSparseMany and TakeManySparseFromTensorsMap, and thus makes
  `Dataset.batch()` on sparse tensors match the existing behavior of
  `tf.train.batch()` and family.

  The rationale for this change is that the source of many `tf.SparseTensor`
  objects is `tf.parse[_single]_example()`, and that operation does not try to
  ensure that consecutive `SparseTensor` objects parsed from the same feature
  specification have the same `dense_shape`. As a result, the behavior of
  existing ops that batch `SparseTensor` objects has been to silently pad
  those objects to the bounding dense_shape, by taking the maximum over each
  dimension size.

  While this does reduce our ability to make consistency checks in the
  `SparseTensor`-handling code, pragmatically we never get consistently shaped
  `SparseTensor`s in real programs, so this seems like a reasonable path for
  usability.
  PiperOrigin-RevId: 176697720
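  A hedged sketch of the padding rule described above, using the TF 2.x API
  (the helper `make_sparse` is invented for illustration): two SparseTensors
  with dense shapes [1, 4] and [2, 3] batch into a tensor whose dense_shape is
  a leading batch dimension followed by the elementwise maximum, [2, 4].

      import tensorflow as tf

      def make_sparse(i):
          i = tf.cast(i, tf.int64)
          # dense_shape is [1, 4] for i=0 and [2, 3] for i=1.
          return tf.sparse.SparseTensor(indices=[[0, 0]], values=[1.0],
                                        dense_shape=[i + 1, 4 - i])

      ds = tf.data.Dataset.range(2).map(make_sparse).batch(2)
      batched = next(iter(ds))
      print(batched.dense_shape.numpy())  # [2, 2, 4]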
* Adding support for (nested) batching of sparse tensors for tf.data.
  (Jiri Simsa, 2017-11-20)
  PiperOrigin-RevId: 176444931
* Supporting sparse tensors as inputs and outputs for user-defined functions
  passed into tf.data transformations. (Jiri Simsa, 2017-11-13)
  PiperOrigin-RevId: 175559045
* Prepare to remove a bunch of proto.h includes from tensorflow/core headers.
  (Geoffrey Irving, 2017-06-29)
  The goal is to make kernels mostly independent of proto headers, which will
  let us lock down our .so imports. This CL does not remove any actual
  headers, but changes a bunch of files so that header removal is possible in
  a followup CL. It also marks the headers that will be removed with
  // TODO(b/62899350): Remove
  RELNOTES: n/a
  PiperOrigin-RevId: 160552878
* Update internal SparseTensor C++ implementation to use a vector of int64.
  (Eugene Brevdo, 2017-06-09)
  The current behavior, which relies on a TensorShape to store the dense
  shape, can lead to CHECK failures if a SparseTensor is created with a
  dense_shape that is too large.
  PiperOrigin-RevId: 158521473
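  A hedged illustration of why this matters: a SparseTensor may describe a
  logical index space far larger than any dense tensor that could actually be
  allocated, so the shape metadata must tolerate dimensions whose product
  would overflow a dense-shape element count.

      import tensorflow as tf

      # Each dimension fits in int64, but their product overflows; storing
      # the dense shape as a vector of int64 keeps this representable.
      huge = tf.sparse.SparseTensor(indices=[[0, 0], [7, 2**40]],
                                    values=[1.0, 2.0],
                                    dense_shape=[2**50, 2**50])
      print(huge.dense_shape.numpy())  # [1125899906842624, 1125899906842624]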
* Preallocate vector storage when the ultimate vector size is known in
  advance. (A. Unique TensorFlower, 2017-06-01)
  PiperOrigin-RevId: 157724431
* Update copyright for 3p/tf/core. (A. Unique TensorFlower, 2016-06-02)
  Change: 123900938
* Fix an error message in tf.sparse_to_dense to include the possibility that
  indices are invalid because they are out of bounds.
  (A. Unique TensorFlower, 2016-02-25)
  Change: 115522264
* Global search & replace to move to the new location for tensorflow/core/
  files and build targets. (Josh Levenberg, 2016-01-26)
  Change: 113075177
* Running our linter on a lot of files. (Vijay Vasudevan, 2016-01-24)
  Change: 112920860
* Move #include <vector> out of port.h to users of std::vector<>.
  (Josh Levenberg, 2016-01-21)
  After this we can replace port.h with types.h.
  Change: 112727463
* TensorFlow: upstream changes to git. (Vijay Vasudevan, 2015-12-08)
  Change 109695551: Update FAQ.
  Change 109694725: Add a gradient for the resize_bilinear op.
  Change 109694505: Don't mention the variables module in docs;
    variables.Variable should be tf.Variable.
  Change 109658848: Add an option to create a new thread pool for each
    session.
  Change 109640570: Take a snapshot of stream-executor, and expose an
    interface for scratch space allocation in the interface.
  Change 109638559: Let image_summary accept uint8 input. This allows users
    to do their own normalization/scaling if the default (very weird)
    behavior of image_summary is undesired. This required a slight tweak to
    fake_input.cc to make polymorphically typed fake inputs infer their type
    attr when it is not set but has a default. Unfortunately, adding a second
    valid type to image_summary *disables* automatic implicit conversion from
    np.float64 to tf.float32, so this change is slightly backwards
    incompatible.
  Change 109636969: Add serialization operations for SparseTensor.
  Change 109636644: Update generated Op docs.
  Change 109634899: Add a markdown file for producing release notes for our
    releases; seed it with 0.5.0 and a boring but accurate description.
  Change 109634502: Let histogram_summary take any realnumbertype. It used to
    take only floats; now it understands ints.
  Change 109634434: Update locations where we mention Python 3 support to
    reflect current truth.
  Change 109632108: Move HSV <-> RGB conversions, grayscale conversions, and
    adjust_* ops back to TensorFlow:
    - make a GPU-capable version of RGBToHSV and HSVToRGB, allowing only
      float input/output
    - change docs to reflect new size constraints
    - change the HSV format to be [0, 1] for all components
    - add automatic dtype conversion for all adjust_* and grayscale
      conversion ops
    - fix up docs
  Change 109631077: Improve optimizer exceptions:
    1. grads_and_vars is now a tuple, so it must be wrapped when passed to
       format.
    2. Use '%r' instead of '%s' for dtype formatting.
  Base CL: 109697989