Commit messages
* not found in the namespace in Python 3.
  PiperOrigin-RevId: 213879813
* Minor cleanup by moving the helper function ShapesEqual to GraphProperties and adding unit tests for it.
  PiperOrigin-RevId: 213876779
* PiperOrigin-RevId: 213875284
* PiperOrigin-RevId: 213873471
* 1. In ParallelMapIterator, do not call `cond_var_.notify_all()` without holding the associated mutex. In some cases, the iterator may have been deleted between releasing the lock and notifying the condition variable, which leads to a use-after-free. This change applies this style to all uses of condition variables in tensorflow/core/kernels/data/.
  2. In CapturedFunction::RunAsync(), do not use `shared_ptr` to manage the lifetime of objects that (potentially) borrow from runtime objects. The present code runs the destructor after the `done()` callback is called, but the `done()` callback may be the last action in a session, and thus trigger destruction of those borrowed objects. In that case, the `shared_ptr` destructor may use the borrowed objects after they are freed.
  PiperOrigin-RevId: 213872829
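The first fix above describes a general locking discipline: notify a condition variable only while holding its mutex, so no other thread can observe the state change and tear the object down between the unlock and the notify. A minimal Python sketch of the safe pattern (the `BoundedBuffer` class and its names are hypothetical illustrations, not TensorFlow code):

```python
import threading

class BoundedBuffer:
    """Hypothetical bounded buffer illustrating the discipline described
    above: wait, mutate, and notify all happen while the condition
    variable's lock is held, so no waiter can race with teardown."""

    def __init__(self, capacity=2):
        self._cond = threading.Condition()  # condition variable owns its lock
        self._items = []
        self._capacity = capacity

    def put(self, item):
        with self._cond:                    # lock held for wait, mutate, notify
            while len(self._items) >= self._capacity:
                self._cond.wait()
            self._items.append(item)
            self._cond.notify_all()         # safe: still inside the lock

    def get(self):
        with self._cond:
            while not self._items:
                self._cond.wait()
            item = self._items.pop(0)
            self._cond.notify_all()
            return item
```

Python even enforces the rule (`notify_all()` raises `RuntimeError` if the lock is not held); in C++ the equivalent is to call `notify_all()` before the `mutex_lock` goes out of scope rather than after.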
* detection models.
  As part of this CL, we use the Keras mobilenet_v2 application's keyword argument layer injection API to allow the generated network to support the object detection hyperparameters.
  PiperOrigin-RevId: 213872175
* PiperOrigin-RevId: 213872127
* PiperOrigin-RevId: 213867606
* PiperOrigin-RevId: 213866466
* PiperOrigin-RevId: 213863392
* https://github.com/tensorflow/community/pull/13
  PiperOrigin-RevId: 213862844
* problematic to use in eager because of the circular references it creates.
  PiperOrigin-RevId: 213862402
* refactoring it, adding several new fields and an EmbeddingOutputLayout message to provide experimental support for controlling the embedding output.
  PiperOrigin-RevId: 213849572
* number of filtered elements to monitoring counter.
  PiperOrigin-RevId: 213846793
* With the exception of StrCat, all of these are using absl already; this change just removes one layer of indirection.
  PiperOrigin-RevId: 213846036
* PiperOrigin-RevId: 213844688
* optimization
  PiperOrigin-RevId: 213840320
* PiperOrigin-RevId: 213836802
* PiperOrigin-RevId: 213829360
* PiperOrigin-RevId: 213801006
* These have the same behavior as unquantized types, so we can just pass them through to XLA (which converts them to unquantized types). They're supposed to be used with special ops, none of which are currently implemented by XLA. Casting (without quantization) and basic math work fine, though. These do not have a corresponding numpy type, so only tests using TF types will see them.
  PiperOrigin-RevId: 213781650
* PiperOrigin-RevId: 213773990
* PiperOrigin-RevId: 213771631
* PiperOrigin-RevId: 213770000
* PiperOrigin-RevId: 213764810
* The only TensorFlow op that uses XlaSort is nn.top_k, so we add a test case using nn.top_k.
  PiperOrigin-RevId: 213763591
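For reference, the semantics that `nn.top_k` provides can be sketched by lowering it onto a plain sort, mirroring how top-k maps onto a sort primitive such as XlaSort. This is an illustrative sketch for a 1-D input, not the TensorFlow kernel:

```python
def top_k(values, k):
    """Return the k largest values (descending) and their original
    indices. The stable sort breaks ties in favor of the lower index,
    matching the usual top_k convention."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    idx = order[:k]
    return [values[i] for i in idx], idx
```

A real lowering onto a sort primitive works the same way: sort (value, index) pairs by value, then slice off the first k.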
* This parameter has been added to HLO to support depthwise convolution.
  PiperOrigin-RevId: 213761790
* PiperOrigin-RevId: 213753728
* PiperOrigin-RevId: 213749129
* PiperOrigin-RevId: 213737482
* It's desirable to run int64 compute on GPU. Rolling back the following CL.
  *** Original change description ***
  Register a new Sum op for T:int64 and Tidx:int32
  END_PUBLIC
  Automated rollback of commit a9a5929d06e5eb4dd38bef63d56c4e338bbd38a2
  PiperOrigin-RevId: 213736058
* tensors in Graph mode defun.
  This allows inferring the shape of values popped from TensorLists inside defuns.
  Remove "Resource" from {Set|Get}ResourceHandleShapeAndType since the same functions are re-usable for variants.
  Eager mode fix coming in a future changelist.
  PiperOrigin-RevId: 213735462
* PiperOrigin-RevId: 213730668
* PiperOrigin-RevId: 213729979
* PiperOrigin-RevId: 213729750
* TF_FORCE_GPU_ALLOW_GROWTH environment variable.
  PiperOrigin-RevId: 213728460
* PiperOrigin-RevId: 213726710
* VerifiedHloModule is derived from HloModule and verifies itself on destruction. This is designed to be used in HloVerifiedTestBase. It replaces the current mechanism, which verifies HloModules in the TearDown method. The VerifiedHloModule approach is cleaner (less state on the test object) and more capable, because these verified HLO modules can be passed to methods which require taking ownership of the module (e.g., HloTestBase::Execute).
  This change required some changes to the parser which enable constructing the parsed HloModule into an already allocated HloModule. Some trivial changes to HloModule are required as well.
  PiperOrigin-RevId: 213718126
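The verify-on-destruction idea can be sketched language-neutrally as an object that checks its own invariants when its scope closes. This Python context-manager sketch is only an analogy (the real VerifiedHloModule is a C++ class whose destructor performs HLO verification):

```python
class VerifiedModule:
    """Hypothetical analogy for the pattern described above: the module
    verifies itself on teardown, so the test base class needs no
    separate TearDown verification step."""

    def __init__(self):
        self.instructions = []

    def add(self, name):
        self.instructions.append(name)

    def _verify(self):
        # Invariant for the sketch: every instruction has a nonempty name.
        return all(isinstance(n, str) and n for n in self.instructions)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Verify on normal scope exit, like a destructor-time check.
        if exc_type is None:
            assert self._verify(), "module failed verification on teardown"
```

The benefit mirrored here is the one the commit names: because verification travels with the object rather than with the test fixture, the object can be handed off to ownership-taking helpers and is still checked when it finally dies.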
* PiperOrigin-RevId: 213718019
* PiperOrigin-RevId: 213716034
* standard python `print` method, and deprecates the old `tf.Print` operator (to be removed in v2.0).
  It follows the design doc specified in https://github.com/tensorflow/community/pull/14 and additionally incorporates the community feedback and design review decisions.
  This CL adds two new internal graph operators: a StringFormat operator that formats a template string with a list of input tensors to insert into the string and outputs a string scalar containing the result, and a PrintV2 operator that prints a string scalar to a specified output stream or logging level.
  The formatting op is exposed at `tf.strings.Format`. A new python method is exposed at `tf.print` that takes a list of inputs that may be nested structures and may contain tensors, formats them nicely using the formatting op, and returns a PrintV2 operator that prints them. In Eager mode and inside defuns this PrintV2 operator will automatically be executed, but in graph mode it will need to be either added to `sess.run` or used as a control dependency for other operators being executed.
  As compared to the previous print function, the new print function:
  - Has an API that more closely aligns with the standard python3 print
  - Supports changing the print logging level/output stream
  - Allows printing arbitrary (optionally nested) data structures, as opposed to just flat lists of tensors
  - Supports printing sparse tensors
  - Changes the printed tensor format to show a more meaningful summary (recursively print the first and last elements of each tensor dimension, instead of just the first few elements of the tensor regardless of dimension)
  PiperOrigin-RevId: 213709924
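The summary format described in the last bullet, keeping the first and last elements of each dimension, can be sketched as a small recursive helper. This is a hypothetical illustration of the format, not TensorFlow's implementation; `edge` (the number of elements kept at each end) is an assumed parameter name:

```python
def summarize(seq, edge=2):
    """Recursively keep the first and last `edge` elements of each
    dimension of a nested list, inserting '...' where elements are
    omitted, so large tensors print as a compact summary."""
    if not isinstance(seq, list):
        return str(seq)
    inner = [summarize(x, edge) for x in seq]
    if len(inner) > 2 * edge:
        inner = inner[:edge] + ["..."] + inner[-edge:]
    return "[" + " ".join(inner) + "]"
```

For example, a length-6 vector summarizes to `[1 2 ... 5 6]` rather than truncating after the first few elements regardless of shape.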
* properly set.
  PiperOrigin-RevId: 213706101
* make sure we run fit for the right number of steps.
  PiperOrigin-RevId: 213706042
* This is done by making the TapeTensor a template rather than a concrete struct.
  PiperOrigin-RevId: 213700425
* PiperOrigin-RevId: 213698663
* PiperOrigin-RevId: 213693027
* large number of debugging outputs in the INFO log that look like:
  I0917 16:20:11.073992 9191 meta_optimizer.cc:334] Starting optimization for grappler item: tf_graph
  I0917 16:20:11.079458 9191 meta_optimizer.cc:334] Starting optimization for grappler item: tf_graph
  I0917 16:20:11.084827 12447 meta_optimizer.cc:334] Starting optimization for grappler item: tf_graph
  I0917 16:20:11.089359 12447 meta_optimizer.cc:334] Starting optimization for grappler item: tf_graph
  After this change those lines will simply no longer appear.
  RELNOTES: n/a
  PiperOrigin-RevId: 213690759
* vectorize a MapDefun function. Also implements conversion for two ops: Cast and Unpack.
  PiperOrigin-RevId: 213686720
* PiperOrigin-RevId: 213684048
* PiperOrigin-RevId: 213681549