path: root/tensorflow/stream_executor/stream.h
* Implement DoHostCallbackWithStatus to allow callbacks to return a status (A. Unique TensorFlower, 2018-08-07)
  PiperOrigin-RevId: 207714420
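  Returning a status from a host callback lets a host-side failure propagate into the stream's error state
  instead of being dropped. A minimal sketch of the idea using stand-in types (the real Stream API and its
  Status type are not reproduced here; the names below are illustrative only):

    #include <functional>
    #include <string>

    // Stand-ins for the real stream_executor types, just to show the shape of
    // a status-returning host callback.
    struct Status {
      bool ok() const { return message.empty(); }
      std::string message;
    };

    struct MiniStream {
      // A non-OK status from the callback is recorded as a stream error.
      void DoHostCallbackWithStatus(const std::function<Status()>& fn) {
        if (!fn().ok()) ok_ = false;
      }
      bool ok_ = true;
    };

    // Usage: host-side work can now report failure back to the stream.
    inline void Example(MiniStream* stream) {
      stream->DoHostCallbackWithStatus([]() -> Status {
        bool copied = true;  // placeholder for real host-side work
        return copied ? Status{} : Status{"host copy failed"};
      });
    }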
* Drop failed sub-streams during both Get and Return. (Todd Wang, 2018-08-03)
  The old code ensured that failed sub-streams would not be re-used, but had two flaws:
  1) It only checked for failed sub-streams during Return.
  2) It didn't actually remove the failed sub-streams from our state.
  The new code fixes these two flaws, and adds an extra test that explains why (1) is insufficient.
  PiperOrigin-RevId: 207333296
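  A sketch of the bookkeeping idea behind this fix, assuming a sub_streams_ vector of (stream, is-free) pairs
  and an ok() predicate (the actual member layout and locking are not shown):

    #include <algorithm>
    #include <memory>
    #include <utility>
    #include <vector>

    struct FakeStream {
      bool ok() const { return ok_; }
      bool ok_ = true;
    };

    // (stream, is_free): is_free marks a sub-stream that can be handed out again.
    using SubStream = std::pair<std::unique_ptr<FakeStream>, bool>;

    FakeStream* GetOrCreateSubStream(std::vector<SubStream>* sub_streams) {
      // Prune failed sub-streams before reusing anything (the "Get" half of the fix).
      sub_streams->erase(
          std::remove_if(sub_streams->begin(), sub_streams->end(),
                         [](const SubStream& s) { return !s.first->ok(); }),
          sub_streams->end());
      for (auto& s : *sub_streams) {
        if (s.second) {        // free and healthy: reuse it
          s.second = false;
          return s.first.get();
        }
      }
      sub_streams->emplace_back(std::make_unique<FakeStream>(), false);
      return sub_streams->back().first.get();
    }

    void ReturnSubStream(std::vector<SubStream>* sub_streams, FakeStream* stream) {
      for (auto it = sub_streams->begin(); it != sub_streams->end(); ++it) {
        if (it->first.get() != stream) continue;
        if (stream->ok()) {
          it->second = true;        // healthy: mark free for reuse
        } else {
          sub_streams->erase(it);   // failed: actually remove it (the "Return" half)
        }
        return;
      }
    }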
* [XLA:GPU] Use strided batched gemm instead of building pointer tables. (Benjamin Kramer, 2018-08-03)
  This is mostly a huge amount of plumbing just to call into the cublas functions. blasGemmStridedBatched has
  been available since CUDA 8.0. For autotuning we'd need cublasGemmStridedBatchedEx, which is new in CUDA 9.2,
  so I didn't wire that up yet.
  PiperOrigin-RevId: 207285707
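  For context, the strided-batched entry point describes each batch member by a fixed element stride, so no
  per-matrix pointer table has to be built and copied to the device. A rough sketch of the underlying cuBLAS
  call for float data (dimensions, strides and leading dimensions are illustrative; the plumbing in this commit
  wraps this kind of call behind StreamExecutor's blas interface rather than exposing it directly):

    #include <cublas_v2.h>

    // Computes C[i] = alpha * A[i] * B[i] + beta * C[i] for i in [0, batch),
    // with consecutive matrices laid out 'stride' elements apart.
    cublasStatus_t BatchedGemm(cublasHandle_t handle, int m, int n, int k,
                               const float* a, long long stride_a,
                               const float* b, long long stride_b,
                               float* c, long long stride_c, int batch) {
      const float alpha = 1.0f, beta = 0.0f;
      // Column-major convention, no transposes; leading dimensions assume densely
      // packed matrices.
      return cublasSgemmStridedBatched(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                                       m, n, k, &alpha,
                                       a, /*lda=*/m, stride_a,
                                       b, /*ldb=*/k, stride_b,
                                       &beta,
                                       c, /*ldc=*/m, stride_c, batch);
    }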
* Ensure failed sub-streams are not re-used. (Todd Wang, 2018-07-25)
  Streams have a monotonic state machine; if a stream encounters any error, it will remain in an error state
  forever. Without this change, a previously failed sub-stream will be put back on sub_streams_, only to cause
  the next usage of the sub-stream to trivially fail.
  PiperOrigin-RevId: 206112024
* [ROCm] Interface changes for pooling APIs in StreamExecutor (Wen-Heng (Jack) Chung, 2018-07-11)
  Due to the design of MIOpen, the DNN library on the ROCm platform, an instance of ScratchAllocator has to be
  passed into pooling routines. This commit addresses such interface changes and the implementation in the CUDA
  StreamExecutor.
* Fix Windows GPU Build (A. Unique TensorFlower, 2018-06-28)
  PiperOrigin-RevId: 202260254
* Reflow comments; NFC (Sanjoy Das, 2018-06-15)
  PiperOrigin-RevId: 200783258
* Merge changes from github. (Yifei Feng, 2018-05-24)
  Revert #18413. Too many internal test failures due to the name scope change caused by this change.
  Revert #18192. Cannot use re2::StringPiece internally. Need alternative for set call. Will pull and clean this
  up in a separate change.
  PiperOrigin-RevId: 197991247
* Small polishing changes in stream executor, no functional changes. (A. Unique TensorFlower, 2018-05-15)
  PiperOrigin-RevId: 196665609
* Use parenthesis-based construction instead of brace initialization (Smit Hinsu, 2018-05-09)
  Updates all the construction calls for Status, ScopedActivateContext and mutexes within stream_executor to
  follow the recommendation in https://abseil.io/tips/88.
  PiperOrigin-RevId: 196007931
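  Abseil Tip #88 recommends parentheses for ordinary constructor calls and reserves braces for initializer
  lists and aggregate initialization, because the two can mean very different things. A small illustration with
  a hypothetical type (not taken from stream_executor itself):

    #include <string>
    #include <vector>

    struct Options {
      int num_threads;
      std::string name;
    };

    void Example() {
      // Parentheses: "call this constructor with these arguments".
      std::string padding(3, '*');         // "***"
      std::vector<int> three_fives(3, 5);  // {5, 5, 5}

      // Braces: element lists and aggregate/field initialization.
      std::vector<int> two_values{3, 5};   // {3, 5} -- a very different vector!
      Options opts{/*num_threads=*/4, /*name=*/"worker"};
    }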
* Add variants of DoBlasGemmWithAlgorithm with alpha being on device. (A. Unique TensorFlower, 2018-04-24)
  This is in preparation for allowing XLA to fuse (A dot b) * alpha where alpha can be on device instead of
  just a constant.
  PiperOrigin-RevId: 194068597
* [StreamExecutor] Rename ::perftools::gputools -> ::stream_executor, part 1. (Justin Lebar, 2018-04-17)
  Step 1 of re-namespace'ing StreamExecutor into ::stream_executor.
  This moves everything inside of stream_executor/..., and leaves a namespace alias into ::perftools::gputools.
  The next steps will clean up users to use the new namespace.
  This is mostly a mechanical change, but it also includes a bunch of non-mechanical changes that ideally would
  be split out into separate patches. Unfortunately they all sort of need to be shoved in here for various
  reasons:
  - forward declarations need to be in the same namespace as the actual types, so we need to change all forward
    declarations of StreamExecutor types in this one patch.
  - Uses of these forward declarations need to be changed to the new namespace (or otherwise we need to add a
    namespace alias to the relevant header, but this is pretty ugly).
  - Various initialization code needs to live in StreamExecutor's "real" namespace, so all this needs to be
    changed.
  PiperOrigin-RevId: 193256128
* Support RNN profiling in StreamExecutor for CUDA GPUs. (James Qin, 2018-04-06)
  This change doesn't apply autotuning to TF cuDNN kernels yet; it only provides the lower-level support.
  PiperOrigin-RevId: 191919566
* [StreamExecutor] Remove ThenDoHostCallbackForTest -- it's identical to ThenDoHostCallback. (Justin Lebar, 2018-03-09)
  The reason this came about is: ThenDoHostCallback was once private, and ThenDoHostCallbackForTest was public.
  Then at some point ThenDoHostCallback became public, but the *ForTest one was never removed.
  PiperOrigin-RevId: 188459741
* StreamExecutor support for float64 convolutions and backprop. (Brian Patton, 2018-03-06)
  PiperOrigin-RevId: 188025477
* [StreamExecutor] Change "variance" to "inv_var" in BatchNormalizationBackward.Gravatar Justin Lebar2017-12-18
| | | | | | | | | | | | | | This parameter is not the variance of the data, but rather is 1/(sqrt(variance + epsilon). Neglecting epsilon, this is the inverse standard deviation. "inv_stddev" might be a better name, but "inv_var" is certainly better than plain "variance", and it matches nvidia's name for this parameter, which I think may override the desire for a more precise name. No functional change. PiperOrigin-RevId: 179352839
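  Concretely, the renamed parameter is the inverse standard deviation saved from the forward pass (a one-line
  sketch; epsilon is the usual batch-norm stabilizer):

    #include <cmath>

    // The batch-norm backward pass consumes 1/sqrt(variance + epsilon),
    // not the raw variance.
    float InvVar(float variance, float epsilon) {
      return 1.0f / std::sqrt(variance + epsilon);
    }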
* Remove Stream::BlockHostUntilDoneWithStatus; all callers use BlockHostUntilDone. (A. Unique TensorFlower, 2017-12-15)
  PiperOrigin-RevId: 179213341
* Stream::BlockHostUntilDone now returns Status rather than bool. (A. Unique TensorFlower, 2017-12-13)
  The now-deprecated Stream::BlockHostUntilDoneWithStatus remains, to facilitate a multi-CL renaming transition.
  Once all callers have been renamed to BlockHostUntilDone, *WithStatus will be removed.
  The StreamExecutor (private) method has also been renamed to BlockHostUntilDone. It's only used by Stream.
  The StreamExecutorInterface method will be renamed in a separate atomic CL. It's harder to perform that
  transition gradually, and we've already performed an atomic change previously, so we might as well fix it up
  in one shot.
  PiperOrigin-RevId: 178907807
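  A sketch of the call-site difference (the Status type and its members here are stand-ins, not the actual
  TensorFlow types):

    #include <string>

    struct Status {
      bool ok() const { return msg.empty(); }
      std::string msg;
    };

    struct StreamLike {
      // New-style API: the reason for a synchronization failure is preserved.
      Status BlockHostUntilDone() { return Status{}; }
    };

    Status RunAndWait(StreamLike* stream) {
      // Old style: if (!stream->BlockHostUntilDone()) { /* invent a generic error */ }
      // New style: forward the Status, keeping the original error message.
      Status s = stream->BlockHostUntilDone();
      if (!s.ok()) return s;
      return Status{};
    }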
* Add BlockHostUntilDoneWithStatus, which returns Status rather than bool. (A. Unique TensorFlower, 2017-12-06)
  Also fixed a deadlock in Stream::BlockHostUntilDone. The problem with the original code was that it grabbed
  mu_ before looping over substreams, and would call CheckError with mu_ still held. But CheckError will
  attempt to lock mu_ in the failure case, which would deadlock.
  PiperOrigin-RevId: 178191634
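  The deadlock being fixed is the classic "call back into a helper that re-acquires the lock you already hold"
  pattern. A distilled sketch (mu_ and CheckError follow the commit message; everything else is illustrative):

    #include <mutex>
    #include <vector>

    class StreamSketch {
     public:
      void BlockHostUntilDoneBuggy() {
        std::lock_guard<std::mutex> lock(mu_);   // mu_ held for the whole loop...
        for (auto* sub : sub_streams_) {
          CheckError(WaitFor(sub));              // ...and CheckError re-locks mu_ on failure: deadlock
        }
      }

      void BlockHostUntilDoneFixed() {
        std::vector<StreamSketch*> subs;
        {
          std::lock_guard<std::mutex> lock(mu_); // copy the list under the lock...
          subs = sub_streams_;
        }                                        // ...then release it before waiting
        for (auto* sub : subs) {
          CheckError(WaitFor(sub));              // safe: mu_ is not held here
        }
      }

     private:
      static bool WaitFor(StreamSketch*) { return true; }  // placeholder for the real wait
      void CheckError(bool operation_retcode) {
        if (operation_retcode) return;
        std::lock_guard<std::mutex> lock(mu_);   // only the failure path takes mu_
        ok_ = false;
      }

      std::mutex mu_;
      bool ok_ = true;
      std::vector<StreamSketch*> sub_streams_;
    };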
* Support Cudnn RNN Fp16 (James Qin, 2017-11-03)
  Relax CudnnRNNTestCompatibleRNNCells test error tolerance a bit.
  PiperOrigin-RevId: 174495089
* Merge changes from github. (Frank Chen, 2017-10-06)
  END_PUBLIC
  --- Commit ee0fdc296 authored by Gunhan Gulsoy<gunan@google.com>: Add noasan tag to estimator_test. PiperOrigin-RevId: 171075499
  --- Commit a02116882 authored by Justin Lebar<jlebar@google.com>: [XLA:CPU] Put the HLO name in IR values that hold the HLO's value. PiperOrigin-RevId: 171075449
  --- Commit 89aaac4bc authored by A. Unique TensorFlower<gardener@tensorflow.org>: Allow Layer.add_update() in Eager mode. PiperOrigin-RevId: 171070861
  --- Commit 840dcae57 authored by Amit Patankar<amitpatankar@google.com>, committed by gunan<gunan@google.com>: Updating the install sources file with a supported configs table (#13450) * Updating the install sources file with a supported configs page. * Implementing Gunhan's suggestions. * Adding GCC string to Linux compiler. * Updating the bazel/cmake column.
  --- Commit 89df2e336 authored by Igor Saprykin<isaprykin@google.com>: Add the 'is_the_final_export' signal to Exporters. Use them in training. When the training ends, the final export is performed via `Exporter.export()` call. That final export is going to have the is_the_final_export parameter set to true. If `TrainSpec.max_steps` is `None`, then "when training ends" is undefined. We are going to train forever. In that case, `is_the_final_export` is going to be always False. I added a note about it. PiperOrigin-RevId: 171070760
  --- Commit 4486b4f69 authored by Akshay Agrawal<akshayka@google.com>: Make graph_callable compatible with functions that do not return anything. PiperOrigin-RevId: 171067061
  --- Commit 39565c0cb authored by Martin Wicke<wicke@google.com>: Publish train_and_evaluate and associated classes. PiperOrigin-RevId: 171066379
  --- Commit 3b4477000 authored by Saurabh Saxena<srbs@google.com>: Make VariantTensorData::tensors_size() const. PiperOrigin-RevId: 171063397
  --- Commit 53cc63a2d authored by Dhananjay Nakrani<dhananjayn@google.com>: [part 1] Add support for int32 & int64 in RandomPoissonOp. This computes int32/int64-precision poisson samples with double precision intermediate calculations (same as it's done for `half`). Part 2 will switch over python calls to the new op once the forward compatibility period has passed. PiperOrigin-RevId: 171058336
  --- Commit 70fc9bf9b authored by Asim Shankar<ashankar@google.com>: Java: Add support for loading op libraries dynamically. This change adds the equivalent of tf.load_op_library in Python to Java. (https://github.com/tensorflow/tensorflow/commit/5c7f9e316d8c7735308a217310350d416d7498cc was required to make this possible.) Though, TensorFlow.loadLibrary() is likely to fail on Windows as symbols required by custom op libraries (those exported by the tensorflow_framework library) are not exported by the monolithic JNI library yet. This should help with #10454 and #13476. PiperOrigin-RevId: 171054707
  --- Commit e7c53698e authored by A. Unique TensorFlower<gardener@tensorflow.org>: Internal cleanup. PiperOrigin-RevId: 171053770
  --- Commit cc8ee6c0f authored by Alexandre Passos<apassos@google.com>: Fast path for tf.conj when it should be pass-through. PiperOrigin-RevId: 171053662
  --- Commit c41dbc3c1 authored by A. Unique TensorFlower<gardener@tensorflow.org>: Adding TF Boosted trees regression example on boston dataset, minor fix for mnist example. PiperOrigin-RevId: 171052367
  --- Commit d66e77f7c authored by Mustafa Ispir<ispir@google.com>: Added get variable utils to tf.estimator.Estimator. PiperOrigin-RevId: 171052121
  --- Commit 3cf41b2ed authored by A. Unique TensorFlower<gardener@tensorflow.org>: Test save/restore variable from graph_callable. PiperOrigin-RevId: 171051237
  --- Commit cf17ec96e authored by Yangzihao Wang<yangzihao@google.com>: Add V2 versions of output window size computation functions for convolution. These V2 versions take arbitrary dilation rates, in preparation for the support of native cudnn dilated convolution. PiperOrigin-RevId: 171048878
  --- Commit 491584ff4 authored by Asim Shankar<ashankar@google.com>: eager: Always run dataset iterator operations on CPU. It has no kernels for other devices. With an explicit "tf.device()" before invoking the kernel we ensure that Iterator.next() functions even when placed inside a: with tf.device("/device:GPU:0"). PiperOrigin-RevId: 171048558
  --- Commit 3b354016e authored by Igor Saprykin<isaprykin@google.com>: Rename SavedModelExporter to LatestExporter. PiperOrigin-RevId: 171048345
  --- Commit 943c6d7af authored by Jianwei Xie<xiejw@google.com>: errors out if the evaluator has task id > 0. PiperOrigin-RevId: 171047652
  --- Commit 8c9ef4466 authored by Mark Heffernan<meheff@google.com>: Expand set of 64-bit type tests in LocalClientExecuteTest.ShapeBufferToLiteralConversion64bit and factor out into their own test. PiperOrigin-RevId: 171043047
  --- Commit cc521eb06 authored by Benoit Steiner<bsteiner@google.com>: Place all the nodes created by the trivial_test_graph_input_yielder. PiperOrigin-RevId: 171045878
  --- Commit 9b9301240 authored by A. Unique TensorFlower<gardener@tensorflow.org>: [XLA:CPU] Factor out parallel task assignment from cpu parallelization prep (no functional changes). PiperOrigin-RevId: 171045137
  --- Commit 558d878d9 authored by Allen Lavoie<allenl@google.com>: TFTS: Move normalization to the base class, start using it for state space models. Previously, state space models adjusted their priors based on the data (e.g. setting initial variances to match sample variance) but did not normalize the data itself. When the data has a rather extreme scale, this runs into precision issues. After this CL, state space models will first normalize, then use adjusted statistics on top of that normalization to estimate initial observation/transition noise. Also fixes an issue where start-of-series statistics were incorrect for the first batch (which only shows up with large input scales). PiperOrigin-RevId: 171044863
  --- Commit c9915d1a2 authored by Shanqing Cai<cais@google.com>: [tf-signal] Fix pip tests by including test_util in signal_py. PiperOrigin-RevId: 171042732
  --- Commit 0578dd65e authored by A. Unique TensorFlower<gardener@tensorflow.org>: Add more debugging output for XLA send/recv. PiperOrigin-RevId: 171041978
  --- Commit 23992bb09 authored by A. Unique TensorFlower<gardener@tensorflow.org>: Several minor documentation fixes. PiperOrigin-RevId: 171038610
  --- Commit af14ed3f3 authored by Jianwei Xie<xiejw@google.com>: Some docstring twists and argument validations. PiperOrigin-RevId: 171037949
  --- Commit 6b90a65f6 authored by Mark Heffernan<meheff@google.com>: Remove "hybrid" HloModuleConfig option. The option was used to generate executables which only generated the array values of tuple-shaped outputs, not the tuple index tables. With cl/170133015, ShapedBuffers which hold the computation output now have materialized tuples with these index tables, so this option is no longer desired or necessary. No functional change. Just cleanup. PiperOrigin-RevId: 171035738
  --- Commit 41a0264ab authored by Mustafa Ispir<ispir@google.com>: Added utilities to make global step reading deterministic. Used them in Estimator. Enabled/Fixed some tests. PiperOrigin-RevId: 171035291
  --- Commit 9d7843c0a authored by Skye Wanderman-Milne<skyewm@google.com>: Add optional unused_input_map_keys output param to ImportGraphDef. This is a more general feature than that in the Python importer, which raises an exception if the input map contains unused names. PiperOrigin-RevId: 171029316
  --- Commit 4f10a6597 authored by Mark Heffernan<meheff@google.com>: Add vlogging of HloModule before and after fusion. PiperOrigin-RevId: 171029054
  --- Commit 9e658545a authored by Reed Wanderman-Milne<reedwm@google.com>: Document what dtype tf.image.resize_images returns. For consistency, tf.image.resize_images now will always return a float32 when method != ResizeMethod.NEAREST_NEIGHBOR. Before, it returned the same dtype as its input if it could be determined statically that the height and width would not be changed. PiperOrigin-RevId: 171028825
  --- Commit 4d70239f0 authored by Jianwei Xie<xiejw@google.com>: Replace the contrib FC with core FC in canned Estimator docstring. PiperOrigin-RevId: 171027602
  --- Commit 6a1b867ff authored by Jianwei Xie<xiejw@google.com>: Adds the docstring with details for tf.estimator.train_and_evaluate. PiperOrigin-RevId: 171027527
  --- Commit 7209c1602 authored by Peter Hawkins<phawkins@google.com>: [TF:XLA] Mark IdentityN as CompilationOnly(). PiperOrigin-RevId: 171025171
  --- Commit 8e22eb874 authored by FAIJUL<md.faijul.amin@intel.com>, committed by Benoit Steiner<benoitsteiner@users.noreply.github.com>: Eigen BiasAdd and BiasAddGrad Fix for NCHW Format. (#13158)
  --- Commit 7db7a890c authored by Jingyue Wu<jingyue@google.com>: [Grappler] Move InferOutputShapes to GraphProperties, so it can be used by other optimizers. No functional changes. PiperOrigin-RevId: 171010106
  --- Commit 2114fd51e authored by Peter Hawkins<phawkins@google.com>: [TF:XLA] Improve numerical stability of SoftPlus. PiperOrigin-RevId: 171003559
  --- Commit 727d6270f authored by A. Unique TensorFlower<gardener@tensorflow.org>: Fix race condition in TensorForest tree traversal. PiperOrigin-RevId: 170990425
  --- Commit d016cb020 authored by Suharsh Sivakumar<suharshs@google.com>: Fix c++ gradients issue where multiple dependent outputs result in incorrect answer. The issue is that we incorrectly calculate the pending num_expected_backprops for output nodes when one output transitively depends on another. This is because we use output nodes as an indicator of when we need to end our traversal. Instead we should only use output nodes that don't transitively get consumed by other output nodes as end indicators for our traversal. This change implements that fix. Fixes #13190. PiperOrigin-RevId: 170971937
  --- Commit 5405f3bd7 authored by gunan<gunan@google.com>, committed by Frank Chen<frankchn@gmail.com>: Fix tf-signal tests on pip packages. (#13483)
  --- Commit f9f037c1c authored by Eugene Brevdo<ebrevdo@google.com>: Bugfix to LSTMBlockCell and friends: clipping is off by default. * Rename broken API arg clip_cell boolean to cell_clip value. * Make default no clipping. PiperOrigin-RevId: 170960975
  --- Commit bfaaefa9e authored by Frank Chen<frankchn@google.com>: Update APIs for TPU Cluster Resolver to remove the custom API definition and instead use a standard definition file stored in GCS. PiperOrigin-RevId: 170960877
  --- Commit c31c118a3 authored by Ian Langmore<langmore@google.com>: Extend tf.contrib.bijector API to handle some non-injective transforms. AbsoluteValue Bijector added to contrib/distributions/bijectors/. TransformedDistribution updated to handle some non-injective transforms. PiperOrigin-RevId: 170960054
  --- Commit 664dd0859 authored by Frank Chen<frankchn@google.com>: Disable cluster_function_library_runtime_test on Mac OS as it is currently failing with an Unimplemented error. PiperOrigin-RevId: 170958505
  --- Commit 6af7ab97a authored by Mahmoud Abuzaina<mahmoud.abuzaina@intel.com>, committed by gunan<gunan@google.com>: MKL-DNN open source integration. (#13135) * MKL-DNN conv and build integration * Adding new files that were mistakenly missing from the PR * Minor change in the pip package build file * Added missing #include * Fixed a linking failure when running the bazel test * Fixing BUILD file format * Using -fopenmp for building mkl_dnn only when running on linux * Fixing build rule attribute value * Removing unnecessary deps from mkl test rule * Removed deps on mkl-dnn when not building with --config=mkl
  --- Commit 93fa1af76 authored by Akshay Agrawal<akshayka@google.com>: Make graph_callable, defun tf_decorators. PiperOrigin-RevId: 170948777
  --- Commit b39525785 authored by Mustafa Ispir<ispir@google.com>: Added comment re: behavior of listener in case of multiple saver hooks. PiperOrigin-RevId: 170946536
  --- Commit de14fcbb6 authored by Igor Saprykin<isaprykin@google.com>: Support evaluation in `_TrainingExecutor.run_master()`. This CL aims to address the following TODO(b/66720832): once the listener API is added into Estimator.train, the eval and export process should be wrapped as a listener and passed to _start_distributed_training. The expected behavior should be: 1. The export is invoked after each intermediate evaluation. 2. The evaluation and export should be invoked correctly at the end of training (this should be fine if the listener works as intended, since it will send the `after_save` signal for the final ckpt saving). 1. is achieved as follows: a. saving_evaluators are added to the CheckpointSaverHook's listeners inside the Estimator. b. MonitoredSession calls after_run() of CheckpointSaverHook, which in turn calls after_save on the listeners. 2. is achieved in a similar way, but when MonitoredSession calls .end() on CheckpointSaverHook. PiperOrigin-RevId: 170945961
  --- Commit d4ea993ca authored by Alexandre Passos<apassos@google.com>: Removes unnecessary eager-mode call to convert_to_tensor in record_gradient. PiperOrigin-RevId: 170944265
  --- Commit add6d2d03 authored by RJ Ryan<rjryan@google.com>: [tf-signal] Use tf.spectral.dct in mfccs_from_log_mel_spectrograms instead of a private implementation. PiperOrigin-RevId: 170943986
  --- Commit b959da92f authored by Jiri Simsa<jsimsa@google.com>: Fixing CPU implementation of parallel_stack for tensors with non-zero rank. PiperOrigin-RevId: 170942814
  --- Commit 4cf61262a authored by A. Unique TensorFlower<gardener@tensorflow.org>: Improve TFGAN documentation. PiperOrigin-RevId: 170940188
  --- Commit 0068086b9 authored by A. Unique TensorFlower<gardener@tensorflow.org>: Introduce `tf.data` namespace. PiperOrigin-RevId: 170939033
  --- Commit 0c8dbc1fd authored by Alexandre Passos<apassos@google.com>: matmul uses shape_tuple internally. PiperOrigin-RevId: 170938790
  --- Commit ad37fa81f authored by Igor Saprykin<isaprykin@google.com>: Refactor ExportStrategies into Exporters. This design eliminates some indirection. Instead of combining an `export_fn` with a `make_export_strategy` call to arrive at an ExportStrategy that is going to call the supplied `export_fn` inside its `export` call, with Exporters one just defines the `export` call in an Exporter. PiperOrigin-RevId: 170936640
  --- Commit b925f8553 authored by Alexandre Passos<apassos@google.com>: Fast-path for EagerTensorBase.dtype. PiperOrigin-RevId: 170933005
  --- Commit 08e266d9b authored by A. Unique TensorFlower<gardener@tensorflow.org>: Pass activity_regularizer to __init__ instead of using the (now deprecated) property setter. PiperOrigin-RevId: 170932807
  --- Commit b002c8b7d authored by Jingyue Wu<jingyue@google.com>: [Grappler] Fold chains of reshapes. Reshape(Reshape(input, shape1), shape2) is equivalent to Reshape(input, shape2). PiperOrigin-RevId: 170932278
  --- Commit 075d1d13b authored by horance<horance@aliyun.com>, committed by Frank Chen<frankchn@gmail.com>: remove warning for forward decl (#13459)
  --- Commit 931609fcf authored by Ryohei Kuroki<ryohei.kuroki@gmail.com>, committed by Frank Chen<frankchn@gmail.com>: Remove unnecessary specification for default kernel name (#13465)
  --- Commit 94463f521 authored by Akshay Agrawal<akshayka@google.com>: Preserve target function signature in custom_gradient decorator. PiperOrigin-RevId: 170931715
  --- Commit 681056636 authored by A. Unique TensorFlower<gardener@tensorflow.org>: Internal change to simplify prediction ops. It no longer returns predictions_no_dropout, which is mostly for debugging purposes. As a consequence, MultipleAdditiveTrees::Predict() doesn't return prediction_no_dropout, and it accepts trees_to_include indexes instead of trees_to_drop indexes. PiperOrigin-RevId: 170926422
  --- Commit d6e963b82 authored by Asim Shankar<ashankar@google.com>: SYCL: Fix build breakage introduced in https://github.com/tensorflow/tensorflow/commit/f0e8c545e0196b8b48ce0ad0f116df97d980d1f1. Fixes #13350. PiperOrigin-RevId: 170923862
  --- Commit 5123f2971 authored by A. Unique TensorFlower<gardener@tensorflow.org>: Internal cleanup. PiperOrigin-RevId: 170922297
  --- Commit d0c76cd18 authored by Igor Saprykin<isaprykin@google.com>: Handle the absence of a fresh eval checkpoint in `run_local`. It is an ~unexpected condition for an eval checkpoint to not be available after a train call to the estimator. There is a corner case when it is possible, but that's going to be resolved soon. This case is handled differently for continuous (distributed) evaluation: instead of erroring out, we skip evaluation runs. That behavior is captured in the `test_skip_evaluation_due_to_ckpt` test. PiperOrigin-RevId: 170919925
  --- Commit 435b31b9f authored by Gunhan Gulsoy<gunan@google.com>: BEGIN_PUBLIC Automated g4 rollback of changelist 170892257
  PiperOrigin-RevId: 171321707
* Add float16 support to tf.nn.fused_batch_norm on the GPU. (Reed Wanderman-Milne, 2017-09-27)
  Scale, offset, mean, and variance must still be float32 if the input is float16.
  PiperOrigin-RevId: 170239448
* Merge changes from github. (Shanqing Cai, 2017-09-25)
  END_PUBLIC
  --- Commit 1e1b3d902 authored by Pete Warden<pete@petewarden.com>, committed by gunan<gunan@google.com>: Changed output directory for Pi CI build to fix permissions problem with nightlies (#13257) * Fix for RTLD_GLOBAL breakage of Pi builds, and removed Eigen version change for Pi that's no longer needed * Fixed Pi Zero OpenBLAS build problems and tidied up directories used * More robust checks in Pi build script * Changed output directory for Pi CI build to fix permissions problem
  --- Commit fe3a2e65c authored by Yan Facai (???)<facai.yan@gmail.com>, committed by drpngx<drpngx@users.noreply.github.com>: check invalid string type for dest_nodes in extract_sub_graph (#13057) * BUG: check str type * TST: add unit test * CLN: remove list check * CLN: use warning * CLN: 2 indent * CLN: raise TypeError if not list * CLN: check string only
  --- Commit 225ab7629 authored by Jean Wanka<jm.wanka@gmail.com>: Fix polynomial decay with cycle for global step=0. For polynomial decay with cycle=True the learning rate at step 0 becomes NaN, because in the process of calculating it we divide by 0. This change should fix it, by setting the multiplier for the decay steps to one for global_step=0.
  --- Commit 286f57061 authored by Bjarke Hammersholt Roune<broune@google.com>: Make Service::TransferToClient not attempt to manipulate the literal when the transfer failed, preventing a crash and allowing the caller to see the reason for the failed transfer. PiperOrigin-RevId: 169770126
  --- Commit e0501bc4d authored by Yong Tang<yong.tang.github@outlook.com>, committed by Shanqing Cai<cais@google.com>: Fix GRUBlockCell parameter naming inconsistency (#13153) * This fix tries to fix the issue in 13137 where parameter `cell_size` is used instead of `num_units`. This is inconsistent with other RNN cells. This fix adds support of `num_units` while at the same time maintains backward compatibility for `cell_size`. This fix fixes 13137. Signed-off-by: Yong Tang <yong.tang.github@outlook.com> * Add `@deprecated_args` for 'cell_size' in `GRUBlockCell`. Signed-off-by: Yong Tang <yong.tang.github@outlook.com> * Address review comment. Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
  --- Commit 02a2eba05 authored by Pete Warden<pete@petewarden.com>, committed by gunan<gunan@google.com>: Fix for RTLD_GLOBAL breakage of Pi builds, and removed Eigen version change that's no longer needed (#13251) * Fix for RTLD_GLOBAL breakage of Pi builds, and removed Eigen version change for Pi that's no longer needed * Fixed Pi Zero OpenBLAS build problems and tidied up directories used * More robust checks in Pi build script
  --- Commit 8ef722253 authored by Sanjoy Das<sanjoy@google.com>: Remove a redundant setName. The EmitComputation should have emitted a function with the right name, so use a CHECK instead. PiperOrigin-RevId: 169764856
  --- Commit 1b94147dc authored by Neal Wu<wun@google.com>: Fix broken GitHub links in tensorflow and tensorflow_models resulting from The Great Models Move (a.k.a. the research subfolder). PiperOrigin-RevId: 169763373
  --- Commit b1ada5f0c authored by Justine Tunney<jart@google.com>: Fix TensorBoard python -m invoke in docs. PiperOrigin-RevId: 169758752
  --- Commit 2957cd894 authored by Mustafa Ispir<ispir@google.com>: Local run option of estimator training. PiperOrigin-RevId: 169756384
  --- Commit 1dc2fe7ac authored by Gunhan Gulsoy<gunan@google.com>: BEGIN_PUBLIC Automated g4 rollback of changelist 166264198
  PiperOrigin-RevId: 169998124
* Add int8 version of fused_conv2d_bias_activation operator for the forward phase, and support side_input and
  scaling parameters in float and int8 versions. (A. Unique TensorFlower, 2017-09-06)
  PiperOrigin-RevId: 167763219
* Automated g4 rollback of changelist 166276461 (A. Unique TensorFlower, 2017-08-24)
  PiperOrigin-RevId: 166305887
* Add int8 version of fused_conv2d_bias_activation operator for the forward phase, and support side_input and
  scaling parameters in float and int8 versions. (A. Unique TensorFlower, 2017-08-23)
  PiperOrigin-RevId: 166276461
* Make tensorflow::mutex implement a shared (reader/writer) lock, using the open source nsync library.
  (A. Unique TensorFlower, 2017-08-17)
  PiperOrigin-RevId: 165633487
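  A reader/writer lock lets many readers proceed concurrently while writers get exclusive access. The usage
  pattern, shown here with standard C++ shared locks (tensorflow::mutex wraps nsync rather than the standard
  library, so treat this as an illustration of the concept, not TensorFlow's own types):

    #include <map>
    #include <shared_mutex>
    #include <string>

    class Registry {
     public:
      // Many threads may look up entries at the same time.
      bool Contains(const std::string& key) const {
        std::shared_lock<std::shared_mutex> lock(mu_);   // shared (reader) lock
        return entries_.count(key) != 0;
      }

      // Writers take the lock exclusively.
      void Insert(const std::string& key, int value) {
        std::unique_lock<std::shared_mutex> lock(mu_);   // exclusive (writer) lock
        entries_[key] = value;
      }

     private:
      mutable std::shared_mutex mu_;
      std::map<std::string, int> entries_;
    };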
* Let GetBlasGemmAlgorithms() always return true. (Yangzihao Wang, 2017-07-21)
  PiperOrigin-RevId: 162748507
* Automated g4 rollback of changelist 162423171 (A. Unique TensorFlower, 2017-07-18)
  PiperOrigin-RevId: 162437318
* Add autotuning code for matmul operator. (Yangzihao Wang, 2017-07-18)
  Currently it is turned off by default.
  PiperOrigin-RevId: 162423171
* Support float64 CuDNN RNN (James Qin, 2017-07-18)
  PiperOrigin-RevId: 162412879
* Add support for int8 x int8 -> int32 matrix multiplication via cublasGemmEx to stream_executor.
  (A. Unique TensorFlower, 2017-07-06)
  PiperOrigin-RevId: 161137741
* [SE] Support alpha scale in cudnnTransformTensor (A. Unique TensorFlower, 2017-06-20)
  PiperOrigin-RevId: 159578357
* TransformTensor supports NCHW_VECT_C layout and int8 data type. (Jingyue Wu, 2017-06-12)
  o Add new DataType kInt8.
  o Add new DataLayout kBatchDepthYX4 and new FilterLayout kOutputInputYX4, both of which map to
    CUDNN_TENSOR_NCHW_VECT_C.
  o Change (Then|Do)TransformTensor to take input and output element types.
  o Add new tensor format FORMAT_NCHW_VECT_C, and change the utility functions in tensor_format.h to work with
    the new format.
  PiperOrigin-RevId: 158806412
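  In the VECT_C layouts four int8 channel values are packed into each 32-bit element, so the tensor is
  effectively stored as N x C/4 x H x W x 4. A small sketch of the resulting index arithmetic (following
  cuDNN's description of NCHW_VECT_C; offsets are in int8 elements, and C is assumed divisible by 4):

    #include <cstdint>

    // Flat offset of element (n, c, h, w) in an NCHW_VECT_C / kBatchDepthYX4
    // tensor with logical dimensions N x C x H x W.
    int64_t NchwVectCOffset(int64_t channels, int64_t height, int64_t width,
                            int64_t n, int64_t c, int64_t h, int64_t w) {
      const int64_t kVect = 4;            // channels packed per 32-bit element
      const int64_t c_outer = c / kVect;  // which packed channel block
      const int64_t c_inner = c % kVect;  // position inside the block
      return ((((n * (channels / kVect) + c_outer) * height + h) * width + w) * kVect) + c_inner;
    }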
* Pass int parameter by value, not by const reference (A. Unique TensorFlower, 2017-06-06)
  PiperOrigin-RevId: 158142102
* [SE] Add cudnnTransformTensor to StreamExecutor. (Jingyue Wu, 2017-06-05)
  PiperOrigin-RevId: 158062553
* Add functional support for cudnnConvolutionBiasActivationForward(). (Yangzihao Wang, 2017-06-01)
  PiperOrigin-RevId: 157788425
* Merge changes from github. (A. Unique TensorFlower, 2017-04-04)
  Change: 152200430
* [XLA] [StreamExecutor] Tune GEMMs when possible. (Justin Lebar, 2017-03-02)
  cublas 8 adds the cublasGemmEx function, which lets you specify an explicit "algorithm" for the computation.
  This functions as an opaque tuning hint to cublas.
  This patch adds support for cublasGemmEx to StreamExecutor, and wires up XLA's GemmThunk to use the new
  function. This patch does not add GEMM autotuning support in TensorFlow proper, only XLA.
  Change: 149068961
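  cublasGemmEx takes an explicit cublasGemmAlgo_t, so a caller can time each candidate algorithm and keep the
  fastest one for a given problem shape. A rough sketch of one such call for float data, using the CUDA 8/9-era
  signature in which the compute type is a cudaDataType_t (illustrative only; XLA's GemmThunk goes through
  StreamExecutor's blas interface rather than calling cuBLAS directly):

    #include <cublas_v2.h>

    // Runs one GEMM with an explicit algorithm choice; an autotuner would time
    // this call for CUBLAS_GEMM_DEFAULT, CUBLAS_GEMM_ALGO0, CUBLAS_GEMM_ALGO1, ...
    // and remember the fastest.
    cublasStatus_t GemmWithAlgo(cublasHandle_t handle, int m, int n, int k,
                                const float* a, const float* b, float* c,
                                cublasGemmAlgo_t algo) {
      const float alpha = 1.0f, beta = 0.0f;
      return cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                          &alpha,
                          a, CUDA_R_32F, /*lda=*/m,
                          b, CUDA_R_32F, /*ldb=*/k,
                          &beta,
                          c, CUDA_R_32F, /*ldc=*/m,
                          /*computeType=*/CUDA_R_32F, algo);
    }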
* [StreamExecutor] Minor comment cleanups. (Justin Lebar, 2017-03-02)
  Change: 149066697
* [XLA:GPU] Cache GPU substreams across executions (A. Unique TensorFlower, 2017-03-02)
  Change: 149063035
* Add options argument for DNN activation (A. Unique TensorFlower, 2017-01-24)
  This is useful for platform-dependent functionality.
  Change: 145432435
* Add convolve quantized ops to StreamExecutor API (A. Unique TensorFlower, 2017-01-19)
  Change: 144996696
* Add several operations to the StreamExecutor API (A. Unique TensorFlower, 2017-01-17)
  No implementations are yet provided for these operations.
  Change: 144743665
* Add the interface in stream executor to call cuDNN batch normalization functions. (Yao Zhang, 2016-09-15)
  Change: 133345765
* Add stream-executor changes to enable Cudnn fused LSTM/RNN support. (Xiaoqiang Zheng, 2016-08-19)
  Change: 130770287
* Roll-forward of "Local Response Normalization GPU support via Stream Executor." (RJ Ryan, 2016-07-13)
  Move AsDeviceMemory function into a StreamExecutorUtil class, not a GPUUtil one, since it's independent of
  GPUs. Make lrn_op use the new version of that function.
  Change: 127289319
* Automated rollback of change 127123966 (Vijay Vasudevan, 2016-07-11)
  Change: 127141513
* Local Response Normalization GPU support via Stream Executor. (RJ Ryan, 2016-07-11)
  Includes port of internal Stream Executor support for cuDNN normalization.
  Change: 127123966
* Improve convolution autotune process. (Xiaoqiang Zheng, 2016-06-21)
  The max batch size the VGG model can handle improves by 56%: from 148 to 231 in the forward-backward pass.
  Support the fastest algorithm, and fall back to the fastest algorithm that uses no scratch memory if the
  first algorithm fails scratch memory allocation.
  Soumith's conv-benchmarks stay the same before and after this change, but now they can run with a bigger
  batch size.
  Change: 125484122
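  The fallback logic amounts to: prefer the overall-fastest algorithm, but if its scratch allocation fails,
  retry with the fastest algorithm that needs no scratch at all. A schematic sketch of that control flow (all
  names are placeholders, not the actual StreamExecutor/cuDNN autotuning API):

    #include <cstddef>

    struct ConvAlgorithm {
      int id;
      std::size_t scratch_bytes;  // 0 means the algorithm needs no workspace
      bool valid;                 // whether this slot holds a usable algorithm
    };

    // Placeholder hooks standing in for the real scratch allocator and launch path.
    inline bool AllocateScratch(std::size_t /*bytes*/) { return true; }
    inline bool RunConvolution(const ConvAlgorithm& /*algo*/) { return true; }

    // Try the fastest algorithm first; if its scratch allocation fails, fall back
    // to the fastest algorithm that requires no scratch memory.
    inline bool RunWithFallback(const ConvAlgorithm& fastest,
                                const ConvAlgorithm& fastest_no_scratch) {
      if (fastest.scratch_bytes == 0 || AllocateScratch(fastest.scratch_bytes)) {
        return RunConvolution(fastest);
      }
      if (fastest_no_scratch.valid) {
        return RunConvolution(fastest_no_scratch);
      }
      return false;  // no viable algorithm for this batch size
    }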