...
* Adds float64 support for avg pool and its gradient. (Brian Patton, 2018-03-20)
  Eigen NumTraits is modified to use std::numeric_limits directly, which resolves a broken test caused by an inconsistency between the host and device values of Eigen::NumTraits<double>::highest(). This returns +inf on device, due to third_party/eigen3/Eigen/src/Core/util/Meta.h, and __DBL_MAX__ (1.7976931348623157e+308) on host, making the behavior for doubles on device inconsistent with both the behavior of Eigen::NumTraits<float>::highest() for floats and the behavior of std::numeric_limits<double>::max().
  PiperOrigin-RevId: 189731521
* Don't spin in a loop when we're not waiting on any GPU events. (Justin Lebar, 2018-03-20)
  PiperOrigin-RevId: 189719711
* Added support for data to be specified in RNN classes as large tensors with time folded into the batch dimension instead of lists of tensors. (A. Unique TensorFlower, 2018-03-20)
  - Significant refactoring of RNN classes.
  - Fixed a number of issues in the LayerCollection docstrings, especially around the 'reuse' argument.
  PiperOrigin-RevId: 189716331
* Fix bug (A. Unique TensorFlower, 2018-03-20)
  PiperOrigin-RevId: 189712233
* Fix some edge cases around scalar indices in the gather expander (Sanjoy Das, 2018-03-19)
  I discovered these when changing the tf2xla bridge to directly emit gather operations.
  - DeScalarizeGatherIndices was assuming that gather_indices must be of at least rank 1. Fix this to be more general.
  - We were passing in the wrong version of gather indices to ExpandFirstDimIntoNDims. We don't strictly need to pass in transposed_gather_indices (if transposed_gather_indices is rank 1 then the transpose has to be an identity transpose, so passing in descalarized_gather_indices would also have been fine), but transposed_gather_indices seems more uniform.
  - ExpandGatherDimsInAccumulator was assuming that gather_indices must be of at least rank 1 (by calling CollapseFirstNDims). Fix this to be more general.
  - We were trying to go through with emitting zero-sized gather operations. I don't think it is worth dealing with all of the edge cases this would expose, so now we just punt to ZeroSizedHloElimination.
  PiperOrigin-RevId: 189696444
* Predictions have to be updated for exported output signatures (A. Unique TensorFlower, 2018-03-19)
  PiperOrigin-RevId: 189694707
* Added infeed support for experimental C APIs associated with TPU graph rewrite. (Mingsheng Hong, 2018-03-19)
  This initial design of the C API is different from (and mostly higher level than) the Python API counterparts for infeed: the Python API has explicit graph construction APIs for generating infeed enqueue/dequeue ops (e.g. split_inputs_and_generate_enqueue_ops() and generate_dequeue_op()), while the C API takes an input graph and redirects all input nodes to feed the infeed enqueue.
  One requirement/restriction is that the input nodes in the TF graph (e.g. Placeholder) must specify their tensor shapes, for the infeed enqueue and dequeue nodes to compile properly with XLA. The API for more general shape support will be designed and implemented later.
  PiperOrigin-RevId: 189693028
* Go: Update generated wrapper functions for TensorFlow ops. (A. Unique TensorFlower, 2018-03-19)
  PiperOrigin-RevId: 189690096
* Update ops-related pbtxt files. (A. Unique TensorFlower, 2018-03-19)
  PiperOrigin-RevId: 189688675
* Add `ostream<<` to `tensorflow::TensorShapeBase`. (A. Unique TensorFlower, 2018-03-19)
  Reason: Allow `LOG(ERROR) << shape` (currently disallowed).
  PiperOrigin-RevId: 189687162
* Quantize bypasses after activations. (Suharsh Sivakumar, 2018-03-19)
  PiperOrigin-RevId: 189686219
* Always imports the contrib summary ops when importing tensorflow. (Alexandre Passos, 2018-03-19)
  Fixes #17802
  PiperOrigin-RevId: 189684619
* Adds final partial batch support for TPUEstimator.predict. (Jianwei Xie, 2018-03-19)
  PiperOrigin-RevId: 189683528
* Apply output_min/output_max to the result in the NEON implementation of the Add operator. (A. Unique TensorFlower, 2018-03-19)
  Both the non-NEON and reference implementations have this, but it's missing from the NEON version.
  PiperOrigin-RevId: 189682984
* Handle non-broadcastable shapes in eager assert_equal (Igor Ganichev, 2018-03-19)
  Before this change, assert_equal would fail when producing an error message for non-equal shapes, because array_ops.boolean_mask only works for equal shapes. This part of the error message is fairly confusing in the presence of non-equal shapes, so this change removes it.
  PiperOrigin-RevId: 189682518
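  A minimal sketch of the scenario (values hypothetical; assumes TF 1.x eager execution): with shapes that do not broadcast, assert_equal should now raise a clear error instead of failing while building its message.

    import tensorflow as tf
    tf.enable_eager_execution()

    a = tf.constant([1, 2, 3])          # shape (3,)
    b = tf.constant([[1, 2], [3, 4]])   # shape (2, 2); not broadcastable with a
    try:
      tf.assert_equal(a, b)
    except (ValueError, tf.errors.InvalidArgumentError) as e:
      print(e)  # error message no longer includes the boolean_mask-based part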
* Avoid attaching fqn annotations to live values that don't have a `__name__`. (A. Unique TensorFlower, 2018-03-19)
  PiperOrigin-RevId: 189680937
* Disable freeze_bn_delay by default. (Suharsh Sivakumar, 2018-03-19)
  PiperOrigin-RevId: 189680481
* Update GraphProperties comments (A. Unique TensorFlower, 2018-03-19)
  PiperOrigin-RevId: 189680477
* Make L2 norm computation more stable. (Surya Bhupatiraju, 2018-03-19)
  Avoids the potentially numerically unstable square root in the linalg_ops.norm() function, because we 'undo' that operation with a math_ops.square() operation anyway.
  PiperOrigin-RevId: 189677716
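  A minimal sketch of the underlying idea (not the actual patch): squaring a norm undoes its square root, so computing the squared L2 norm directly avoids the unstable sqrt, whose gradient blows up at zero.

    import tensorflow as tf

    x = tf.constant([1e-20, 2e-20])
    # sqrt followed by square: the sqrt is numerically delicate near zero.
    roundabout = tf.square(tf.norm(x))
    # The same quantity computed without the sqrt.
    direct = tf.reduce_sum(tf.square(x))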
* Export tf.GradientTape (Asim Shankar, 2018-03-19)
  tf.GradientTape can be used both for eager execution and graph construction to compute gradients (unlike tf.gradients, which works only for graph construction).
  PiperOrigin-RevId: 189676004
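  A minimal usage sketch under eager execution (values hypothetical):

    import tensorflow as tf
    tf.enable_eager_execution()

    x = tf.constant(3.0)
    with tf.GradientTape() as tape:
      tape.watch(x)      # constants must be watched explicitly
      y = x * x
    dy_dx = tape.gradient(y, x)  # 6.0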
* Support general permutation. (A. Unique TensorFlower, 2018-03-19)
  PiperOrigin-RevId: 189675019
* Add docstring pointing to tf.contrib.quantize. (Suharsh Sivakumar, 2018-03-19)
  PiperOrigin-RevId: 189672549
* Register gradient for argmin (cf. #15278). (Martin Wicke, 2018-03-19)
  PiperOrigin-RevId: 189671974
* Add option to save the trace table to the model directory's profile plugin subdirectory. (A. Unique TensorFlower, 2018-03-19)
  PiperOrigin-RevId: 189671290
* Standardize bib references and Examples subsection in docstrings. (Dustin Tran, 2018-03-19)
  Recipe:
  + Write a #### Examples subsection below Args/Returns/Raises to illustrate examples. If the docstring's last line is a ``` closing a code snippet, add an empty line before closing the docstring with """. This properly displays the code snippet.
  + Write a #### References subsection at the bottom of any docstring with citations. Enumerate all references in alphabetical order. Individual bib entries use ICLR's bibliography style, which borrows from icml2010.bst, which itself borrows from plainnl.bst. Add a link to the paper if the publication is open source (ideally, arXiv).
  PiperOrigin-RevId: 189670932
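  A minimal sketch of a docstring following this recipe (function and reference are hypothetical placeholders):

    def sample_fn(x):
      """Applies an example transformation to `x`.

      Args:
        x: A `Tensor`.

      Returns:
        A `Tensor` with the same shape as `x`.

      #### Examples

      ```python
      y = sample_fn(tf.constant([1.0, 2.0]))
      ```

      #### References

      [1]: An Author. An example paper. _International Conference on Learning
           Representations_, 2018. https://arxiv.org/abs/0000.00000
      """
      return x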
* Make _USE_C_API = True and _USE_C_SHAPES = False work with import_graph_def. (Skye Wanderman-Milne, 2018-03-19)
  Without this change, shapes wouldn't be correctly computed for operations created via import_graph_def.
  PiperOrigin-RevId: 189670312
* Add a clif build rule for saved_model. (A. Unique TensorFlower, 2018-03-19)
  PiperOrigin-RevId: 189669509
* Improve flatbuffer verification. (A. Unique TensorFlower, 2018-03-19)
  PiperOrigin-RevId: 189668634
* [tf.data] Combine implementations of FlatMapDataset, InterleaveDataset and ParallelInterleaveDataset. (Derek Murray, 2018-03-19)
  PiperOrigin-RevId: 189667086
* Fix test failure (Yuefeng Zhou, 2018-03-19)
  PiperOrigin-RevId: 189666053
* Automated g4 rollback of changelist 188440916 (Lukasz Kaiser, 2018-03-19)
  PiperOrigin-RevId: 189664854
* Disable lstm test in generated_example due to non-deterministic state initialization. (Zhixian Yan, 2018-03-19)
  PiperOrigin-RevId: 189654943
* A few changes to improve the real data performance: (Xiaoqiang Zheng, 2018-03-19)
  - Turn off force_gpu_compatible by default.
  - Move the cast operator within the processing operator.
  - Have the map_and_batch operator produce gpu_compatible output.
  - Add an option to produce fp16 tensors for network transfer by default.
  On DGX-1 V100, with resnet50, I got 5050 images/sec on real data and 5395 images/sec on synthetic data. With a trivial model, I got 13000+ images/sec on real data.
  PiperOrigin-RevId: 189653575
* Run flatbuffer verifier before reading a TFLITE file into toco. (A. Unique TensorFlower, 2018-03-19)
  PiperOrigin-RevId: 189649236
* Use fully-qualified function names and avoid the need to replace attributes. (A. Unique TensorFlower, 2018-03-19)
  PiperOrigin-RevId: 189648496
* Allow the FunctionBufferingResource to be passed thread_pool_size=0, in which case we don't pass a runner to the FLR::Run call and instead rely on the underlying device threadpool. (Rohan Jain, 2018-03-19)
  PiperOrigin-RevId: 189648051
* Maintain an updateable map of devices in the eager context. (Akshay Modi, 2018-03-19)
  PiperOrigin-RevId: 189646358
* Fix build breakage with downloadable clang and -fopenmp. (Ilya Biryukov, 2018-03-19)
  Fixed by disabling OpenMP when building with clang. If we want to enable OpenMP with clang, we'll probably have to add libomp as an explicit dependency.
  This fixes a breakage found by OS CI: https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu-clang/215/
  PiperOrigin-RevId: 189644968
* TFLite Delegate: Add an `allow_dynamic_tensors` parameter. (Yu-Cheng Ling, 2018-03-19)
  PiperOrigin-RevId: 189641833
* Enable stack push removal optimization by default. (A. Unique TensorFlower, 2018-03-19)
  PiperOrigin-RevId: 189641729
* Turned on gradient optimization by default (Benoit Steiner, 2018-03-19)
  PiperOrigin-RevId: 189641300
* Add a helper that allows constructing simple expression ASTs from a string. (A. Unique TensorFlower, 2018-03-19)
  Useful to simplify the representation of composite symbols, e.g. 'py2tf.foo'.
  PiperOrigin-RevId: 189638901
* Automated g4 rollback of changelist 189416074 (Derek Murray, 2018-03-19)
  PiperOrigin-RevId: 189634491
* Moves TFE_Executor to tensorflow::EagerExecutor in tensorflow/core/common_runtime/eager (Alexandre Passos, 2018-03-19)
  PiperOrigin-RevId: 189634404
* TFE: Fix bug encountered when using `optimizer.apply_gradients` in a defun. (Akshay Agrawal, 2018-03-19)
  Prior to this change, `Optimizer` assumed that `not context.executing_eagerly()` implied that every variable it was to update was constructed in a graph. That assumption is incorrect: TensorFlow functions can mutate variables captured from or lifted into the eager context. As such, this change removes that assumption.
  Fixes #17792
  PiperOrigin-RevId: 189633630
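  A rough sketch of the scenario this fixes, assuming TF 1.x-era contrib APIs (tfe.defun and tfe.Variable are assumptions about the names of that time, not taken from this commit):

    import tensorflow as tf
    import tensorflow.contrib.eager as tfe

    tf.enable_eager_execution()

    v = tfe.Variable(1.0)   # variable created in the eager context
    opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)

    @tfe.defun
    def train_step():
      # Inside the function, executing_eagerly() is False, yet `v` was not
      # built in a graph; the optimizer no longer assumes it was.
      with tf.GradientTape() as tape:
        loss = v * v
      grads = tape.gradient(loss, [v])
      opt.apply_gradients(zip(grads, [v]))

    train_step()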
* Add bfloat16 support for CPU ops. (A. Unique TensorFlower, 2018-03-19)
  PiperOrigin-RevId: 189631659
* Extract GraphOptimizer{Stage,Context}, and use it as a base class in ArithmeticOptimizer. (A. Unique TensorFlower, 2018-03-19)
  PiperOrigin-RevId: 189628227
* Checkpointable: Small cleanup making better use of NewCheckpointReader. (Allen Lavoie, 2018-03-19)
  PiperOrigin-RevId: 189627956
* Add a map from TPU core id to name to TfOpStats. (A. Unique TensorFlower, 2018-03-19)
  PiperOrigin-RevId: 189620850
* Do not use SparseMatmul for bfloat16 as Matmul is already supported. (A. Unique TensorFlower, 2018-03-19)
  PiperOrigin-RevId: 189614197