* [TF:XLA] Split XLA Concat Ops that fail on large sets of inputs. (A. Unique TensorFlower, 2018-09-07)
|   GPU compilation would fail because there were too many parameters to fit in memory: Concat's signature is variadic and can have an unlimited number of inputs.
|   PiperOrigin-RevId: 211942734
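For illustration, a minimal Python sketch of the kind of variadic concat this addresses; the tensor count and shapes below are assumptions, not taken from the commit.

```python
import tensorflow as tf

# Illustrative only: a single tf.concat over a very large number of inputs.
# Because Concat is variadic, an XLA-compiled kernel receives every input as a
# parameter, which is what the commit says could exceed GPU limits.
parts = [tf.ones([64], dtype=tf.float32) for _ in range(4096)]
merged = tf.concat(parts, axis=0)  # shape: [64 * 4096]
```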
* compat: Update forward compatibility horizon to 2018-09-07 (A. Unique TensorFlower, 2018-09-07)
|   PiperOrigin-RevId: 211942571
* [XLA:GPU] Clean up init thunk handling to handle arbitrary fused init values (Benjamin Kramer, 2018-09-07)
|   I put this in as a quick hack because init_value is usually a constant, but it's really easy to construct a case where it's not. The code also became more complex because of the constant buffer work; sharing that with the fused IR emitter is a good thing.
|   PiperOrigin-RevId: 211936337
* Added functionality of passing loss reduction as an argument for RNNClassifier, with the default changed to SUM_OVER_BATCH_SIZE (A. Unique TensorFlower, 2018-09-06)
|   This would involve making changes to all existing uses of RNNClassifier to set the loss reduction argument explicitly to SUM (the previous default was SUM).
|   PiperOrigin-RevId: 211917502
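A hedged sketch of passing loss_reduction explicitly to keep the old behavior; the feature-column setup and constructor arguments below are assumed from the tf.contrib.estimator API of this era, not taken from the commit.

```python
import tensorflow as tf

# Hedged sketch, not the commit's code: opting back into SUM reduction on an
# existing RNNClassifier after the default moves to SUM_OVER_BATCH_SIZE.
# The feature column and constructor arguments here are assumptions.
tokens = tf.contrib.feature_column.sequence_categorical_column_with_hash_bucket(
    'tokens', hash_bucket_size=1000)
token_emb = tf.feature_column.embedding_column(tokens, dimension=16)

classifier = tf.contrib.estimator.RNNClassifier(
    sequence_feature_columns=[token_emb],
    num_units=[32],
    loss_reduction=tf.losses.Reduction.SUM)  # previous default, now set explicitly
```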
* Set VSpace only once (Akshay Modi, 2018-09-06)
|   I don't believe there is currently a use case for a different VSpace (and it doesn't seem to be controllable through any public method). If it is a use case we want to support, it should be simple enough to add an overload of TFE_Py_TapeGradient.
|   PiperOrigin-RevId: 211917235
* Split out HloDomainInstruction as a subclass of HloInstruction. (A. Unique TensorFlower, 2018-09-06)
|   PiperOrigin-RevId: 211916428
* Make num_quantiles configurable; update the epsilon value as well since epsilon controls the maximum number of quantiles generated. (A. Unique TensorFlower, 2018-09-06)
|   PiperOrigin-RevId: 211914388
* Split out HloDotInstruction as a subclass of HloInstruction. (A. Unique TensorFlower, 2018-09-06)
|   PiperOrigin-RevId: 211912785
* [XLA] Add support for convolution feature groups to HloCostAnalysis (David Majnemer, 2018-09-06)
|   While there, tweak the implementation of convolution in the HLO evaluator to be a little simpler.
|   PiperOrigin-RevId: 211911253
* Fix copy-paste error in fused_conv2d_bias_activation_op error message. (Justin Lebar, 2018-09-06)
|   PiperOrigin-RevId: 211907050
* Add TF Lite-disabling variable (Austin Anderson, 2018-09-06)
|   PiperOrigin-RevId: 211906579
* Zero out the result buffer for strided conv backward filter for NHWC layouts. (Tim Shen, 2018-09-06)
|   cuDNN 7.1.4 and 7.2 have a non-deterministic bug if the buffer is not zeroed.
|   PiperOrigin-RevId: 211905127
* Set meta_optimizer to use custom graph optimizers for both toggling optimizers and setting optimizer names. (A. Unique TensorFlower, 2018-09-06)
|   PiperOrigin-RevId: 211900252
* Go: Update generated wrapper functions for TensorFlow ops. (A. Unique TensorFlower, 2018-09-06)
|   PiperOrigin-RevId: 211899762
*   Merge pull request #22026 from pengwa:fix_tf_split_doc (TensorFlower Gardener, 2018-09-06)
|\
| |   PiperOrigin-RevId: 211897876
* | Adding support for FeatureColumn input in Keras models. (Rohan Jain, 2018-09-06)
| |   Modifies the Model.fit() function to support taking in dictionaries of features. Support for functional models is coming in a subsequent change.
| |   PiperOrigin-RevId: 211897153
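A hedged sketch of the dictionary-of-features pattern described above; the DenseFeatures layer and toy data are assumptions, not code from the change.

```python
import numpy as np
import tensorflow as tf

# Hedged sketch, not the commit's code: a Keras model whose first layer consumes
# feature columns, trained by passing a dictionary of features to Model.fit().
# The DenseFeatures layer and the toy data are illustrative assumptions.
age = tf.feature_column.numeric_column("age")

model = tf.keras.Sequential([
    tf.keras.layers.DenseFeatures([age]),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

features = {"age": np.array([[23.0], [31.0], [45.0]], dtype=np.float32)}
labels = np.array([0.0, 1.0, 1.0], dtype=np.float32)
model.fit(features, labels, epochs=1)
```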
* | Update ops-related pbtxt files. (A. Unique TensorFlower, 2018-09-06)
| |   PiperOrigin-RevId: 211896300
* | Automated rollback of commit 24787842adfefe35f5a520313d775b14c29f143a (A. Unique TensorFlower, 2018-09-06)
| |   PiperOrigin-RevId: 211895566
* | Add compression options to Python's TFRecordOptions (A. Unique TensorFlower, 2018-09-06)
| |   Plumb these through to RecordWriterOptions.
| |   PiperOrigin-RevId: 211894734
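A hedged sketch of what the expanded options look like from Python; the keyword arguments mirror the later tf.io.TFRecordOptions spelling and are assumptions, not text from the commit.

```python
import tensorflow as tf

# Hedged sketch: writing a TFRecord file with explicit compression options.
# The keyword arguments below follow the later tf.io.TFRecordOptions API and are
# assumptions here, not taken from the commit itself.
options = tf.io.TFRecordOptions(
    compression_type="GZIP",   # or "ZLIB", or "" for no compression
    compression_level=6)       # zlib-style level, plumbed to RecordWriterOptions

with tf.io.TFRecordWriter("/tmp/example.tfrecord", options) as writer:
    writer.write(b"serialized-example-bytes")
```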
* | Removing a test that's timing out on TAP (Olivia Nordquist, 2018-09-06)
| |   PiperOrigin-RevId: 211892456
* | Added experimental C APIs based on eager, as a first step towards using an eager-based runtime in Swift for TensorFlow. (Mingsheng Hong, 2018-09-06)
| |   PiperOrigin-RevId: 211892308
* | Failing on TSAN, so disabling (Olivia Nordquist, 2018-09-06)
| |   PiperOrigin-RevId: 211892283
* | [XLA] Handle kDomain in HloCostAnalysis. (Yuanzhong Xu, 2018-09-06)
| |   PiperOrigin-RevId: 211891325
* | Timing-out test being removed from TAP pending investigation (Olivia Nordquist, 2018-09-06)
| |   PiperOrigin-RevId: 211890783
* | [tf.data] Adding support for `num_parallel_calls` to `tf.data.Dataset.interleave`. (Jiri Simsa, 2018-09-06)
| |   Unlike `tf.data.contrib.parallel_interleave`, whose parallelism is tied to the `cycle_length` argument, the newly introduced `num_parallel_calls` argument of `tf.data.Dataset.interleave` is decoupled from the `cycle_length` argument and identifies the degree of parallelism to use for fetching output elements.
| |   PiperOrigin-RevId: 211886816
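A hedged usage sketch of the new argument; the file names and parallelism values are illustrative assumptions.

```python
import tensorflow as tf

# Hedged sketch: interleaving reads from several files, with parallelism set
# independently of cycle_length via the new num_parallel_calls argument.
# The file names and numeric values are illustrative assumptions.
filenames = tf.data.Dataset.from_tensor_slices(
    ["/tmp/shard-0.txt", "/tmp/shard-1.txt", "/tmp/shard-2.txt", "/tmp/shard-3.txt"])

dataset = filenames.interleave(
    lambda path: tf.data.TextLineDataset(path),
    cycle_length=4,          # how many input elements are open concurrently
    num_parallel_calls=2)    # how many of them are fetched in parallel
```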
* | Fix bug that prevented iterations variable from updating when training an Estimator that is created from a Keras model. (Katherine Wu, 2018-09-06)
| |   PiperOrigin-RevId: 211886643
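A hedged sketch of the Keras-to-Estimator path the fix concerns; the model, input function, and step counts are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

# Hedged sketch: converting a compiled Keras model into an Estimator, the setup
# in which the optimizer's iterations variable should advance during training.
# The model and input_fn are illustrative assumptions, not the fixed code.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

estimator = tf.keras.estimator.model_to_estimator(keras_model=model)

def input_fn():
    x = np.random.rand(32, 4).astype(np.float32)
    y = np.random.rand(32, 1).astype(np.float32)
    # The feature key must match the Keras input name.
    return tf.data.Dataset.from_tensor_slices(({model.input_names[0]: x}, y)).batch(8)

estimator.train(input_fn, steps=4)
```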
* | Update LSTM paper reference (Billy Lamberta, 2018-09-06)
| |   Match #22072
| |   PiperOrigin-RevId: 211884527
* | [XLA:GPU] Refactor some code for fusion output handling. (Bixia Zheng, 2018-09-06)
| |   Move routine ConstructIrArrayForOutputs to class IrEmitter so that it can be used in classes IrEmitterNested and IrEmitterUnnested. Move the code that stores the address of each individual output of a multiple-output fusion to the tuple buffer of the fusion into an overloaded version of routine llvm_ir::EmitTuple so that we can reduce code duplication.
| |   PiperOrigin-RevId: 211884483
* | Failing in ASAN, disabling (Olivia Nordquist, 2018-09-06)
| |   PiperOrigin-RevId: 211883998
* | [tf.data] Fix in AutoTune prefetch buffer sizes. (Shivani Agrawal, 2018-09-06)
| |   PiperOrigin-RevId: 211883131
* | Python example for tutorial on post-training quantization for MNIST. (Raghuraman Krishnamoorthi, 2018-09-06)
| |   PiperOrigin-RevId: 211882134
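A hedged sketch of post-training quantization with the TFLite converter; it uses the later tf.lite.Optimize spelling and a placeholder SavedModel path, both assumptions rather than the tutorial's actual code.

```python
import tensorflow as tf

# Hedged sketch, not the tutorial's code: post-training quantization of a trained
# model via the TFLite converter. The SavedModel path is a placeholder, and the
# tf.lite.Optimize API shown here is the later public spelling of this feature.
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/mnist_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable post-training quantization
tflite_model = converter.convert()

with open("/tmp/mnist_post_training_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```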
* | Correctly tag tests that break internal testing for 1.11 (Austin Anderson, 2018-09-06)
| |   PiperOrigin-RevId: 211879623
* |   Merge pull request #22072 from vinacmg:patch-3 (TensorFlower Gardener, 2018-09-06)
|\ \
| | |   PiperOrigin-RevId: 211878263
* | | A-normal form should not introduce temporaries for nested unpacking assignments. (A. Unique TensorFlower, 2018-09-06)
| | |   PiperOrigin-RevId: 211876538
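For context, a hedged Python illustration of what a nested unpacking assignment looks like; the function is an assumption, not code from the commit.

```python
# Hedged illustration of a nested unpacking assignment, the construct the
# A-normal-form transform should now leave without extra temporaries.
def split_record(record):
    key, (lo, hi) = record        # nested unpacking: the RHS is taken apart in place
    return key, hi - lo

print(split_record(("range", (3, 10))))  # ('range', 7)
```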
* | | Disabling TSAN in test (Olivia Nordquist, 2018-09-06)
| | |   PiperOrigin-RevId: 211875205
* | | Convert more kernel signatures to use runtime shapes. (A. Unique TensorFlower, 2018-09-06)
| | |   PiperOrigin-RevId: 211874785
* | | Simplify BUILD rule for MKL transpose op. (A. Unique TensorFlower, 2018-09-06)
| | |   There is no reason for outside dependents to make a distinction between the Eigen and MKL transpose operations, as the substitution is transparent. There is also no need for transpose_op.cc itself to be compiled differently based on whether MKL is in use or not. Therefore we remove external dependencies on :mkl_transpose_op and make :transpose_op depend on it if needed (i.e., if using MKL). This is consistent with how other transparent MKL operations (e.g. matmul) are built.
| | |   PiperOrigin-RevId: 211874336
* | | Test is failing in ASAN, disabling for now (Olivia Nordquist, 2018-09-06)
| | |   PiperOrigin-RevId: 211874311
* | | Make Image ops compatible with CondV2 (Brennan Saeta, 2018-09-06)
| | |   PiperOrigin-RevId: 211873961
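A hedged sketch of image ops under tf.cond, the pattern CondV2 has to handle; the predicate and the particular ops are illustrative assumptions.

```python
import tensorflow as tf

# Hedged sketch: image ops placed inside the branches of tf.cond, the kind of
# graph the CondV2 implementation has to lower correctly. The predicate and the
# particular image ops are illustrative assumptions.
image = tf.zeros([64, 64, 3], dtype=tf.float32)
flip = tf.constant(True)

result = tf.cond(
    flip,
    lambda: tf.image.flip_left_right(image),
    lambda: tf.image.adjust_brightness(image, delta=0.1))
```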
* | | Remove unused parent_name argument from _UnreadVariable.__init__. (A. Unique TensorFlower, 2018-09-06)
| | |   PiperOrigin-RevId: 211869673
* | | Enable unused "_Arg" nodes to be pruned from a function body. (Derek Murray, 2018-09-06)
| | |   Previously, because "_Arg" nodes are considered to be "stateful", these nodes were unconditionally included in the seed set of nodes for pruning a function body. Since an "_Arg" node has no visible side effect, we can safely prune these, which makes small projection functions (like `lambda x, y: y`) more efficient.
| | |   PiperOrigin-RevId: 211867380
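A hedged illustration of such a projection function written with tf.function; the wrapper and inputs are assumptions, and the pruning itself happens inside the runtime rather than in user code.

```python
import tensorflow as tf

# Hedged illustration: a projection function that ignores its first argument.
# After this change the unused "_Arg" node for `x` can be pruned from the
# function body by the runtime; nothing in user code changes. The tf.function
# wrapper and inputs here are illustrative assumptions.
project = tf.function(lambda x, y: y)

x = tf.random.uniform([1024, 1024])  # unused inside the function body
y = tf.constant([1.0, 2.0, 3.0])
print(project(x, y))                 # [1. 2. 3.]
```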
* | | Internal change. (A. Unique TensorFlower, 2018-09-06)
| | |   PiperOrigin-RevId: 211866647
* | |   Merge pull request #21883 from yongtang:08252018-dockerfiles-doc (TensorFlower Gardener, 2018-09-06)
|\ \ \
| | | |   PiperOrigin-RevId: 211865682
* | | | Set TF_CUDNN_VERSION to 7 in the Windows build. (Guangda Lai, 2018-09-06)
| | | |   This doesn't change the version at runtime, since configure.py strips the ".0" suffix, but it makes things cleaner and less confusing.
| | | |   PiperOrigin-RevId: 211860068
* | | | Remove unused and non-public get_signature_def* methods from saved_model/signature_def_utils (A. Unique TensorFlower, 2018-09-06)
| | | |   PiperOrigin-RevId: 211858972
* | | | Fix references to dynamic_is in generated autograph code. Remove TF import header from generated test examples. (A. Unique TensorFlower, 2018-09-06)
| | | |   PiperOrigin-RevId: 211858287
* | | | [TF:XLA] Bump open source llvm revision to r341551 (Sanjoy Das, 2018-09-06)
| | | |   PiperOrigin-RevId: 211857599
* | | | [tf.data] Naming parameterized tests to facilitate invoking them individually and using consistent style for existing test names. (Jiri Simsa, 2018-09-06)
| | | |   PiperOrigin-RevId: 211855926
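A hedged sketch of named parameterized tests in the style the message describes, using absl.testing.parameterized; the test class and cases are illustrative assumptions.

```python
from absl.testing import parameterized
import tensorflow as tf

# Hedged sketch: naming each parameterized case so it can be invoked on its own,
# e.g. MapDatasetTest.testMap_parallel. The class and cases are illustrative
# assumptions, not test code from the commit.
class MapDatasetTest(parameterized.TestCase):

  @parameterized.named_parameters(
      ("sequential", None),
      ("parallel", 2),
  )
  def testMap(self, num_parallel_calls):
    ds = tf.data.Dataset.range(5).map(lambda x: x * 2,
                                      num_parallel_calls=num_parallel_calls)
    self.assertEqual([0, 2, 4, 6, 8], [int(x) for x in ds])
```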
* | | | Do not have ProfilerHook output a timeline for the first step. (Reed Wanderman-Milne, 2018-09-06)
| | | |   This is because many ops take longer during the first step due to autotune. Instead, the first timeline is now output after N seconds/steps.
| | | |   PiperOrigin-RevId: 211854304
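A hedged usage sketch of ProfilerHook with a save interval; the toy graph, step count, and output directory are illustrative assumptions.

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

# Hedged sketch: a ProfilerHook that saves a timeline every 100 steps. With this
# change the first step itself is no longer traced; the first timeline appears
# after the save interval. The toy graph and paths are illustrative assumptions.
global_step = tf1.train.get_or_create_global_step()
train_op = tf1.assign_add(global_step, 1)

hook = tf1.train.ProfilerHook(save_steps=100, output_dir="/tmp/profile")
with tf1.train.MonitoredTrainingSession(hooks=[hook]) as sess:
    for _ in range(200):
        sess.run(train_op)
```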
* | | | [TF:XLA] Convert bfloat16_propagation_test and hlo_cse_test to use the HLO verifier. (Dimitris Vardoulakis, 2018-09-06)
| | | |   PiperOrigin-RevId: 211854249