* [tf.data] Minor refactoring of tf.data tests. (Jiri Simsa, 2018-09-27)
  PiperOrigin-RevId: 214781794
* [XLA] Allow the stream used for host-to-device transfers to be specified separately from the compute stream in ServiceRunOptions. (A. Unique TensorFlower, 2018-09-27)
  PiperOrigin-RevId: 214778267
* Update HasKwargsTest, ensuring that internal checks for tests involving functools.partial are triggered. (A. Unique TensorFlower, 2018-09-27)
  PiperOrigin-RevId: 214775194
* Update kernel evals to use new kernel signatures. (A. Unique TensorFlower, 2018-09-27)
  PiperOrigin-RevId: 214767788
* Update logic used in get_variable to populate custom_getter's kwargs. (A. Unique TensorFlower, 2018-09-27)
  The new implementation ensures that the 'constraints' kwarg is propagated by custom getters whose signature includes a keyword variable-length argument dictionary (**kwargs), as well as by those explicitly including the 'constraints' argument.
  PiperOrigin-RevId: 214767296
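The propagation rule described in that commit can be illustrated in plain Python. This is a conceptual sketch, not the actual get_variable implementation; the helper names are hypothetical. A kwarg is only forwarded to getters that can accept it, either via an explicit parameter or a **kwargs catch-all:

```python
import inspect

def accepts_kwarg(fn, name):
    """True if fn takes `name` explicitly or has a **kwargs catch-all."""
    sig = inspect.signature(fn)
    if name in sig.parameters:
        return True
    return any(p.kind is inspect.Parameter.VAR_KEYWORD
               for p in sig.parameters.values())

def call_getter(getter, var_name, **kwargs):
    # Forward only the kwargs this particular getter can accept.
    forwarded = {k: v for k, v in kwargs.items() if accepts_kwarg(getter, k)}
    return getter(var_name, **forwarded)
```

Under this rule, a getter declared as `def g(name, **kwargs)` receives 'constraints' just like one declared as `def g(name, constraints=None)`, while a getter without either simply does not get the kwarg.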
* Reduce the size of //tensorflow/tools/pip_package:simple_console_windows. (A. Unique TensorFlower, 2018-09-27)
  This change reduces the size of simple_console_windows's zip file from 1000027677 bytes to 47690474 bytes for a CPU build. For a GPU build, it avoids going over 4GB when multiple CUDA compute capabilities are specified. Fixes #22390.
  PiperOrigin-RevId: 214764423
* Update kernel evals to use new kernel signatures. (A. Unique TensorFlower, 2018-09-27)
  PiperOrigin-RevId: 214763814
* compat: Update forward compatibility horizon to 2018-09-27. (A. Unique TensorFlower, 2018-09-27)
  PiperOrigin-RevId: 214741709
* Add support for explicit fetches when creating Grappler items. (A. Unique TensorFlower, 2018-09-27)
  PiperOrigin-RevId: 214732243
* Fix documentation of ready_for_local_init_op in tf.Supervisor, which mentioned an incorrect default value. (A. Unique TensorFlower, 2018-09-27)
  PiperOrigin-RevId: 214731772
* Merge pull request #22076 from Intel-tensorflow:feature/daoxin/slice. (TensorFlower Gardener, 2018-09-26)
  PiperOrigin-RevId: 214726180
* Merge pull request #19894 from manipopopo:fix_quantize. (TensorFlower Gardener, 2018-09-26)
  PiperOrigin-RevId: 214724610
* Enable constant folding for device memory tensors. (Tong Shen, 2018-09-26)
  PiperOrigin-RevId: 214723970
* Automated rollback of commit e00d7744dbab5c73e4d8ffa8a7d361f7b2dcefff. (Rohan Jain, 2018-09-26)
  PiperOrigin-RevId: 214721004
* Fix custom getter handling in tpu.rewrite() and friends. (A. Unique TensorFlower, 2018-09-26)
  The old code saved the existing custom getter and then overwrote it, so the previous custom getter was never called inside "computation". It now creates a new custom getter that calls the previous one.
  PiperOrigin-RevId: 214715720
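The "new custom getter that calls the previous one" pattern is easy to get wrong, as the commit above shows. A minimal plain-Python sketch of the chaining idea (conceptual only, not the actual TPU rewrite code; the function names are hypothetical):

```python
def make_chained_getter(previous_getter, new_getter):
    """Return a getter that applies new_getter on top of previous_getter,
    instead of silently replacing it."""
    if previous_getter is None:
        return new_getter

    def chained(getter, name, *args, **kwargs):
        # Let the previous custom getter wrap the base getter first,
        # then apply the new custom getter on top of that chain.
        def wrapped_base(inner_name, *a, **kw):
            return previous_getter(getter, inner_name, *a, **kw)
        return new_getter(wrapped_base, name, *args, **kwargs)

    return chained
```

With this scheme both getters run: the new getter sees the result of the previous getter rather than bypassing it.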
* Adding per-table load and retrieve ops and additional enqueue operations, plus other refactoring. (Daryl Ng, 2018-09-26)
  PiperOrigin-RevId: 214715083
* Add Mirrored distribution strategy support for new metrics with Keras and Estimator. (Pavithra Vijay, 2018-09-26)
  Add support for stateful metrics in model_to_estimator.
  PiperOrigin-RevId: 214714322
* Fixes bug in tf2xla NMS implementation. (Tayo Oguntebi, 2018-09-26)
  PiperOrigin-RevId: 214711381
* Rename TocoConverter to TFLiteConverter. (Nupur Garg, 2018-09-26)
  PiperOrigin-RevId: 214710175
* Fix the eval hook to run the correct number of steps when using TPU strategy. (Sourabh Bajaj, 2018-09-26)
  PiperOrigin-RevId: 214709465
* Automated rollback of commit 82af048bc8c3c044c98a27b1c4c27bb62d4e4a14. (Nupur Garg, 2018-09-26)
  PiperOrigin-RevId: 214705311
* Go: Update generated wrapper functions for TensorFlow ops. (A. Unique TensorFlower, 2018-09-26)
  PiperOrigin-RevId: 214704902
* Refactor build deps by making :framework depend on :feature_util, so the same source dependency is not used twice. (A. Unique TensorFlower, 2018-09-26)
  PiperOrigin-RevId: 214704620
* Skip SymbolicGradientOp when doing constant folding in control flow functionalization. (Tong Shen, 2018-09-26)
  To evaluate a SymbolicGradient op in constant folding, we would need to construct a Device object and attach it to the FunctionLibraryRuntime. In the graph rewriting pass, no Device object has been created yet; it is only created in XlaCompiler.
  PiperOrigin-RevId: 214702943
* Fixed a bug that slows TPU training. (Jianwei Xie, 2018-09-26)
  PiperOrigin-RevId: 214702243
* [TF:XLA] Bump open source abseil revision to e291c279e458761e77a69b09b129d3d1e81f1e80. (Sanjoy Das, 2018-09-26)
  PiperOrigin-RevId: 214702169
* Update ops-related pbtxt files. (A. Unique TensorFlower, 2018-09-26)
  PiperOrigin-RevId: 214701926
* Add xlogy and xdivy ops. (A. Unique TensorFlower, 2018-09-26)
  PiperOrigin-RevId: 214700693
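xlogy and xdivy follow the standard convention: xlogy(x, y) is x * log(y) and xdivy(x, y) is x / y, except that both are defined to return 0 when x == 0, even where log(y) or 1/y would be undefined. A minimal pure-Python sketch of these scalar semantics (not the TF kernels themselves):

```python
import math

def xlogy(x, y):
    """x * log(y), defined to be 0 when x == 0 (even if y == 0)."""
    return 0.0 if x == 0 else x * math.log(y)

def xdivy(x, y):
    """x / y, defined to be 0 when x == 0 (even if y == 0)."""
    return 0.0 if x == 0 else x / y
```

The zero-at-x==0 convention is what makes expressions like entropy terms p * log(q) safe to evaluate without masking out zero probabilities first.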
* Removing _PerDeviceGenerator and MultiDeviceIterator from contrib now that they have moved to core; this was overlooked in the CL that moved them. (Rohan Jain, 2018-09-26)
  PiperOrigin-RevId: 214699544
* Internal change only. (A. Unique TensorFlower, 2018-09-26)
  PiperOrigin-RevId: 214698827
* Automated rollback of commit 844074c2a8e61b744c3de2718e1c9ea7b1d2edc2. (A. Unique TensorFlower, 2018-09-26)
  PiperOrigin-RevId: 214693201
* Extract Conv2D dimension parsing and validation into helper functions. (Eugene Zhulenev, 2018-09-26)
  PiperOrigin-RevId: 214691838
* Add densenet to the examples_pip. (Akshay Modi, 2018-09-26)
  PiperOrigin-RevId: 214685427
* Added a C utility to create a ServerDef proto from a text representation. (A. Unique TensorFlower, 2018-09-26)
  PiperOrigin-RevId: 214681193
* Preprocess the protobuf input for parse_tensor_op. (Mihai Maruseac, 2018-09-26)
  PiperOrigin-RevId: 214680988
* Deprecate tf.manip endpoints instead of endpoints under tf.*. This change is in accordance with https://github.com/tensorflow/community/pull/16. (Anna R, 2018-09-26)
  PiperOrigin-RevId: 214680285
* Fix potential use-after-free in the training ops. (Derek Murray, 2018-09-26)
  The recent fix to a resource leak introduced a potential use-after-free: it released a reference on a Var resource before returning a mutex* borrowed from that resource. The mutex* could therefore become garbage if the refcount concurrently dropped to zero (for example, if a concurrent Session::Reset() were issued). This change modifies the mutex-accessing utilities to prolong the lifetime of the corresponding Var* beyond the lifetime of the returned mutex*.
  PiperOrigin-RevId: 214678937
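The lifetime hazard in that commit is a general C++ pattern: a raw mutex* borrowed from a refcounted object is only valid while something holds a reference to the object. A conceptual sketch in plain C++ (using shared_ptr for refcounting; this is not the actual TensorFlow code, which uses its own Var refcounting):

```cpp
#include <cassert>
#include <memory>
#include <mutex>
#include <utility>

struct Var {
    std::mutex mu;
    int value = 0;
};

// Unsafe pattern (the bug): returns mu but drops the reference, so the
// Var can be destroyed while the caller still uses the mutex.
//   std::mutex* GetMutexUnsafe(std::shared_ptr<Var> var) { return &var->mu; }

// Fixed pattern: return the owning reference together with the mutex,
// prolonging the Var's lifetime beyond the lifetime of the mutex*.
std::pair<std::shared_ptr<Var>, std::mutex*>
GetMutexSafe(std::shared_ptr<Var> var) {
    std::mutex* mu = &var->mu;
    return {std::move(var), mu};
}
```

The caller can then drop its own reference and still lock the mutex safely, because the returned pair keeps the Var alive.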
* Update hooks for distributed jobs with a master node, to ensure that summaries are written at the correct interval for jobs with long-running evaluations. (A. Unique TensorFlower, 2018-09-26)
  PiperOrigin-RevId: 214678483
* Fix Optimizer "No gradients provided" error messages to report variables instead of internal processor objects. (Allen Lavoie, 2018-09-26)
  PiperOrigin-RevId: 214678470
* [TF:XLA] Fix XLA lowering of TF BroadcastTo operator. (Peter Hawkins, 2018-09-26)
  PiperOrigin-RevId: 214675055
* Rename TFLite Eager delegate -> Flex delegate. (Yu-Cheng Ling, 2018-09-26)
  PiperOrigin-RevId: 214674717
* [XLA] Remove use of DeconstructTuple from MakeFakeArgumentsOrDie. (Peter Hawkins, 2018-09-26)
  DeconstructTuple doesn't support nested tuples yet, so MakeFakeArgumentsOrDie failed if any of the arguments were tuple-shaped. We don't really need it here anyway; just build the arguments one by one.
  PiperOrigin-RevId: 214671374
* Add experimental asynchronous checkpoint hook. (Russell Power, 2018-09-26)
  This triggers checkpoints in a separate thread while allowing training to continue. It can effectively parallelize checkpointing and training for workloads like TPUEstimator, where the weights are only updated after a number of device iterations.
  PiperOrigin-RevId: 214670991
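The core idea of the asynchronous checkpoint hook above can be sketched in a few lines of plain Python (conceptual only, not the actual hook; the class name is hypothetical): snapshot the state, hand the write to a background thread, and let the training loop continue.

```python
import copy
import threading

class AsyncCheckpointer:
    """Sketch of async checkpointing: at most one save in flight at a time."""

    def __init__(self, write_fn):
        self._write_fn = write_fn  # e.g. serializes the snapshot to disk
        self._thread = None

    def save(self, state):
        self.join()                      # wait for any previous checkpoint
        snapshot = copy.deepcopy(state)  # capture weights at this step
        self._thread = threading.Thread(target=self._write_fn,
                                        args=(snapshot,))
        self._thread.start()             # training can continue immediately

    def join(self):
        if self._thread is not None:
            self._thread.join()
            self._thread = None
```

The deep copy is what makes it safe for training to mutate the weights while the previous checkpoint is still being written.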
* Misc. micro-optimizations in Grappler optimizers. (A. Unique TensorFlower, 2018-09-26)
  Make shape inference lazy in optimizers that may not trigger.
  PiperOrigin-RevId: 214669034
* Kernel signature reworking: update the DepthConcatenation kernel. (A. Unique TensorFlower, 2018-09-26)
  PiperOrigin-RevId: 214668695
* Go: Update generated wrapper functions for TensorFlow ops. (A. Unique TensorFlower, 2018-09-26)
  PiperOrigin-RevId: 214668499
* Add an experimental Java API to allow half precision for FP32 calculation. (A. Unique TensorFlower, 2018-09-26)
  PiperOrigin-RevId: 214668283
* Specify preferred_dtype=self.dtype when converting Distribution methods' sample-like args to Tensors. (Brian Patton, 2018-09-26)
  After this change, you could conceivably write tfd.Normal(0., 1.).log_prob(1). The TF core distributions can't use tfp's dtype_util.common_dtype, so you can't yet write tfd.Normal(0, 1). Works around an eager bug that loses precision in the presence of tf.convert_to_tensor(0.5, preferred_dtype=tf.int32).
  PiperOrigin-RevId: 214666222
* Quick fix for allowed symbols in tf.contrib.estimator. (Katherine Wu, 2018-09-26)
  PiperOrigin-RevId: 214662826
* [TF] Add new internal ops _VarHandlesOp and _ReadVariablesOp. (Peter Hawkins, 2018-09-26)
  The purpose of these ops is to fix a latency problem observed in an inference benchmark. Often an inference step starts by reading the values of many (hundreds of) weights. For a resource variable, this requires a VarHandleOp and a ReadVariableOp per variable, and running hundreds of trivial ops can add hundreds of microseconds of latency to the critical path of an inference step. The inter-op latency of the executor can be hundreds of nanoseconds, which rapidly adds up. This change introduces two fused ops, _VarHandlesOp and _ReadVariablesOp, that allow many variables to be read in a pair of larger ops rather than many tiny ops.
  PiperOrigin-RevId: 214662338
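The latency argument in that commit is about amortizing fixed per-op dispatch overhead. A toy plain-Python model of the idea (purely illustrative, not the actual TF executor or ops): N individual reads cost N dispatches, while one fused read costs a single dispatch for the same results.

```python
class Executor:
    """Toy model: each op dispatch carries a fixed inter-op latency cost."""

    def __init__(self):
        self.ops_dispatched = 0

    def read_variable(self, store, name):
        self.ops_dispatched += 1   # one op per variable: N dispatches total
        return store[name]

    def read_variables(self, store, names):
        self.ops_dispatched += 1   # fused: one dispatch covers all reads
        return [store[name] for name in names]
```

Reading 100 weights one by one costs 100 dispatches; the fused form returns identical values at the cost of one, which is the trade the fused _ReadVariablesOp makes.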