* update doc (A. Unique TensorFlower, 2018-05-15)
  PiperOrigin-RevId: 196670274

* Small polishing changes in stream executor, no functional changes. (A. Unique TensorFlower, 2018-05-15)
  PiperOrigin-RevId: 196665609

* internal change (A. Unique TensorFlower, 2018-05-15)
  PiperOrigin-RevId: 196640024

* Reland improve fusion logic of (a dot b) * alpha (A. Unique TensorFlower, 2018-05-15)
  The previous fusion approach didn't work because a multiplication by a scalar value is changed into an explicit broadcast. Another issue fixed in this CL is retrieving the constant value from the literal: this depends on the PrimitiveType, whereas before we always assumed it to be double. Also, when checking ImplementedAsGemm() we should not call it recursively, but instead perform just the check related to kDot. Finally, add an execution test and adjust the fusion logic test.
  The fix for the issue that caused the revert is that we check earlier that consumer->operand_count() is 2. Also, we fix the call to Get() to pass {} instead of {0}, and we handle an output fusion node in GemmThunk to extract the dimension numbers from the dot operation.
  PiperOrigin-RevId: 196631031

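Note: for context, the pattern this fusion targets is a matrix product whose result is scaled by a scalar. A minimal sketch of that pattern at the TensorFlow level (the names `a`, `b`, and `alpha` and the shapes are illustrative, not taken from the change itself):

```python
import tensorflow as tf

# The (a dot b) * alpha pattern: under XLA the scalar multiply becomes an
# explicit broadcast, which is what the fusion logic described above has to
# recognize before it can fold the scaling into the GEMM call.
a = tf.random_normal([128, 64])
b = tf.random_normal([64, 32])
alpha = tf.constant(0.5)
c = tf.matmul(a, b) * alpha  # candidate for output fusion into a single GEMM
```
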
* [TF:XLA] Scheduling test which demonstrates that we are ignoring the memory needed by subcomputations. (Dimitris Vardoulakis, 2018-05-14)
  PiperOrigin-RevId: 196618347

* Added type check to feature column keys, so that users will get meaningful error messages in situations like #19219. (Mustafa Ispir, 2018-05-14)
  PiperOrigin-RevId: 196616638

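Note: as a minimal sketch of what the check guards against (the column definition below is illustrative, not from the change): feature column keys are expected to be strings naming entries in the features dict, and passing something else, such as a tensor, now produces a clear type error instead of a confusing failure later.

```python
import tensorflow as tf

# Feature column keys are plain strings naming entries in the features dict.
age = tf.feature_column.numeric_column('age')      # OK: key is a string

# Passing anything other than a string as the key (for example a tensor)
# is the misuse the new check reports with a meaningful error message.
# tf.feature_column.numeric_column(tf.constant('age'))  # now raises a type error
```
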
* Partial update of tf.keras to the Keras 2.1.6 API. (Pavithra Vijay, 2018-05-14)
  Changes included are:
  - Fix `batch_dot` when `axes=None`
  - Add axis=-1 as an argument to keras.backend.softmax
  - Fix ctc_batch_cost() error when batch_size = 1
  - Print previous best in ModelCheckpoint callback
  - Fix ReduceLROnPlateau callback
  - Extend RemoteMonitor to send data as application/json
  - Fix default dilation rate value in 2D separable conv.
  - Fix for MobileNet model with undefined shape
  - Disable require_flatten in nasnet & add an error message for undefined shape.
  - Improve tests by designating dtype of sample data
  - Multi_gpu_model supporting legacy/fullCPU/fullGPU
  PiperOrigin-RevId: 196615376

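Note: one of the items above, the new axis argument on keras.backend.softmax, can be exercised as follows. This is a minimal sketch assuming the argument behaves like the axis argument of tf.nn.softmax; the input values are illustrative.

```python
import tensorflow as tf

K = tf.keras.backend

x = tf.constant([[1.0, 2.0, 3.0],
                 [1.0, 1.0, 1.0]])
y_last = K.softmax(x)           # softmax over the last axis (the default)
y_cols = K.softmax(x, axis=0)   # the new axis argument selects another axis
```
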
* Function should inherit device information from the caller site. (Youlong Cheng, 2018-05-14)
  PiperOrigin-RevId: 196614376

* Update SCALED mode to use the full quantized range of -128..127 when possible. (A. Unique TensorFlower, 2018-05-14)
  PiperOrigin-RevId: 196606455

* [XLA] Move more comparison functions to non-test library. (Chris Leary, 2018-05-14)
  PiperOrigin-RevId: 196605347

* Move model_to_estimator utility into Estimator from Keras. (Michael Case, 2018-05-14)
  Working on untangling TF/Estimator deps. We would like to get to a state where Estimator depends on Keras and not vice versa.
  PiperOrigin-RevId: 196605024

* Fix a bug in HloInstruction::ImplicitlyBroadcastsOperand where operands with the same dimension but different types are not considered broadcast. (Yunxing Dai, 2018-05-14)
  PiperOrigin-RevId: 196603348

* Adds CsvDataset, which both reads and parses files. (Rachel Lim, 2018-05-14)
  Example usage:
    dataset = tf.contrib.data.CsvDataset(filenames, record_defaults=record_defaults, **kwargs)
  Motivation: Fusing reading and parsing is more performant and correct than the previous canonical CSV parsing flow (`dataset = tf.data.TextLineDataset(filenames).map(lambda l: tf.decode_csv(l, **kwargs))`).
  Closes #19077.
  PiperOrigin-RevId: 196601381

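Note: a slightly fuller sketch of the usage above. The file name, column count, and default values are assumptions for illustration, not part of the change.

```python
import tensorflow as tf

filenames = ['data.csv']          # illustrative file name
record_defaults = [[0.0], [0.0]]  # one default per CSV column (assumed two float columns)

# Fused read-and-parse dataset introduced by this change.
dataset = tf.contrib.data.CsvDataset(filenames, record_defaults=record_defaults)

# The read-then-parse pipeline it is meant to replace:
legacy = tf.data.TextLineDataset(filenames).map(
    lambda line: tf.decode_csv(line, record_defaults=record_defaults))
```
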
* Disable LinearOperatorKroneckerTest.test_solve_{with_broadcast} temporarily. (A. Unique TensorFlower, 2018-05-14)
  PiperOrigin-RevId: 196601310

* [tf.data] Add optional `args` argument to `Dataset.from_generator()`. (Derek Murray, 2018-05-14)
  The new argument allows you to parameterize the generator with the value of a tf.Tensor, enabling `Dataset.from_generator()` to be initialized from a placeholder or used in a nested expression (such as `flat_map()` or `parallel_interleave()`). For example:

  ```python
  def generator(n):
    for _ in range(n):
      yield n

  # Define a generator based on a placeholder.
  placeholder = tf.placeholder(tf.int64, shape=[])
  dataset = tf.data.Dataset.from_generator(generator, tf.int64, args=(placeholder,))

  # Define a generator based on the value of a nested dataset element.
  dataset = tf.data.Dataset.range(10).flat_map(
      lambda i: tf.data.Dataset.from_generator(generator, tf.int64, args=(i,)))
  ```

  Fixes #19269. Partially addresses issue #13101.
  PiperOrigin-RevId: 196598650

* Introduce LossScalingOptimizer for mixed precision training. (James Qin, 2018-05-14)
  PiperOrigin-RevId: 196597196

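Note: the log gives only the title, so as background, loss scaling multiplies the loss by a constant before computing gradients and divides the gradients by the same constant before applying them, which keeps small fp16 gradients from underflowing. A minimal sketch of the idea with stock TF 1.x optimizers; this is not the LossScalingOptimizer API itself, whose interface is not shown here, and the model is a placeholder.

```python
import tensorflow as tf

loss_scale = 128.0
opt = tf.train.GradientDescentOptimizer(learning_rate=0.01)

# Illustrative model and loss for the sketch; real mixed-precision training
# would keep fp16 activations with fp32 master weights.
x = tf.placeholder(tf.float32, shape=[None, 10])
w = tf.get_variable('w', shape=[10, 1])
loss = tf.reduce_mean(tf.matmul(x, w))

# Scale the loss up, compute gradients, then scale them back down before applying.
grads_and_vars = opt.compute_gradients(loss * loss_scale)
unscaled = [(g / loss_scale, v) for g, v in grads_and_vars if g is not None]
train_op = opt.apply_gradients(unscaled)
```
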
* Add an option to execute eval on CPU, regardless of whether training runs on TPU. (A. Unique TensorFlower, 2018-05-14)
  This lets users benefit from TPU training while avoiding having to port complex eval metrics functions to TPU.
  PiperOrigin-RevId: 196587755

* Refactoring: Make OpResolver return const pointer. (Yu-Cheng Ling, 2018-05-14)
  PiperOrigin-RevId: 196587227

* Automated g4 rollback of changelist 196565296 (Pavithra Vijay, 2018-05-14)
  PiperOrigin-RevId: 196586601

* ClangTidy - Readability cleanup: code-findings-fixes. (A. Unique TensorFlower, 2018-05-14)
  - unused using-declarations
  - redundant string conversions
  - C-style casts
  - redundant get() call on smart pointer
  - the 'empty' method should be used to check for emptiness instead of 'size'
  PiperOrigin-RevId: 196585984

* Make sure that variables aren't created as partition variables since only non-scalar partition variables are supported. (Suharsh Sivakumar, 2018-05-14)
  PiperOrigin-RevId: 196584749

* Fix bug where custom layers could crash. (Reed Wanderman-Milne, 2018-05-14)
  Layer.add_weight would crash when called without a dtype or initializer.
  PiperOrigin-RevId: 196583182

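Note: the crashing case is a custom layer calling add_weight with only a name and shape. A minimal sketch of such a layer; the layer itself is illustrative and not taken from the fix.

```python
import tensorflow as tf

class ScaleLayer(tf.keras.layers.Layer):
  """Illustrative custom layer: multiplies its input by a learned scalar."""

  def build(self, input_shape):
    # No dtype or initializer passed: this is the call pattern that used to crash.
    self.scale = self.add_weight(name='scale', shape=())
    super(ScaleLayer, self).build(input_shape)

  def call(self, inputs):
    return inputs * self.scale
```
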
* Fix functional.While(), functional.For(rewrite_with_while) (James Qin, 2018-05-14)
  When executing on GPU, synchronously copy cond result from device to host.
  PiperOrigin-RevId: 196580820

* Go: Update generated wrapper functions for TensorFlow ops. (A. Unique TensorFlower, 2018-05-14)
  PiperOrigin-RevId: 196580619

* Add ExplicitShapes as a new shape inference function for Ops with multiple outputs, each of which is explicitly declared. (A. Unique TensorFlower, 2018-05-14)
  PiperOrigin-RevId: 196579920

* Remove CuDNNRNN timing test. (Pavithra Vijay, 2018-05-14)
  PiperOrigin-RevId: 196578043

* Fix copy functions of MutableOpResolver (Yu-Cheng Ling, 2018-05-14)
  PiperOrigin-RevId: 196577314

* Used aligned allocation for vector cache. (Shashi Shekhar, 2018-05-14)
  PiperOrigin-RevId: 196576497

* Update ops-related pbtxt files. (A. Unique TensorFlower, 2018-05-14)
  PiperOrigin-RevId: 196576189

* Internal change. (A. Unique TensorFlower, 2018-05-14)
  PiperOrigin-RevId: 196575483

* Refactoring: Remove lite/tools:mutable_op_resolver dependency. (Yu-Cheng Ling, 2018-05-14)
  PiperOrigin-RevId: 196575387

* Stricter analysis for functional conditional generation (A. Unique TensorFlower, 2018-05-14)
  PiperOrigin-RevId: 196573938

* Do shape validation in ScatterNd kernel, not just the shape inference function. (Alexandre Passos, 2018-05-14)
  Fixes #18648.
  PiperOrigin-RevId: 196572262

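Note: for reference, tf.scatter_nd requires that indices, updates, and the output shape agree; the shapes below are an illustrative consistent example of the constraint the kernel now also validates at runtime.

```python
import tensorflow as tf

# Scatter 3 scalar updates into a length-8 vector.
indices = tf.constant([[0], [3], [7]])      # shape [3, 1]
updates = tf.constant([10.0, 20.0, 30.0])   # shape [3]
result = tf.scatter_nd(indices, updates, shape=[8])
# With mismatched shapes (e.g. 3 indices but 4 updates), the kernel now
# rejects the inputs even when the shapes are only known at runtime.
```
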
* Fail gracefully with a helpful error message when provided with conflicting visible_devices_list. (Asim Shankar, 2018-05-14)
  See #19083, #18861.
  More generally, this change avoids assertion failures (that will bring the whole process down) on a few code-paths that can be triggered by user input.
  PiperOrigin-RevId: 196572013

* Internal change. (A. Unique TensorFlower, 2018-05-14)
  PiperOrigin-RevId: 196570742

* Make CollectiveParamReducerLocal::InitInstanceSharedParams non-blocking. (A. Unique TensorFlower, 2018-05-14)
  PiperOrigin-RevId: 196570011

* Automated g4 rollback of changelist 196456687 (Gunhan Gulsoy, 2018-05-14)
  PiperOrigin-RevId: 196567964

* Add score filtering to tf.image.non_max_suppression. (A. Unique TensorFlower, 2018-05-14)
  PiperOrigin-RevId: 196567928

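Note: a minimal sketch of score filtering combined with NMS. The score_threshold parameter name is how the feature is exposed in later releases and is an assumption here, since the log entry does not spell out the argument; the boxes and scores are made up.

```python
import tensorflow as tf

boxes = tf.constant([[0.0, 0.0, 1.0, 1.0],
                     [0.0, 0.1, 1.0, 1.1],
                     [0.0, 2.0, 1.0, 3.0]])
scores = tf.constant([0.9, 0.75, 0.1])

# Keep at most 10 boxes, suppress overlaps above IoU 0.5, and drop boxes
# whose score falls below 0.5 before running NMS.
selected = tf.image.non_max_suppression(
    boxes, scores, max_output_size=10,
    iou_threshold=0.5, score_threshold=0.5)
```
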
* Update the eager programmer's guide to reflect the fact that "==" is not implemented in the natural way for the Tensor class. (Akshay Agrawal, 2018-05-14)
  PiperOrigin-RevId: 196566940

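Note: as background for the doc change, in TF 1.x "==" on Tensor objects falls back to Python object identity rather than elementwise equality, so element comparison goes through tf.equal. A small illustrative snippet, not taken from the guide itself:

```python
import tensorflow as tf
tf.enable_eager_execution()

a = tf.constant([1, 2, 3])
b = tf.constant([1, 2, 3])

print(a == b)           # False: Tensor does not override == for elementwise comparison
print(tf.equal(a, b))   # elementwise boolean tensor [True, True, True]
```
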
* ReverseDFS scheduler reverses the heuristics used in DFSScheduler. (Yunxing Dai, 2018-05-14)
  Also fixes hlo_schedule_test to remove the expected order on unrelated operations.
  PiperOrigin-RevId: 196565651

* Disable flaky cudnn_recurrent test (Gunhan Gulsoy, 2018-05-14)
  PiperOrigin-RevId: 196565296

* Reenable virtual gpu test, and decrease the number of testing rounds. (Guangda Lai, 2018-05-14)
  PiperOrigin-RevId: 196565153

* [XLA] Ergonomic improvements to --xla_hlo_profile. (Justin Lebar, 2018-05-14)
  - Don't display ops with 0 optimal seconds and 0 actual cycles. These are ops that were expected to be free and were actually free.
  - Fix HloCostAnalysis to mark parameters, constants, and get-tuple-element as expected-to-be-free per the definition above.
  - Allow optimal-seconds < 0 to indicate "I don't know". Use this for custom calls, and then hide such ops from the "seconds above the optimum" table.
  - Don't display "<none>" and "<unknown>" -- instead, just display the empty string. Less visual noise.
  - Instead of showing ~5 ops per category in the categories tables, show everything. This isn't so noisy now that we're hiding "free" ops, and it makes finding optimization opportunities much easier.
  PiperOrigin-RevId: 196564177

* Add If op rewriter. (Jacques Pienaar, 2018-05-14)
  - Add attribute to If op to indicate if lowering to switch-merge form is needed.
  - Add initial version of If op rewriter that transforms an If op into switch/merge nodes (as would have been constructed via tf.cond) if the If op has the lowering attribute set.
    - The pass is not ready for general use and, for example, does not support reference data types.
  PiperOrigin-RevId: 196563421

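Note: for context, the switch/merge form the rewriter lowers to is the same dataflow construction produced by tf.cond. A minimal graph-mode example of that construction; the predicate and branch functions are purely illustrative.

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[])
pred = x > 0.0

# tf.cond builds the switch/merge dataflow nodes that the If op rewriter
# targets when the lowering attribute is set.
result = tf.cond(pred,
                 true_fn=lambda: x * 2.0,
                 false_fn=lambda: -x)
```
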
* Reserves 'context' key in TPUEstimator params dict. (Jianwei Xie, 2018-05-14)
  PiperOrigin-RevId: 196561620

* Add CheckpointInputPipelineHook to the API docs. (Saurabh Saxena, 2018-05-14)
  PiperOrigin-RevId: 196560221

* Added support for strided slicing of symbolic shapes (Benoit Steiner, 2018-05-14)
  PiperOrigin-RevId: 196558466

* Resolve inlined function input/output types from GrapplerFunctionItem. (A. Unique TensorFlower, 2018-05-14)
  Remove duplicated code to resolve type from attributes.
  PiperOrigin-RevId: 196558061

* Updated speech commands example to use new dataset (Pete Warden, 2018-05-14)
  PiperOrigin-RevId: 196557132

* Various ClangTidy-inspired fixes. (A. Unique TensorFlower, 2018-05-14)
  PiperOrigin-RevId: 196556727