path: root/tensorflow
...
* Throw an error when a variable target is evaluated in GradientTape. (Tamara Norman, 2018-10-09)
  PiperOrigin-RevId: 216368178
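The check this commit describes can be sketched in plain Python. This is a hedged illustration, not the actual TensorFlow source: `Variable` and `gradient` below are stand-ins for `tf.Variable` and `GradientTape.gradient`.

```python
class Variable:
    """Stand-in for tf.Variable (hypothetical, for illustration only)."""
    def __init__(self, value):
        self.value = value

def gradient(target, sources):
    """Stand-in for GradientTape.gradient with the validation described above."""
    if isinstance(target, Variable):
        # Raise instead of silently evaluating a variable target.
        raise ValueError(
            "GradientTape.gradient is not supported for variable targets.")
    # A real tape would walk its recorded operations here.
    return [0.0 for _ in sources]

v = Variable(3.0)
try:
    gradient(v, [v])
except ValueError as e:
    print("raised:", e)
```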
* Internal change (Jared Duke, 2018-10-09)
  PiperOrigin-RevId: 216367867
* Avoid extra calls to set_random_seed, as it is already called in TensorFlowTestCase. (Gunhan Gulsoy, 2018-10-09)
  PiperOrigin-RevId: 216363450
* Allow a mixture of V1 and V2 feature column usage in canned estimators. (Rohan Jain, 2018-10-09)
  This is required for TF Hub use cases where users might send new feature columns to old model code. Implemented by making V2 feature columns support the V1 API. This is needed temporarily and will definitely be removed by TF 2.0, possibly earlier depending on what guarantees TF Hub provides. The only case not allowed is mixing V2 shared embedding columns with V1 feature columns: V2 shared feature columns depend on a SharedEmbeddingState manager that would have to be passed into the various APIs, and there was no clean way to make that work. Mixing V2 feature columns with V1 shared embedding columns is fine, as are all other combinations.
  PiperOrigin-RevId: 216359041
* Fix Toco when exporting graphs with strings. (A. Unique TensorFlower, 2018-10-09)
  If the graph contains a non-constant array of strings, export fails because the array's size can't be estimated.
  PiperOrigin-RevId: 216356162
* Removed unused load statements from the core BUILD. (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216354906
* Automated rollback of commit 375c109659d2d0e6265447dffdeb460693b3cccf (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216350134
* compat: Update forward compatibility horizon to 2018-10-09 (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216323343
* Enable support for PRED values in KeyValueSort for the HloEvaluator. (Adrian Kuegel, 2018-10-09)
  PiperOrigin-RevId: 216315110
* Automated rollback of commit 5f308cb408eb46ec9af0546be6b9ae1d5166b185 (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216309111
* Refactor CalculateOutputSize() from a VirtualScheduler protected member function into utils; refactor EstimateSize() from memory_optimizer.cc into utils; small changes to improve readability. (Peter Ma, 2018-10-08)
  PiperOrigin-RevId: 216307257
* Add Floor_mod to schema. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216303340
* Automated rollback of commit 07df147ab20c4a5329148e5fb5f7f6b187cb73a4 (Reed Wanderman-Milne, 2018-10-08)
  PiperOrigin-RevId: 216299809
* [XLA] Introduce input/output alias config. (Yunxing Dai, 2018-10-08)
  - This CL introduces an input/output alias config in the HLO module that any HLO pass can configure. Once the alias config is set, each backend must follow the contract at execution time to make sure the input and output are indeed aliased.
  - Copy insertion, buffer assignment, and alias analysis have been updated to correctly honor the config and avoid any possible liveness interference.
  PiperOrigin-RevId: 216299501
* Add a tracing::ScopedActivity event to track the duration of a Session::Run() call for better xprof tracing. (A. Unique TensorFlower, 2018-10-08)
  Also annotate synchronous op execution with the session-run id (or step_id) as metadata, leveraging the support introduced in cl/215985561. This should make it possible to highlight the duration of a Session::Run and all the ops that ran in it, for visualizing latency regressions in the case of CPU inference.
  PiperOrigin-RevId: 216284682
* Register int64 SUM GPU kernel. (James Qin, 2018-10-08)
  PiperOrigin-RevId: 216280913
* Fix the seeding for `Dataset.shuffle(..., reshuffle_each_iteration=False)`. (Derek Murray, 2018-10-08)
  Previously, we were passing the first (graph-level) seed for both the graph-level and op-level seeds when creating a C++ dataset. This change passes the op-level seed to the appropriate point, and adds a test for the behavior with graph-but-not-op-level seeds.
  PiperOrigin-RevId: 216280641
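The bug described in that last entry can be sketched in plain Python. This is a hedged illustration, not tf.data internals: the function names and the toy "shuffle" are hypothetical, and the point is only how the (graph-level, op-level) seed pair is plumbed.

```python
def make_shuffle_seeds_buggy(graph_seed, op_seed):
    # The bug: the graph-level seed was passed in *both* positions,
    # so the op-level seed was silently ignored.
    return (graph_seed, graph_seed)

def make_shuffle_seeds_fixed(graph_seed, op_seed):
    # The fix: forward the op-level seed to its proper slot.
    return (graph_seed, op_seed)

def shuffle(items, seeds):
    # Toy stand-in for the shuffle op: "shuffle" by rotating the list
    # by a combination of the two seeds. The real shuffle algorithm is
    # irrelevant to the point being illustrated.
    k = (seeds[0] * 31 + seeds[1]) % len(items)
    return items[k:] + items[:k]

data = list(range(10))
# With the fix, different op-level seeds produce different orders;
# with the bug, they produced identical orders.
a1 = shuffle(data, make_shuffle_seeds_fixed(42, 1))
a2 = shuffle(data, make_shuffle_seeds_fixed(42, 2))
```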
* Consolidate device parameter arguments into a shared DeviceInfo struct. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216280197
* Remove deprecations for some of the endpoints in ApiDef files. (Anna R, 2018-10-08)
  These changes are made according to https://github.com/tensorflow/community/pull/16. A few symbols not mentioned in the doc are kept deprecated:
  - tf.diag - it seems best to keep it next to tf.linalg.diag, so the two are easy to compare when deciding which one to use; the plan is to rename tf.diag to tf.tensor_diag.
  - tf.is_nan - similar to tf.is_inf, tf.is_finite, and tf.is_numeric_tensor, which are all being deprecated and replaced by symbols in tf.debugging.
  - tf.string_to_number - other string endpoints in the root namespace are being deprecated, e.g. tf.substr and tf.string_join.
  - tf.dequantize - all quantization ops should be under tf.quantize; this one was probably missed.
  - tf.check_numerics - similar to other debugging ops that are moving to tf.debugging.
  - tf.squared_difference - moved to the tf.math namespace, and not popular enough (compared to math ops such as tf.add) to justify keeping a root endpoint.
  - tf.decode_raw - similar to other ops, such as tf.decode_csv, that are moving to tf.io.
  PiperOrigin-RevId: 216278010
* Avoid calling get_default_graph() during tf.enable_eager_execution(). (Gunhan Gulsoy, 2018-10-08)
  PiperOrigin-RevId: 216270497
* Internal change (Jared Duke, 2018-10-08)
  PiperOrigin-RevId: 216270385
* Convert TensorFlow's aws dependency to the new third-party import method. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216265275
* [XLA] Make an overly-specific ShapeUtil predicate a little more general. (Chris Leary, 2018-10-08)
  PiperOrigin-RevId: 216263039
* Automated rollback of commit 13b47e6c4f9d7b295948b1057139bf676e394b6f (Derek Murray, 2018-10-08)
  PiperOrigin-RevId: 216260575
* Internal Change. (Michael Case, 2018-10-08)
  PiperOrigin-RevId: 216260437
* Simple comment fix in CheckpointInputPipelineHook. (Ruoxin Sang, 2018-10-08)
  PiperOrigin-RevId: 216260216
* Fix issue with type inference for ops with fixed output types. (Jared Duke, 2018-10-08)
  Use the ArgDef::type field, when available, for propagating the output types from a given unsupported operator.
  PiperOrigin-RevId: 216257741
* Automated rollback of commit 09b0fc199129e0f487a39741bdf674cf09035cbc (Derek Murray, 2018-10-08)
  PiperOrigin-RevId: 216256115
* [XLA] Simplify loop nesting in HandleConvolution. (David Majnemer, 2018-10-08)
  The calculation of a spatial coordinate in the kernel and activations does not depend on which part of the contracted dimension (input feature) we are in. Rather than nesting the loops, the loops can be siblings:
  - one loop over spatial dimensions
  - one loop over the input feature group
  This reduces the nesting depth, which makes the code a little more readable, and it might be slightly faster because work that is invariant in the spatial loop gets hoisted out.
  PiperOrigin-RevId: 216255839
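The idea behind that loop restructuring can be sketched in plain Python (a hedged analogy, not the HLO evaluator's actual C++): the spatial-coordinate computation does not depend on the feature index, so it need not be redone inside the feature loop.

```python
def conv_cell_nested(activations, kernel, spatial_positions, features):
    """Original shape: coordinate math redone for every feature index."""
    total = 0.0
    for s in spatial_positions:
        for f in features:
            coord = 2 * s + 1  # hypothetical coordinate math; invariant in f
            total += activations[coord][f] * kernel[s][f]
    return total

def conv_cell_hoisted(activations, kernel, spatial_positions, features):
    """Restructured: coordinate math done once per spatial step."""
    total = 0.0
    for s in spatial_positions:
        coord = 2 * s + 1      # computed once, hoisted out of the feature loop
        for f in features:
            total += activations[coord][f] * kernel[s][f]
    return total

# Both forms accumulate exactly the same terms.
activations = [[float(r + c) for c in range(2)] for r in range(7)]
kernel = [[1.0, 2.0], [3.0, 4.0], [0.5, 0.5]]
```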
* Ignore args and kwargs for defun's get_concrete_fn if the `PolymorphicFunction` was created with an input_signature. (Shivani Agrawal, 2018-10-08)
  PiperOrigin-RevId: 216253122
* Merge pull request #22783 from Intel-tensorflow:sfu2/clean_mklml (TensorFlower Gardener, 2018-10-08)
  PiperOrigin-RevId: 216253115
* Add more logging to the convolution transformations. (Tim Shen, 2018-10-08)
  PiperOrigin-RevId: 216252980
* Add custom call with layout constraints. (Mark Heffernan, 2018-10-08)
  Add a variant of CustomCall which specifies arbitrary layout constraints on the operands and result. The existing non-layout-constrained CustomCall is changed to have no layout preference and can now be assigned arbitrary layouts by layout assignment.
  PiperOrigin-RevId: 216249615
* Update performance documentation. (Shashi Shekhar, 2018-10-08)
  PiperOrigin-RevId: 216248418
* [tf.data] Choose the non-deterministic seed once per Python-level `Dataset` object. (Derek Murray, 2018-10-08)
  This changes the behavior of the randomness-introducing datasets (`tf.data.Dataset.shuffle()`, `tf.data.experimental.shuffle_and_repeat()`, and `tf.data.experimental.RandomDataset`). Previously, when you used the same `tf.data.Dataset` object multiple times in a pipeline (e.g. by zipping two datasets derived from the same randomness-introducing dataset) *and* you did not specify an explicit `seed`, the implementation would choose different non-deterministic seeds for each use of the `Dataset` object. With this change, the seed is chosen once per `Dataset` (technically, once per `Dataset`-`Graph` combination, due to the vagaries of capturing state in `Dataset.make_one_shot_iterator()`), which means that all uses of the same dataset object will observe the same sequence of values.
  This change also revealed a small bug in how `Dataset.shuffle(..., reshuffle_each_iteration=False)` is serialized when an explicit seed is specified: the op-level seed was dropped, which could lead to non-deterministic behavior. This change fixes that issue by forwarding the op-level seed to the appropriate place.
  PiperOrigin-RevId: 216248013
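The behavior change described in that entry can be sketched in plain Python (a hedged stand-in for tf.data internals, with a hypothetical `ShuffledDataset` class): the non-deterministic seed is drawn once at construction, so every use of the same object observes the same order.

```python
import random

class ShuffledDataset:
    """Toy stand-in for a randomness-introducing dataset."""
    def __init__(self, items, seed=None):
        # New behavior: the seed is chosen once per object. The old
        # behavior effectively re-drew a seed on every use.
        self._seed = seed if seed is not None else random.randrange(2**31)
        self._items = list(items)

    def as_list(self):
        # Each use replays the shuffle from the per-object seed.
        rng = random.Random(self._seed)
        out = list(self._items)
        rng.shuffle(out)
        return out

ds = ShuffledDataset(range(10))
# Zipping the dataset with itself now pairs each element with itself.
zipped = list(zip(ds.as_list(), ds.as_list()))
```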
* Automated rollback of commit 295b3c80555cc82d8d70faf96a47681e1d904b9c (Derek Murray, 2018-10-08)
  PiperOrigin-RevId: 216247929
* Merge pull request #22735 from Tingbopku:fix-reference (TensorFlower Gardener, 2018-10-08)
  PiperOrigin-RevId: 216245934
* Merge pull request #22719 from samikama:fix_pip_package (TensorFlower Gardener, 2018-10-08)
  PiperOrigin-RevId: 216245301
* Partial support for tfe.defun in tf.gradients. (Alexandre Passos, 2018-10-08)
  Doesn't attempt to deal with cases where we might have already generated the functiondef for the parent function, as in that case we cannot easily modify the forward pass.
  PiperOrigin-RevId: 216243224
* Allow using more than one converter in the testing harness. (Dan Moldovan, 2018-10-08)
  PiperOrigin-RevId: 216242862
* Add tf.BenchmarkConfig, which returns a session config appropriate for benchmarking. (A. Unique TensorFlower, 2018-10-08)
  At the moment, it returns a default config with only the Grappler dependency optimizer disabled. Many benchmarks wrap the subgraph they want to time in control_flow_ops.group() to avoid including the overhead of copying the output back to the Python client in the measurement. In the graph, this only adds a control dependency between the subgraph output and the fetch node, which in turn (often) causes the dependency optimizer to turn all nodes in the graph into no-ops.
  PiperOrigin-RevId: 216242463
* Changed the Adam algorithm variant formula from sqrt(max(v, epsilon**2)) to sqrt(v + epsilon**2) and changed the flag name accordingly. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216240045
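The formula change in that Adam variant is easy to see numerically. A hedged sketch of just the denominator term (the rest of the Adam update is unchanged): both forms keep the denominator bounded away from zero by epsilon, but the new form is smooth in v rather than switching at v = epsilon**2.

```python
import math

def denom_old(v, epsilon):
    # Old variant: clamp v from below at epsilon**2.
    return math.sqrt(max(v, epsilon**2))

def denom_new(v, epsilon):
    # New variant: add epsilon**2 inside the square root.
    return math.sqrt(v + epsilon**2)

eps = 1e-3
# When v is tiny, both are approximately epsilon:
print(denom_old(0.0, eps), denom_new(0.0, eps))
# When v dominates, both approach sqrt(v):
print(denom_old(1.0, eps), denom_new(1.0, eps))
```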
* Add a timeout mechanism to the Grappler meta optimizer. (A. Unique TensorFlower, 2018-10-08)
  This is only a best-effort mechanism, since the meta optimizer only checks whether it has been cancelled before running each sub-optimizer. We can add cancellation to each sub-optimizer if necessary.
  PiperOrigin-RevId: 216234262
* Fix a couple of reference leaks. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216230391
* Merge pull request #22303 from JuliaComputing:kf/broadcastshapeval (TensorFlower Gardener, 2018-10-08)
  PiperOrigin-RevId: 216228494
* Remove the restrictions that constant resolution of reduce_sum operators must be on axis 0 and can only apply to 1-D or 2-D inputs. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216226776
* Fix the steps_per_epoch when training on MNIST. (Sourabh Bajaj, 2018-10-08)
  PiperOrigin-RevId: 216225505
* Go: Update generated wrapper functions for TensorFlow ops. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216224026
* Update ops-related pbtxt files. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216217887
* Merge pull request #21658 from lowintelligence:master (TensorFlower Gardener, 2018-10-08)
  PiperOrigin-RevId: 216217509