* Fix a bug: the use of sequence-point boolean operators here had the
  unintended effect of causing the second line not to run at all
  depending on the result from the first line.
  (A. Unique TensorFlower, 2018-10-02) PiperOrigin-RevId: 215466006
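The failure mode described above is ordinary short-circuit evaluation: chaining two calls with a boolean operator silently skips the second call whenever the first one decides the result. A minimal Python sketch of the bug and the fix; the function names are hypothetical stand-ins, since the log does not show the original code:

```python
calls = []

def first():
    calls.append("first")
    return False  # a falsy result makes `and` skip its second operand

def second():
    calls.append("second")
    return True

# Chaining with a boolean operator: second() never runs here.
first() and second()
short_circuited = list(calls)

# Running the two statements unconditionally, as the fix intends.
calls.clear()
first()
second()
both_ran = list(calls)
```

The same trap exists in C++ with `&&`/`||`, which is presumably where the original bug lived.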
* [XLA] A test that disables layout assignment should only contain
  layout-consistent HLO instructions. Fix a dot test that disables the
  layout assignment pass so that it does not generate layout-inconsistent
  HLO instructions. This includes only adding the dot result to an addend
  with the same layout, and disabling algebraic simplification, which may
  transform a dot into a multiplication with inconsistent layouts.
  (Bixia Zheng, 2018-10-02) PiperOrigin-RevId: 215463477
* Do not warn about loss of accuracy in trivial cases when all array
  elements are equal to either the min or the max value, so that they are
  trivially exactly quantized. This case does not normally occur for true
  learned weights, which is what this warning is intended for.
  (A. Unique TensorFlower, 2018-10-02) PiperOrigin-RevId: 215463096
* Merge pull request #21208 from
  kingofthebongo2008:version_info_cc_generated_only_once
  (TensorFlower Gardener, 2018-10-02) PiperOrigin-RevId: 215462171
* Add missing documentation for the use_tpu hparam.
  (Suyog Gupta, 2018-10-02) PiperOrigin-RevId: 215462000
* Fixes for a few issues in HloModule::CreateFromProto().
  (A. Unique TensorFlower, 2018-10-02) PiperOrigin-RevId: 215460064
* Add support for multiple input/output numpy arrays when using Keras
  APIs. (Anjali Sridhar, 2018-10-02) PiperOrigin-RevId: 215459075
* Upgrade cloud tpu profiler to 1.12.0.
  (A. Unique TensorFlower, 2018-10-02) PiperOrigin-RevId: 215454323
* Update ops-related pbtxt files.
  (A. Unique TensorFlower, 2018-10-02) PiperOrigin-RevId: 215448397
* Merge pull request #17672 from joeyearsley:patch-3
  (TensorFlower Gardener, 2018-10-02) PiperOrigin-RevId: 215447391
* Support shape_invariants in while_v2. Note that this arg is temporary
  and may be replaced by automatic shape inference in TF 2.0 (or before).
  Add an output_shapes attr to the While op to allow output shapes to be
  different from the incoming loop_vars.
  (Saurabh Saxena, 2018-10-02) PiperOrigin-RevId: 215446737
* Add proto serialization/deserialization testing to the HLO parser
  tests. Many of the HLO parser tests verify that the text form of an HLO
  module preserves all information when running through ToString then
  parsing. It makes sense to also use these tests to exercise proto
  serialization/deserialization. This is done by adding additional
  instantiations of the parameterized parsing tests. This caught several
  bugs which are fixed in this CL: (1) Domain instructions were not being
  serialized properly. (2) Host send/recv instructions did not preserve
  the is_host_transfer bit. (3) Sparse literals could not be serialized
  or deserialized. (Mark Heffernan, 2018-10-02)
  PiperOrigin-RevId: 215445200
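The round-trip property these parameterized tests exercise can be sketched with a stand-in serializer. Here JSON stands in for the proto wire format, and the module dict is a hypothetical example rather than a real HLO module:

```python
import json

def to_wire(module):
    # Stand-in for proto serialization.
    return json.dumps(module, sort_keys=True)

def from_wire(data):
    # Stand-in for proto deserialization.
    return json.loads(data)

def round_trips(module):
    # The property the tests check: serializing and then deserializing
    # must reproduce the module exactly, field for field. Bugs like a
    # dropped is_host_transfer bit fail this comparison.
    return from_wire(to_wire(module)) == module

module = {"name": "m", "instructions": ["dot", "add"], "is_host_transfer": True}
ok = round_trips(module)
```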
* Copy tf.distributions to tfp.distributions, and deprecate the
  tf.distributions API. (A. Unique TensorFlower, 2018-10-02)
  PiperOrigin-RevId: 215441733
* Allow passing --allow_nonexistent_arrays via toco_convert.
  (A. Unique TensorFlower, 2018-10-02) PiperOrigin-RevId: 215440829
* Disable fused_conv tests that don't build in open-source.
  (Todd Wang, 2018-10-02) PiperOrigin-RevId: 215440356
* Allow creating a list from a tensor. Fix a few inconsistencies in the
  tensor list constructors. (Dan Moldovan, 2018-10-02)
  PiperOrigin-RevId: 215435720
* Fix the case when an object may have multiple directives with the same
  annotation. (Dan Moldovan, 2018-10-02) PiperOrigin-RevId: 215435613
* Merge pull request #22126 from ConcurrencyPractitioner:master
  (TensorFlower Gardener, 2018-10-02) PiperOrigin-RevId: 215431884
* [XLA] Replace the last FlatMap in XLA with a simple array. A hash map
  for 18 pointers is just a waste of space.
  (Benjamin Kramer, 2018-10-02) PiperOrigin-RevId: 215428176
* Remove dependency on contrib model_variable. Also remove
  add_arg_scope. (Suharsh Sivakumar, 2018-10-02)
  PiperOrigin-RevId: 215426187
* [XLA] Fix some outdated comments referring to FlatMap. Also convert
  unordered_map to flat/node_hash_map where the comments allow.
  (Benjamin Kramer, 2018-10-02) PiperOrigin-RevId: 215410566
* Generate an error when --rnn_states refers to array names that aren't
  produced/consumed by any op. (A. Unique TensorFlower, 2018-10-02)
  PiperOrigin-RevId: 215402308
* Remove Ignite Dataset SSL tests by internal policy.
  (Anton Dmitriev, 2018-10-02)
* Fix merge artifacts: replace Dataset by DatasetSource in Ignite
  Dataset. (Anton Dmitriev, 2018-10-02)
* Use xlogy in a few places in TFP to avoid NaNs for certain special
  cases. (A. Unique TensorFlower, 2018-10-02) PiperOrigin-RevId: 215392621
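The special case xlogy guards against is x * log(y) at x == 0, where plain IEEE arithmetic yields 0 * -inf = nan instead of the limit value 0 (the convention used by scipy.special.xlogy and tf.math.xlogy). A minimal sketch; the helper below is a stand-in, not the library implementation:

```python
import math

def xlogy(x, y):
    # xlogy convention: the x == 0 branch returns 0 exactly, even when
    # log(y) would be -inf or nan.
    if x == 0.0:
        return 0.0
    return x * math.log(y)

# Plain x * log(y) at x == 0, y == 0: log(0) is -inf in IEEE float
# semantics, and 0 * -inf is nan.
naive = 0.0 * float("-inf")
safe = xlogy(0.0, 0.0)
```

Terms like x*log(y) appear in cross-entropies and log-likelihoods, where the 0*log(0) corner is common, hence the change.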
* Updated ordering for kwargs. (joe yearsley, 2018-10-02)
* Make StatelessRandomOpsTest.testRandomNormalIsFinite actually test
  stateless_random_normal. Fixes #22611. (Peter Hawkins, 2018-10-02)
  PiperOrigin-RevId: 215385610
* Export an endpoint for the version of the `regex_replace` function
  that calls StaticRegexReplace. (A. Unique TensorFlower, 2018-10-02)
  PiperOrigin-RevId: 215371291
* Add a hint parameter to TransferLiteralToDeviceAsync that the
  implementation can use to accelerate transfers.
  (A. Unique TensorFlower, 2018-10-02) PiperOrigin-RevId: 215362667
* Fix layout assignment for cross-module all-reduce. Previously we could
  have ended up with different HLOs being assigned different layouts,
  which made lowering impossible. This change enforces a consistent
  layout between the communicating nodes, the same way it is done for
  send & recv pairs. (A. Unique TensorFlower, 2018-10-02)
  PiperOrigin-RevId: 215359420
* compat: Update forward compatibility horizon to 2018-10-02.
  (A. Unique TensorFlower, 2018-10-02) PiperOrigin-RevId: 215354927
* Check that IsValid{Input|Output}Tensor is only given non-control
  edges. (Sanjoy Das, 2018-10-01) PiperOrigin-RevId: 215338658
* Loosen test bounds. (Revan Sopher, 2018-10-01)
  PiperOrigin-RevId: 215338403
* Merge pull request #21958 from MattConley:CudaOccupancy
  (TensorFlower Gardener, 2018-10-01) PiperOrigin-RevId: 215331087
* Add mode_override to the TPU embedding enqueue ops. This allows the
  mode to be overridden at runtime, allowing dynamic switching between
  inference and training modes. Not fully implemented yet.
  (A. Unique TensorFlower, 2018-10-01) PiperOrigin-RevId: 215325071
* [XLA] Migrate from gtl::FlatSet to absl::flat_hash_set.
  (Benjamin Kramer, 2018-10-01) PiperOrigin-RevId: 215324035
* Make Keras/TPU more robust to closed TF sessions.
  (Russell Power, 2018-10-01) PiperOrigin-RevId: 215313156
* Merge pull request #21868 from
  kingofthebongo2008:upgrade_protobuf_to_v_3.6.1
  (TensorFlower Gardener, 2018-10-01) PiperOrigin-RevId: 215312707
* [XLA] Add kAllToAll and kCollectivePermute to the
  EffectiveOperandPrecisionIsOutputPrecision list.
  (A. Unique TensorFlower, 2018-10-01) PiperOrigin-RevId: 215311766
* [tf.data] Adding `tf.data.Options()`, `tf.data.Dataset.options()`, and
  `tf.data.Dataset.with_options()` to make it possible to respectively
  represent, get, and set options, such as optimization configuration,
  of a tf.data input pipeline. (Jiri Simsa, 2018-10-01)
  PiperOrigin-RevId: 215310764
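A toy sketch of the get/set pattern these endpoints expose. The classes below are simplified stand-ins for the real tf.data types, not their implementation; the real `with_options()` likewise returns a new dataset rather than mutating the receiver:

```python
class Options:
    """Stand-in for tf.data.Options: a bag of pipeline-level settings."""
    def __init__(self):
        self.apply_optimizations = False

class Dataset:
    """Stand-in for tf.data.Dataset, options handling only."""
    def __init__(self, elements, options=None):
        self._elements = list(elements)
        self._options = options if options is not None else Options()

    def options(self):
        # Get the options attached to this pipeline.
        return self._options

    def with_options(self, options):
        # Return a new pipeline with the options attached, mirroring the
        # functional style of the tf.data API.
        return Dataset(self._elements, options)

opts = Options()
opts.apply_optimizations = True
ds = Dataset([1, 2, 3]).with_options(opts)
enabled = ds.options().apply_optimizations
```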
* Merge pull request #22519 from jayfurmanek:nccl2_configure
  (TensorFlower Gardener, 2018-10-01) PiperOrigin-RevId: 215310536
* [tf.data] More robust solution for input pipeline <--> performance
  model coordination. (Jiri Simsa, 2018-10-01) PiperOrigin-RevId: 215309735
* Minor changes, changed CHECK_GE to DCHECK_GE due to code policy
  change. (Xiaoming (Jason) Cui, 2018-10-01)
* Mark bfloat16 as supported for ExponentialMovingAverage.
  (A. Unique TensorFlower, 2018-10-01) PiperOrigin-RevId: 215307701
* Merge branch 'master' into cuixiaom_disable_MKL.
  (Xiaoming (Jason) Cui, 2018-10-01)
* Merge the branch with the master branch.
  (Xiaoming (Jason) Cui, 2018-10-01)
* [tf.data] Deprecate `tf.contrib.data` and introduce
  `tf.data.experimental` to replace it. This change prepares `tf.data`
  for TensorFlow 2.0, where `tf.contrib` will no longer exist. It
  retains the pre-existing endpoints in `tf.contrib.data` with
  deprecation warnings. Note there are some exceptions to the move:
  * Deprecated symbols in `tf.contrib.data` have not been moved to
    `tf.data.experimental`, because replacements already exist.
  * `tf.contrib.data.LMDBDataset` has not been moved, because we plan to
    move it to a SIG-maintained repository.
  * `tf.contrib.data.assert_element_shape()` has not yet been moved,
    because it depends on functionality in `tf.contrib`, and it will
    move in a later change.
  * `tf.contrib.data.AUTOTUNE` has not yet been moved, because we have
    not yet determined how to `tf_export()` a Python integer.
  * The stats-related API endpoints have not yet appeared in a released
    version of TensorFlow, so these are moved to `tf.data.experimental`
    without retaining an endpoint in `tf.contrib.data`.
  In addition, this change includes some build rule and ApiDef
  refactoring:
  * Some of the "//third_party/tensorflow/python:training" dependencies
    had to be split in order to avoid a circular dependency.
  * The `tf.contrib.stateless` ops now have a private core library for
    the generated wrappers (and accordingly are hidden in their ApiDef)
    so that `tf.data.experimental.sample_from_datasets()` can depend on
    them.
  (Derek Murray, 2018-10-01) PiperOrigin-RevId: 215304249
* Change semantics of DistributionStrategy.update() to make sure the
  output depends on the updates across all mirrors. Before this change,
  update() would return a Mirrored value where each component was an
  update to a single mirror. This caused a problem since, for reading
  purposes, other DistributionStrategy methods would consider it okay to
  read any single component, so if you did something like
  session.run(strategy.update(...)) it would only perform the update on
  one replica. The fix is to have the output be a Mirrored value that is
  actually the identity operation returning the output on that device,
  but that has a control dependency making sure that the update actually
  happens on all the replicas. This fix was already present in
  MirroredVariable._assign_func; this CL moves the fix into update() and
  generalizes it to multiple return values.
  To disable this new grouping behavior, you may now pass
  "grouped=False" to update(). For example, some callers (like
  Optimizer) perform a lot of updates and prefer to group all of them
  together at once for performance reasons. In this case, we still want
  to make sure the caller executes the update on all replicas, so we
  return an unwrapped value instead of a Mirrored value.
  This has the happy side effect of removing a bunch of unwrap calls in
  client code, since unwrapping was the only safe way to use the
  Mirrored value we used to return.
  (A. Unique TensorFlower, 2018-10-01) PiperOrigin-RevId: 215301909
* Override implementation of log survival for Exponential distribution
  to better handle small values. (A. Unique TensorFlower, 2018-10-01)
  PiperOrigin-RevId: 215299532
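A sketch of the numerical issue such an override typically addresses: computing log survival generically as log(1 - cdf(x)) underflows in the tail, where the survival probability is small, while the Exponential's closed form log S(x) = -rate * x stays exact. The numbers below are illustrative, not taken from the change:

```python
import math

rate = 2.0
x = 50.0  # a point far in the tail

# Generic route: log(1 - cdf(x)). Once rate * x is large, the cdf
# rounds to exactly 1.0 in double precision, so 1 - cdf(x) is 0.0 and
# its log is unusable (-inf / domain error).
cdf = 1.0 - math.exp(-rate * x)

# Closed form for the Exponential: log S(x) = -rate * x, exact at any x.
log_sf = -rate * x
```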
* [TF/XLA] Optimize `Encapsulator::GetFunctionNameAttr()`. The previous
  version was hitting a very slow path in `GetNodeAttr()`, which is
  expensive when the named attr is not found. This change inlines the
  logic of finding the two relevant attrs inside
  `GetFunctionNameAttr()` and avoids constructing a status object with a
  serialized `NodeDef` when the attr can't be found.
  (Derek Murray, 2018-10-01) PiperOrigin-RevId: 215298411
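The design choice behind this optimization (make the not-found path cheap instead of building an expensive error object) can be sketched in miniature. The dict-based helpers below are hypothetical stand-ins, not the C++ code from the change:

```python
def get_attr_slow(node, name):
    # Slow path: builds a rich error message (the C++ analogue
    # serialized the whole NodeDef) even when the caller only wants to
    # know whether the attr exists.
    if name not in node:
        raise KeyError(f"attr {name!r} not found; node has {sorted(node)}")
    return node[name]

def get_attr_fast(node, name):
    # Inlined lookup with a cheap sentinel on the not-found path: no
    # error object is constructed at all.
    return node.get(name)

node = {"f": "my_function"}
found = get_attr_fast(node, "f")
missing = get_attr_fast(node, "g")
```

When the not-found case is common, as it apparently was for the encapsulation pass probing each node for two attrs, avoiding the error construction is the entire win.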