| Commit message | Author | Age |
---
PiperOrigin-RevId: 161891114
---
PiperOrigin-RevId: 161890056
---
This makes visualizing the graph easier and is also what Python does.
PiperOrigin-RevId: 161884431
---
case where the weights are computed outside and set to the WALS object.
PiperOrigin-RevId: 161882927
---
PiperOrigin-RevId: 161879977
---
Tested Conv2DShape with NCHW_VECT_C format.
PiperOrigin-RevId: 161879362
---
This is 2nd of 4 CLs that implement BatchNormGrad. The ability to clone gives us PARALLEL_CPU support.
RELNOTES: n/a
PiperOrigin-RevId: 161877575
---
This also fixes their ordering in xla/reference_util.h,
adds a few stride tests to reference_util_test, and
adds LiteralTestUtil::ExpectR4Near().
PiperOrigin-RevId: 161876759
---
features to canned Estimators. The current situation confuses users
transitioning from the contrib Estimators because those support passing Tensor
as features.
Fixes #11252
PiperOrigin-RevId: 161876642
---
c_api_internal doesn't actually export c_api.h, which python_api.h depends on.
PiperOrigin-RevId: 161874954
---
PiperOrigin-RevId: 161874836
---
To test that the results of compilation (aka Executable) are the same, we need a way to tell whether they are equal to each other.
RELNOTES: n/a
PiperOrigin-RevId: 161873754
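A minimal sketch of the idea behind comparing compilation results (plain Python with hypothetical names, not the XLA API): equality is decided on a canonical serialized form of the compiled module rather than on object identity, so incidental formatting differences don't cause spurious mismatches.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Executable:
    """Toy stand-in for a compilation result."""
    module_text: str

    def fingerprint(self) -> str:
        # Canonicalize whitespace so formatting differences don't matter.
        return " ".join(self.module_text.split())


def executables_equal(a: Executable, b: Executable) -> bool:
    return a.fingerprint() == b.fingerprint()


# Same module text up to whitespace compares equal.
a = Executable("add = f32[] add(x, y)")
b = Executable("add = f32[]  add(x, y)")
assert executables_equal(a, b)
```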
---
PiperOrigin-RevId: 161873229
---
Also minor renaming of OptimizationCallback to ModuleHook, because it's shorter
and describes the type slightly better.
Also see #11462
PiperOrigin-RevId: 161869637
---
Previously, SummaryMetadata had been excluded from the namespace because it had been absent from a certain list.
PiperOrigin-RevId: 161869618
---
END_PUBLIC
---
Commit fe5338177 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Go: Update generated wrapper functions for TensorFlow ops.
PiperOrigin-RevId: 161727345
---
Commit c65f69119 authored by Eugene Brevdo<ebrevdo@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Factor out DenseUpdate ops into dense_update_functor build dep.
Also add support for complex types.
PiperOrigin-RevId: 161726749
---
Commit 9a172989e authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Update ops-related pbtxt files.
PiperOrigin-RevId: 161726324
---
Commit fd5530d6e authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Adding bazel-toolchains repo to workspace. This repo will be necessary for remote execution (specifically for cross-OS compilation).
PiperOrigin-RevId: 161719899
---
Commit 71c4ec8ed authored by Derek Murray<mrry@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Add a mechanism for switching between multiple iterators by feeding a handle.
With this change, you can do the following:
1. Fetch a string handle for any iterator, by evaluating the result of
`Iterator.string_handle()`.
2. Define an `Iterator` object based on a `tf.string` placeholder handle.
3. Feed the placeholder using an evaluated string handle to use a particular
iterator in a particular step.
Concretely, this allows you to define two iterators for a training dataset and
a test dataset, and choose which one to use on a per-run basis:
```python
train_iterator = tf.contrib.data.Dataset(...).make_one_shot_iterator()
train_iterator_handle = sess.run(train_iterator.string_handle())
test_iterator = tf.contrib.data.Dataset(...).make_one_shot_iterator()
test_iterator_handle = sess.run(test_iterator.string_handle())
handle = tf.placeholder(tf.string, shape=[])
iterator = tf.contrib.data.Iterator.from_string_handle(
    handle, train_iterator.output_types)
next_element = iterator.get_next()
loss = f(next_element)
train_loss = sess.run(loss, feed_dict={handle: train_iterator_handle})
test_loss = sess.run(loss, feed_dict={handle: test_iterator_handle})
```
PiperOrigin-RevId: 161719836
---
Commit 6d6dda807 authored by Kay Zhu<kayzhu@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[TF:XLA] Fix an issue where plugin/Executor backend is used by default when TF
is built from source with XLA support. See Github issue #11122.
The priority of the executor backend is set to be higher than the default (50)
and CPUs (<100), and is therefore selected as the default when tf.device is not
explicitly specified.
PiperOrigin-RevId: 161717173
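The selection behavior described above can be sketched in plain Python (a toy registry with illustrative priority numbers, not the actual XLA backend code): each backend registers with a priority, and the default is simply the highest-priority registration, which is why the plugin/Executor backend won the default slot once it registered above the CPU backends.

```python
# Toy backend registry: name -> priority.
backends = {}


def register_backend(name, priority):
    backends[name] = priority


def default_backend():
    # The backend with the highest priority becomes the default.
    return max(backends, key=backends.get)


register_backend("cpu", 99)        # CPUs register below 100.
register_backend("executor", 150)  # Plugin registered above the default (50).

# With the plugin present, it is picked even though the user never asked:
assert default_backend() == "executor"
```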
---
Commit 6b28eb084 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Rename HloLocation to HloPosition, to avoid ambiguity with MemoryLocation.
PiperOrigin-RevId: 161716528
---
Commit 8e7f57371 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Expose tf.contrib.nn.rank_sampled_softmax_loss.
PiperOrigin-RevId: 161716450
---
Commit e424d209a authored by Peter Hawkins<phawkins@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[TF:XLA] Use a more numerically accurate formulation of ResourceApplyRMSProp.
PiperOrigin-RevId: 161706120
---
Commit 45a58d378 authored by Skye Wanderman-Milne<skyewm@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Introduce Python-only extensions to the C API
Implements an incomplete version of Operation._add_control_input()
using a new extension to make sure the plumbing works.
This also adds header guards to c_api_internal.h, which were missing. For some reason the missing guards caused problems in the cmake build even though there don't appear to be any #include cycles.
PiperOrigin-RevId: 161705859
---
Commit 4f5433634 authored by Jonathan Hseu<jhseu@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Rename TpuEstimator to TPUEstimator and TpuConfig to TPUConfig to follow PEP8
naming conventions.
PiperOrigin-RevId: 161704561
---
Commit 38180d7bb authored by Yun Peng<pcloudy@google.com>
Committed by gunan<gunan@google.com>:
Disable nn_test on Windows (#11445)
---
Commit e1de7a1b0 authored by Yun Peng<pcloudy@google.com>
Committed by gunan<gunan@google.com>:
Windows Bazel Build: Build TensorFlow with wrapper-less CROSSTOOL (#11454)
---
Commit c9d03a568 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Add tf.contrib.nn.rank_sampled_softmax_loss, a variant of tf.nn.sampled_softmax_loss that has been shown to improve rank loss. Paper: https://arxiv.org/abs/1707.03073
PiperOrigin-RevId: 161702455
---
Commit 9aa0dcbf2 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Add shape check for MakeQuantileSummariesOp.
PiperOrigin-RevId: 161698801
---
Commit 9c4da4a24 authored by vhasanov<KyotoSunshine@users.noreply.github.com>
Committed by Frank Chen<frankchn@gmail.com>:
Deleted unnecessary repetition of the same text. (#11459)
The same text was repeated two times. I deleted the repetition.
---
Commit d1e3cadda authored by DimanNe<dimanne@gmail.com>
Committed by drpngx<drpngx@users.noreply.github.com>:
Fix linking options issued by bazel in order to make gradients register (#11449)
---
Commit 8605f7ab8 authored by Taehoon Lee<me@taehoonlee.com>
Committed by Frank Chen<frankchn@gmail.com>:
Fix typos (#11444)
---
Commit 7c1fe9068 authored by Karl Lessard<karllessard@users.noreply.github.com>
Committed by Frank Chen<frankchn@gmail.com>:
[Java] Add base classes and utilities for operation wrappers. (#11188)
* Add base classes and utilities for operation wrappers.
* Rename Input interface to Operand
* Introduce changes after code review
---
Commit 2195db6d8 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Remove unused flag: xla_hlo_graph_for_compute_constant
PiperOrigin-RevId: 161686867
---
Commit a72fc31bc authored by Martin Wicke<martin.wicke@gmail.com>
Committed by Martin Wicke<martin.wicke@gmail.com>:
Remove tabs. Unassign contrib/framework.
---
Commit 6e74bd65a authored by Martin Wicke<martin.wicke@gmail.com>
Committed by Martin Wicke<martin.wicke@gmail.com>:
Add CODEOWNERS
Added what we know about contrib mainly, and some well-separated components.
---
Commit de546d066 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
BUILD cleanup in tensorflow/compiler/...
PiperOrigin-RevId: 161679855
---
Commit 576c7b1ec authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
BEGIN_PUBLIC
Automated g4 rollback of changelist 161218103
PiperOrigin-RevId: 161868747
---
computation is needed to properly have an end-to-end flow working for BatchNormGrad.
RELNOTES: n/a
PiperOrigin-RevId: 161856560
---
PiperOrigin-RevId: 161851851
---
PiperOrigin-RevId: 161847349
---
PiperOrigin-RevId: 161834256
---
- Introduce an operation expander pass which rewrites HLO into smaller ones.
- Support batch norm training rewriting in operation expander.
- Add an option in JF compiler to use operation expander to rewrite batch norm training.
RELNOTES: n/a
PiperOrigin-RevId: 161832778
---
Before, `tf.import_graph_def()` would raise a `ValueError` if any of
the tensors named in the `input_map` was not used as an input to
another node. However, the contract for this function states that a
`ValueError` will be raised "if `input_map`... contains names that do
not appear in `graph_def`," so this change expands the valid domain of
`input_map` to include tensors that only appear as unconsumed
operation outputs in the imported graph.
PiperOrigin-RevId: 161826633
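The relaxed contract can be sketched in plain Python (hypothetical helper names, not the real `tf.import_graph_def` implementation): keys of `input_map` must name things that exist in the graph, but they no longer need to be consumed as an input by any node.

```python
def validate_input_map(input_map, graph_names, consumed_names):
    """Raise ValueError only for input_map keys absent from the graph.

    Keys naming unconsumed operation outputs are accepted; previously
    they also raised ValueError.
    """
    missing = [name for name in input_map if name not in graph_names]
    if missing:
        raise ValueError(
            "input_map contains names that do not appear in graph_def: %s"
            % missing)
    # Names in graph_names but not in consumed_names (unconsumed outputs)
    # are deliberately allowed.
    return True


# "b:0" exists in the graph but no node consumes it -- now accepted.
graph_names = {"a:0", "b:0"}
consumed = {"a:0"}
assert validate_input_map({"b:0": "replacement"}, graph_names, consumed)
```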
---
RELNOTES: n/a
PiperOrigin-RevId: 161822988
---
PiperOrigin-RevId: 161820876
---
PiperOrigin-RevId: 161814496
---
to allow retrieval of gradient tensors created by TensorFlow's automatic differentiation algorithm (i.e., tf.gradients and optimizer code that uses it).
PiperOrigin-RevId: 161805516
---
PiperOrigin-RevId: 161800080
---
PiperOrigin-RevId: 161788559
---
PiperOrigin-RevId: 161785867
---
PiperOrigin-RevId: 161785793
---
PiperOrigin-RevId: 161781962
---
PiperOrigin-RevId: 161760675
---
PiperOrigin-RevId: 161760434
---
PiperOrigin-RevId: 161752846
---
PiperOrigin-RevId: 161749205
---
PiperOrigin-RevId: 161738207
---
PiperOrigin-RevId: 161738084
---
This makes it easier to implement logic like returning the size of an HloBuffer,
which requires knowing the underlying HloValues.
No functional changes; only a change of representation.
PiperOrigin-RevId: 161737042
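A minimal sketch of the representation change (plain Python with illustrative names and a made-up sizing rule, not the XLA classes): when a buffer directly holds the set of values it contains, buffer-level queries such as "how big must this buffer be" fall straight out of the underlying values.

```python
class HloValue:
    """Toy value: a named quantity with a byte size."""
    def __init__(self, name, byte_size):
        self.name = name
        self.byte_size = byte_size


class HloBuffer:
    """Toy buffer represented directly as the set of values it holds."""
    def __init__(self, values):
        self.values = list(values)

    def size(self):
        # Assumed rule for illustration: the buffer must be large enough
        # for the largest value it may hold.
        return max(v.byte_size for v in self.values)


buf = HloBuffer([HloValue("param0", 16), HloValue("add.1", 32)])
assert buf.size() == 32
```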
---
This is a potential solution to issue #2514.
PiperOrigin-RevId: 161732107
---
PiperOrigin-RevId: 161730154
---
I believe these were fixed with cl/161157061
tensorflow-cl-gpu-pip passing: https://ci.tensorflow.org/job/tensorflow-cl-presubmit-multijob/14043/
PiperOrigin-RevId: 161729658
---
PiperOrigin-RevId: 161727345
---
Also add support for complex types.
PiperOrigin-RevId: 161726749
---
PiperOrigin-RevId: 161726324
---
remote execution (specifically for cross-OS compilation)
PiperOrigin-RevId: 161719899
---
With this change, you can do the following:
1. Fetch a string handle for any iterator, by evaluating the result of
`Iterator.string_handle()`.
2. Define an `Iterator` object based on a `tf.string` placeholder handle.
3. Feed the placeholder using an evaluated string handle to use a particular
iterator in a particular step.
Concretely, this allows you to define two iterators for a training dataset and
a test dataset, and choose which one to use on a per-run basis:
```python
train_iterator = tf.contrib.data.Dataset(...).make_one_shot_iterator()
train_iterator_handle = sess.run(train_iterator.string_handle())
test_iterator = tf.contrib.data.Dataset(...).make_one_shot_iterator()
test_iterator_handle = sess.run(test_iterator.string_handle())
handle = tf.placeholder(tf.string, shape=[])
iterator = tf.contrib.data.Iterator.from_string_handle(
    handle, train_iterator.output_types)
next_element = iterator.get_next()
loss = f(next_element)
train_loss = sess.run(loss, feed_dict={handle: train_iterator_handle})
test_loss = sess.run(loss, feed_dict={handle: test_iterator_handle})
```
PiperOrigin-RevId: 161719836
---
is built from source with XLA support. See Github issue #11122.
The priority of the executor backend is set to be higher than the default (50)
and CPUs (<100), and is therefore selected as the default when tf.device is not
explicitly specified.
PiperOrigin-RevId: 161717173
---
PiperOrigin-RevId: 161716528
---
PiperOrigin-RevId: 161716450
---
PiperOrigin-RevId: 161706120