...
* Add support for generating Hann and Hamming windows to tf.contrib.signal. (RJ Ryan, 2017-07-13)
  PiperOrigin-RevId: 161891114
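  A minimal sketch of how the new window ops might be used (the `periodic` keyword and the return shapes are assumed from the usual window-function convention, not stated in this log):
  ```python
  import tensorflow as tf

  window_length = 64
  hann = tf.contrib.signal.hann_window(window_length, periodic=True)
  hamming = tf.contrib.signal.hamming_window(window_length, periodic=True)

  with tf.Session() as sess:
      hann_vals, hamming_vals = sess.run([hann, hamming])
      print(hann_vals.shape, hamming_vals.shape)  # expected: (64,) (64,)
  ```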
* Disable contrib tests on ASAN that are sometimes timing out. (A. Unique TensorFlower, 2017-07-13)
  PiperOrigin-RevId: 161890056
* Add "gradients" subscope when generating gradients in C++. (Skye Wanderman-Milne, 2017-07-13)
  This makes visualizing the graph easier and is also what Python does. PiperOrigin-RevId: 161884431
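  For reference, the Python behavior this mirrors: gradient ops are created under a "gradients" name scope. A small illustrative sketch (names are mine, not from the change):
  ```python
  import tensorflow as tf

  x = tf.placeholder(tf.float32, name="x")
  y = tf.square(x, name="y")
  grad, = tf.gradients(y, x)

  # The gradient op lives under the "gradients" scope, e.g. "gradients/...",
  # which groups all backprop nodes together in graph visualizations.
  print(grad.op.name)
  ```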
* Adding cautionary comments on the use of use_factors_weights_cache in the case where the weights are computed outside and set to the WALS object. (A. Unique TensorFlower, 2017-07-13)
  PiperOrigin-RevId: 161882927
* Expose tf.contrib.nn.rank_sampled_softmax_loss. (A. Unique TensorFlower, 2017-07-13)
  PiperOrigin-RevId: 161879977
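  The call below is only a sketch by analogy with tf.nn.sampled_softmax_loss; the resampling-specific arguments (num_resampled, resampling_temperature) are assumptions about the new API, not taken from this log:
  ```python
  import tensorflow as tf

  num_classes, dim, batch = 10000, 128, 32
  weights = tf.get_variable("w", [num_classes, dim])
  biases = tf.get_variable("b", [num_classes])
  inputs = tf.random_normal([batch, dim])
  labels = tf.random_uniform([batch, 1], maxval=num_classes, dtype=tf.int64)

  # Samples num_sampled candidate classes, then keeps the num_resampled
  # highest-scoring ones before computing the sampled softmax loss.
  loss = tf.contrib.nn.rank_sampled_softmax_loss(
      weights=weights, biases=biases, labels=labels, inputs=inputs,
      num_sampled=64, num_resampled=16, num_classes=num_classes,
      num_true=1, sampled_values=None, resampling_temperature=1.0,
      remove_accidental_hits=True, partition_strategy="div")
  ```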
* Extend shape_inference::Conv2DShape to handle NCHW_VECT_C format. (Jingyue Wu, 2017-07-13)
  Tested Conv2DShape with NCHW_VECT_C format. PiperOrigin-RevId: 161879362
* Add clone support for BatchNormGrad. (A. Unique TensorFlower, 2017-07-13)
  This is the 2nd of 4 CLs that implement BatchNormGrad. The ability to clone gives us PARALLEL_CPU support. RELNOTES: n/a PiperOrigin-RevId: 161877575
* Change ReferenceUtil::Slice{2,3,4}D to accept a strides parameter. (A. Unique TensorFlower, 2017-07-13)
  This also fixes their ordering in xla/reference_util.h, adds a few stride tests to reference_util_test, and adds LiteralTestUtil::ExpectR4Near(). PiperOrigin-RevId: 161876759
* Throw a ValueError if the user doesn't pass in a dictionary of Tensors for features to canned Estimators. (Jonathan Hseu, 2017-07-13)
  The current situation confuses users transitioning from the contrib Estimators, because those support passing a Tensor as features. Fixes #11252. PiperOrigin-RevId: 161876642
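  A sketch of the now-required pattern with a canned Estimator; the feature name "x" and the toy input_fn are illustrative only:
  ```python
  import numpy as np
  import tensorflow as tf

  feature_columns = [tf.feature_column.numeric_column("x", shape=[4])]
  estimator = tf.estimator.LinearClassifier(feature_columns=feature_columns)

  def input_fn():
      # Features must be a dict mapping feature names to Tensors;
      # passing a bare Tensor now raises a ValueError.
      features = {"x": tf.constant(np.random.rand(8, 4), dtype=tf.float32)}
      labels = tf.constant(np.random.randint(0, 2, size=(8, 1)), dtype=tf.int64)
      return features, labels

  estimator.train(input_fn=input_fn, steps=1)
  ```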
* Add c_api dep to python_api target. (Skye Wanderman-Milne, 2017-07-13)
  c_api_internal doesn't actually export c_api.h, which python_api.h depends on. PiperOrigin-RevId: 161874954
* Go: Update generated wrapper functions for TensorFlow ops. (A. Unique TensorFlower, 2017-07-13)
  PiperOrigin-RevId: 161874836
* Add equality test function. (A. Unique TensorFlower, 2017-07-13)
  To test that the results of compilation (aka Executable) are the same, we need a way to tell whether they are equal to each other. RELNOTES: n/a PiperOrigin-RevId: 161873754
* Update ops-related pbtxt files. (A. Unique TensorFlower, 2017-07-13)
  PiperOrigin-RevId: 161873229
* Add tfcompile support for --xla_dump_ir_to, to dump the LLVM IR. (A. Unique TensorFlower, 2017-07-13)
  Also minor renaming of OptimizationCallback to ModuleHook, because it's shorter and describes the type slightly better. Also see #11462. PiperOrigin-RevId: 161869637
* Make SummaryMetadata available within the tf namespace. (A. Unique TensorFlower, 2017-07-13)
  Previously, SummaryMetadata had been excluded from the namespace because it had been absent from a certain list. PiperOrigin-RevId: 161869618
* Merge changes from github. (Frank Chen, 2017-07-13)
  END_PUBLIC
  --- Commit fe5338177 authored by A. Unique TensorFlower<gardener@tensorflow.org>, committed by TensorFlower Gardener<gardener@tensorflow.org>:
      Go: Update generated wrapper functions for TensorFlow ops. PiperOrigin-RevId: 161727345
  --- Commit c65f69119 authored by Eugene Brevdo<ebrevdo@google.com>, committed by TensorFlower Gardener<gardener@tensorflow.org>:
      Factor out DenseUpdate ops into dense_update_functor build dep. Also add support for complex types. PiperOrigin-RevId: 161726749
  --- Commit 9a172989e authored by A. Unique TensorFlower<gardener@tensorflow.org>, committed by TensorFlower Gardener<gardener@tensorflow.org>:
      Update ops-related pbtxt files. PiperOrigin-RevId: 161726324
  --- Commit fd5530d6e authored by A. Unique TensorFlower<gardener@tensorflow.org>, committed by TensorFlower Gardener<gardener@tensorflow.org>:
      adding bazel-toolchains repo to workspace. This repo will be necessary for remote execution (specifically for cross OS compilation). PiperOrigin-RevId: 161719899
  --- Commit 71c4ec8ed authored by Derek Murray<mrry@google.com>, committed by TensorFlower Gardener<gardener@tensorflow.org>:
      Add a mechanism for switching between multiple iterators by feeding a handle.
      With this change, you can do the following:
      1. Fetch a string handle for any iterator, by evaluating the result of `Iterator.string_handle()`.
      2. Define an `Iterator` object based on a `tf.string` placeholder handle.
      3. Feed the placeholder using an evaluated string handle to use a particular iterator in a particular step.
      Concretely, this allows you to define two iterators for a training dataset and a test dataset, and choose which one to use on a per-run basis:
      ```python
      train_iterator = tf.contrib.data.Dataset(...).make_one_shot_iterator()
      train_iterator_handle = sess.run(train_iterator.string_handle())

      test_iterator = tf.contrib.data.Dataset(...).make_one_shot_iterator()
      test_iterator_handle = sess.run(test_iterator.string_handle())

      handle = tf.placeholder(tf.string, shape=[])
      iterator = tf.contrib.data.Iterator.from_string_handle(
          handle, train_iterator.output_types)

      next_element = iterator.get_next()
      loss = f(next_element)

      train_loss = sess.run(loss, feed_dict={handle: train_iterator_handle})
      test_loss = sess.run(loss, feed_dict={handle: test_iterator_handle})
      ```
      PiperOrigin-RevId: 161719836
  --- Commit 6d6dda807 authored by Kay Zhu<kayzhu@google.com>, committed by TensorFlower Gardener<gardener@tensorflow.org>:
      [TF:XLA] Fix an issue where the plugin/Executor backend is used by default when TF is built from source with XLA support. See GitHub issue #11122. The priority of the executor backend is set to be higher than the default (50) and CPUs (<100), and is therefore selected as the default when tf.device is not explicitly specified. PiperOrigin-RevId: 161717173
  --- Commit 6b28eb084 authored by A. Unique TensorFlower<gardener@tensorflow.org>, committed by TensorFlower Gardener<gardener@tensorflow.org>:
      Rename HloLocation to HloPosition, to avoid ambiguity with MemoryLocation. PiperOrigin-RevId: 161716528
  --- Commit 8e7f57371 authored by A. Unique TensorFlower<gardener@tensorflow.org>, committed by TensorFlower Gardener<gardener@tensorflow.org>:
      Expose tf.contrib.nn.rank_sampled_softmax_loss. PiperOrigin-RevId: 161716450
  --- Commit e424d209a authored by Peter Hawkins<phawkins@google.com>, committed by TensorFlower Gardener<gardener@tensorflow.org>:
      [TF:XLA] Use a more numerically accurate formulation of ResourceApplyRMSProp. PiperOrigin-RevId: 161706120
  --- Commit 45a58d378 authored by Skye Wanderman-Milne<skyewm@google.com>, committed by TensorFlower Gardener<gardener@tensorflow.org>:
      Introduce Python-only extensions to the C API. Implements an incomplete version of Operation._add_control_input() using a new extension to make sure the plumbing works. This also adds header guards to c_api_internal.h, which were missing. For some reason the missing guards caused problems in the cmake build even though there doesn't appear to be any #include cycles. PiperOrigin-RevId: 161705859
  --- Commit 4f5433634 authored by Jonathan Hseu<jhseu@google.com>, committed by TensorFlower Gardener<gardener@tensorflow.org>:
      Rename TpuEstimator to TPUEstimator and TpuConfig to TPUConfig to follow PEP8 naming conventions. PiperOrigin-RevId: 161704561
  --- Commit 38180d7bb authored by Yun Peng<pcloudy@google.com>, committed by gunan<gunan@google.com>:
      Disable nn_test on Windows (#11445)
  --- Commit e1de7a1b0 authored by Yun Peng<pcloudy@google.com>, committed by gunan<gunan@google.com>:
      Windows Bazel Build: Build TensorFlow with wrapper-less CROSSTOOL (#11454)
  --- Commit c9d03a568 authored by A. Unique TensorFlower<gardener@tensorflow.org>, committed by TensorFlower Gardener<gardener@tensorflow.org>:
      Add tf.contrib.nn.rank_sampled_softmax_loss, a variant of tf.nn.sampled_softmax_loss that has been shown to improve rank loss. Paper: https://arxiv.org/abs/1707.03073 PiperOrigin-RevId: 161702455
  --- Commit 9aa0dcbf2 authored by A. Unique TensorFlower<gardener@tensorflow.org>, committed by TensorFlower Gardener<gardener@tensorflow.org>:
      Add shape check for MakeQuantileSummariesOp. PiperOrigin-RevId: 161698801
  --- Commit 9c4da4a24 authored by vhasanov<KyotoSunshine@users.noreply.github.com>, committed by Frank Chen<frankchn@gmail.com>:
      Deleted unnecessary repetition of the same text. (#11459) The same text was repeated two times. I deleted the repetition.
  --- Commit d1e3cadda authored by DimanNe<dimanne@gmail.com>, committed by drpngx<drpngx@users.noreply.github.com>:
      Fix linking options issued by bazel in order to make gradients register (#11449)
  --- Commit 8605f7ab8 authored by Taehoon Lee<me@taehoonlee.com>, committed by Frank Chen<frankchn@gmail.com>:
      Fix typos (#11444)
  --- Commit 7c1fe9068 authored by Karl Lessard<karllessard@users.noreply.github.com>, committed by Frank Chen<frankchn@gmail.com>:
      [Java] Add base classes and utilities for operation wrappers. (#11188)
      * Add base classes and utilities for operation wrappers.
      * Rename Input interface to Operand.
      * Introduce changes after code review.
  --- Commit 2195db6d8 authored by A. Unique TensorFlower<gardener@tensorflow.org>, committed by TensorFlower Gardener<gardener@tensorflow.org>:
      Remove unused flag: xla_hlo_graph_for_compute_constant. PiperOrigin-RevId: 161686867
  --- Commit a72fc31bc authored by Martin Wicke<martin.wicke@gmail.com>, committed by Martin Wicke<martin.wicke@gmail.com>:
      Remove tabs. Unassign contrib/framework.
  --- Commit 6e74bd65a authored by Martin Wicke<martin.wicke@gmail.com>, committed by Martin Wicke<martin.wicke@gmail.com>:
      Add CODEOWNERS. Added what we know about contrib mainly, and some well-separated components.
  --- Commit de546d066 authored by A. Unique TensorFlower<gardener@tensorflow.org>, committed by TensorFlower Gardener<gardener@tensorflow.org>:
      BUILD cleanup in tensorflow/compiler/... PiperOrigin-RevId: 161679855
  --- Commit 576c7b1ec authored by A. Unique TensorFlower<gardener@tensorflow.org>, committed by TensorFlower Gardener<gardener@tensorflow.org>:
      BEGIN_PUBLIC Automated g4 rollback of changelist 161218103
  PiperOrigin-RevId: 161868747
* This is the 1st of 4 CLs to implement BatchNormGrad. Various support in user computation is needed to properly have an end-to-end flow working for BatchNormGrad. (A. Unique TensorFlower, 2017-07-13)
  RELNOTES: n/a PiperOrigin-RevId: 161856560
* Automated g4 rollback of changelist 161781962. (Eugene Brevdo, 2017-07-13)
  PiperOrigin-RevId: 161851851
* Adding tree per class multiclass strategy handling. (A. Unique TensorFlower, 2017-07-13)
  PiperOrigin-RevId: 161847349
* Initial public release of tf.distributions. (Eugene Brevdo, 2017-07-13)
  PiperOrigin-RevId: 161834256
* [BatchNorm] Use operation expander to rewrite a batch norm training op. (A. Unique TensorFlower, 2017-07-13)
  - Introduce an operation expander pass which rewrites HLO into smaller ones.
  - Support batch norm training rewriting in operation expander.
  - Add an option in JF compiler to use operation expander to rewrite batch norm training.
  RELNOTES: n/a PiperOrigin-RevId: 161832778
* Make the test for "unused inputs" more precise in `tf.import_graph_def()`. (Derek Murray, 2017-07-13)
  Before, `tf.import_graph_def()` would raise a `ValueError` if any of the tensors named in the `input_map` was not used as an input to another node. However, the contract for this function states that a `ValueError` will be raised "if `input_map`... contains names that do not appear in `graph_def`," so this change expands the valid domain of `input_map` to include tensors that only appear as unconsumed operation outputs in the imported graph. PiperOrigin-RevId: 161826633
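  A small sketch of the relaxed `input_map` check; the graph contents and tensor names are illustrative, not from the change itself:
  ```python
  import tensorflow as tf

  # A graph with one consumed tensor ("a") and one unconsumed output ("b").
  g = tf.Graph()
  with g.as_default():
      a = tf.placeholder(tf.float32, name="a")
      b = tf.placeholder(tf.float32, name="b")  # no consumers in this graph
      tf.identity(a, name="out")
  graph_def = g.as_graph_def()

  with tf.Graph().as_default():
      x = tf.constant(1.0)
      y = tf.constant(2.0)
      # Mapping "b:0" used to raise a ValueError because nothing consumes it;
      # it is now accepted since "b" does appear in graph_def.
      out, = tf.import_graph_def(
          graph_def, input_map={"a:0": x, "b:0": y}, return_elements=["out:0"])
  ```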
* Add a test in BatchNormalization tests. (A. Unique TensorFlower, 2017-07-13)
  RELNOTES: n/a PiperOrigin-RevId: 161822988
* barrier_ops_test is size medium. (Eugene Brevdo, 2017-07-13)
  PiperOrigin-RevId: 161820876
* Disable barrier_ops_test that sometimes times out, take 2. (A. Unique TensorFlower, 2017-07-13)
  PiperOrigin-RevId: 161814496
* Add GradientsDebugger to tfdbg. (Shanqing Cai, 2017-07-13)
  This allows retrieval of gradient tensors created by TensorFlow's automatic differentiation algorithm (i.e., tf.gradients and optimizer code that uses it). PiperOrigin-RevId: 161805516
* Nest: Raise exceptions with more readable messages. (A. Unique TensorFlower, 2017-07-13)
  PiperOrigin-RevId: 161800080
* Update ops-related pbtxt files. (A. Unique TensorFlower, 2017-07-13)
  PiperOrigin-RevId: 161788559
* Go: Update generated wrapper functions for TensorFlow ops. (A. Unique TensorFlower, 2017-07-13)
  PiperOrigin-RevId: 161785867
* Make tf.write_file recursively create the directory it is saving to. (A. Unique TensorFlower, 2017-07-13)
  PiperOrigin-RevId: 161785793
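  A minimal sketch of the op whose behavior changed; the nested path is hypothetical:
  ```python
  import tensorflow as tf

  # tf.write_file now creates missing parent directories instead of failing.
  filename = tf.constant("/tmp/some/nested/dir/output.txt")
  contents = tf.constant("hello, world")
  write_op = tf.write_file(filename, contents)

  with tf.Session() as sess:
      sess.run(write_op)
  ```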
* Automated g4 rollback of changelist 161726749. (A. Unique TensorFlower, 2017-07-13)
  PiperOrigin-RevId: 161781962
* Update ops-related pbtxt files. (A. Unique TensorFlower, 2017-07-12)
  PiperOrigin-RevId: 161760675
* Support 16-bit quantized types in tf.bitcast. (A. Unique TensorFlower, 2017-07-12)
  PiperOrigin-RevId: 161760434
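  A sketch of what the added support presumably allows: bitcasting 16-bit integer data to the 16-bit quantized dtypes (values chosen for illustration):
  ```python
  import tensorflow as tf

  x = tf.constant([1, 2, 3], dtype=tf.int16)

  # Reinterpret the underlying 16-bit values as quantized types.
  as_qint16 = tf.bitcast(x, tf.qint16)
  as_quint16 = tf.bitcast(x, tf.quint16)

  with tf.Session() as sess:
      print(sess.run(as_qint16))
      print(sess.run(as_quint16))
  ```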
* Temporarily disable barrier_ops_test that sometimes times out. (A. Unique TensorFlower, 2017-07-12)
  PiperOrigin-RevId: 161752846
* Backport of https://github.com/grpc/grpc/pull/9421 in TensorFlow. (A. Unique TensorFlower, 2017-07-12)
  PiperOrigin-RevId: 161749205
* Allow HashTableOp to map int64 to float. (A. Unique TensorFlower, 2017-07-12)
  PiperOrigin-RevId: 161738207
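  A sketch of an int64-to-float table via the contrib lookup API; the key/value data are made up:
  ```python
  import tensorflow as tf

  keys = tf.constant([1, 2, 3], dtype=tf.int64)
  values = tf.constant([0.1, 0.2, 0.3], dtype=tf.float32)

  # An int64 -> float hash table, now supported by the underlying op.
  table = tf.contrib.lookup.HashTable(
      tf.contrib.lookup.KeyValueTensorInitializer(keys, values),
      default_value=-1.0)
  out = table.lookup(tf.constant([2, 4], dtype=tf.int64))

  with tf.Session() as sess:
      sess.run(table.init)
      print(sess.run(out))  # [0.2, -1.0]
  ```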
* Expose colocate_with. (A. Unique TensorFlower, 2017-07-12)
  PiperOrigin-RevId: 161738084
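  Assuming this lands as tf.colocate_with (the log does not say where it is exposed), usage would look roughly like:
  ```python
  import tensorflow as tf

  a = tf.constant([1.0, 2.0], name="a")

  # Ops created inside the block are constrained to be placed
  # on the same device as `a`.
  with tf.colocate_with(a):
      b = tf.reduce_sum(a, name="b")
  ```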
* Change HloBuffer, HloBufferSet and HloValueSet to hold pointers rather than ids. (A. Unique TensorFlower, 2017-07-12)
  This makes it easier to implement logic like returning the size of an HloBuffer, which requires knowing the underlying HloValues. No functional changes; only a change of representation. PiperOrigin-RevId: 161737042
* Add a description of "feedable iterators" to the Datasets programmers' guide. (Derek Murray, 2017-07-12)
  This is a potential solution to issue #2514. PiperOrigin-RevId: 161732107
* [TF:XLA] Implementing ResourceGather in TF2XLA. (A. Unique TensorFlower, 2017-07-12)
  PiperOrigin-RevId: 161730154
* TFTS: Re-enable pip GPU tests. (Allen Lavoie, 2017-07-12)
  I believe these were fixed with cl/161157061. tensorflow-cl-gpu-pip passing: https://ci.tensorflow.org/job/tensorflow-cl-presubmit-multijob/14043/ PiperOrigin-RevId: 161729658
* Go: Update generated wrapper functions for TensorFlow ops. (A. Unique TensorFlower, 2017-07-12)
  PiperOrigin-RevId: 161727345
* Factor out DenseUpdate ops into dense_update_functor build dep. (Eugene Brevdo, 2017-07-12)
  Also add support for complex types. PiperOrigin-RevId: 161726749
* Update ops-related pbtxt files. (A. Unique TensorFlower, 2017-07-12)
  PiperOrigin-RevId: 161726324
* adding bazel-toolchains repo to workspace. This repo will be necessary for remote execution (specifically for cross OS compilation). (A. Unique TensorFlower, 2017-07-12)
  PiperOrigin-RevId: 161719899
* Add a mechanism for switching between multiple iterators by feeding a handle. (Derek Murray, 2017-07-12)
  With this change, you can do the following:
  1. Fetch a string handle for any iterator, by evaluating the result of `Iterator.string_handle()`.
  2. Define an `Iterator` object based on a `tf.string` placeholder handle.
  3. Feed the placeholder using an evaluated string handle to use a particular iterator in a particular step.
  Concretely, this allows you to define two iterators for a training dataset and a test dataset, and choose which one to use on a per-run basis:
  ```python
  train_iterator = tf.contrib.data.Dataset(...).make_one_shot_iterator()
  train_iterator_handle = sess.run(train_iterator.string_handle())

  test_iterator = tf.contrib.data.Dataset(...).make_one_shot_iterator()
  test_iterator_handle = sess.run(test_iterator.string_handle())

  handle = tf.placeholder(tf.string, shape=[])
  iterator = tf.contrib.data.Iterator.from_string_handle(
      handle, train_iterator.output_types)

  next_element = iterator.get_next()
  loss = f(next_element)

  train_loss = sess.run(loss, feed_dict={handle: train_iterator_handle})
  test_loss = sess.run(loss, feed_dict={handle: test_iterator_handle})
  ```
  PiperOrigin-RevId: 161719836
* [TF:XLA] Fix an issue where the plugin/Executor backend is used by default when TF is built from source with XLA support. (Kay Zhu, 2017-07-12)
  See GitHub issue #11122. The priority of the executor backend is set to be higher than the default (50) and CPUs (<100), and is therefore selected as the default when tf.device is not explicitly specified. PiperOrigin-RevId: 161717173
* Rename HloLocation to HloPosition, to avoid ambiguity with MemoryLocation. (A. Unique TensorFlower, 2017-07-12)
  PiperOrigin-RevId: 161716528
* Expose tf.contrib.nn.rank_sampled_softmax_loss. (A. Unique TensorFlower, 2017-07-12)
  PiperOrigin-RevId: 161716450
* [TF:XLA] Use a more numerically accurate formulation of ResourceApplyRMSProp. (Peter Hawkins, 2017-07-12)
  PiperOrigin-RevId: 161706120
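  For context, the standard RMSProp update that ResourceApplyRMSProp implements is sketched below; the commit's specific numerical reformulation is not described in this log, so this is only the baseline form (g_t gradient, rho decay, mu momentum, eta learning rate, epsilon stability constant):
  ```latex
  \begin{aligned}
    m_t      &= \rho\, m_{t-1} + (1 - \rho)\, g_t^2 \\
    v_t      &= \mu\, v_{t-1} + \frac{\eta\, g_t}{\sqrt{m_t + \epsilon}} \\
    \theta_t &= \theta_{t-1} - v_t
  \end{aligned}
  ```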