| Commit message | Author | Age |
allocated once in the OpKernelContext::Params struct, then re-used every time a new OpKernelContext uses the Params. Thus in the executor, as long as there is more work to do, the PerOpGpuDevice is not freed.
Change: 112909215
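The allocate-once, reuse-per-op pattern described above can be sketched as follows. This is a minimal Python illustration of the idea only; the real PerOpGpuDevice, Params, and OpKernelContext are C++ classes, and the method names here are assumptions:

```python
# Illustrative sketch: an expensive per-op device wrapper is created once,
# owned by the shared Params block, and borrowed by every OpKernelContext.

class PerOpGpuDevice:
    """Stand-in for the per-op GPU device wrapper; expensive to create."""
    instances_created = 0

    def __init__(self):
        PerOpGpuDevice.instances_created += 1


class Params:
    """Shared parameter block: allocates the device lazily, then reuses it."""

    def __init__(self):
        self._device = None

    def eigen_gpu_device(self):
        # Allocate on first use; every later call returns the same object.
        if self._device is None:
            self._device = PerOpGpuDevice()
        return self._device


class OpKernelContext:
    """Each op gets a fresh context, but borrows the shared device."""

    def __init__(self, params):
        self.params = params

    def device(self):
        return self.params.eigen_gpu_device()


# The executor creates many contexts from one Params; only one device exists.
params = Params()
for _ in range(3):
    ctx = OpKernelContext(params)
    dev = ctx.device()  # same PerOpGpuDevice object every iteration
```

The point of the change is visible in the sketch: the device's lifetime is tied to the Params, not to any individual OpKernelContext.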
no outputs. Fixes #856
Change: 112903800
Change: 112903292
We still haven't advanced to the scalar strict GraphDef version, but this
change will prevent some (not all) new Python scripts from violating scalar
strictness. That is, it will significantly reduce the amount of bit rot
between now and when I finally get full scalar strictness submitted.
Change: 112866199
Change: 112846663
Change: 112846648
one from the set when we find one we want is cheaper. Slight performance
improvement (~0.3% on ptb_word_lm model on my desktop).
Change: 112832451
Change: 112830289
step. Fixes #837
Change: 112829709
new TensorFlow backend.
Change: 112826468
* The "core_cpu_internal" build target no longer includes files from the
common_runtime/gpu/ directory.
* tensorflow/core internal targets instead can get access to those headers via
the "gpu_runtime" target.
* The class "CopyTensor" is introduced. It lives in common_runtime/
but supports registration of copy functions so the "gpu_runtime"
target can add a GPU->GPU copy ability if it is linked in.
This registration should make it easier to add more device types
in the future.
* The "core_cpu" and "core_cpu_internal" build targets no longer
reference GPUUtil::CopyViaDMA; rendezvous_mgr uses CopyTensor
instead.
Also the "copy_tensor" build target was not needed.
Change: 112821119
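The registration mechanism this entry describes can be sketched as a small registry keyed by device-type pair. This is a hypothetical Python illustration of the design, not the real CopyTensor API (which is a C++ class in common_runtime/); all names below are assumptions:

```python
# Sketch of a copy-function registry in the spirit of CopyTensor:
# the core library dispatches through the registry, and an optional
# GPU runtime registers extra (src, dst) pairs only if it is linked in.

_copy_fns = {}

def register_copy(src_type, dst_type, fn):
    """Register a copy function for a (source, destination) device pair."""
    _copy_fns[(src_type, dst_type)] = fn

def copy_tensor(src_type, dst_type, tensor):
    """Dispatch to the registered copy function, if any."""
    fn = _copy_fns.get((src_type, dst_type))
    if fn is None:
        raise NotImplementedError(
            f"no copy function registered for {src_type}->{dst_type}")
    return fn(tensor)

# A CPU-only build registers just this pair:
register_copy("CPU", "CPU", lambda t: list(t))

# A GPU runtime, when linked in, could additionally register:
register_copy("GPU", "GPU", lambda t: list(t))  # stand-in for a DMA copy
```

The design choice is that the core target never references GPU code directly; linking in the GPU runtime simply populates more registry entries, which is why this pattern extends cleanly to new device types.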
* Move some checks out of inner loops
* Split the mapper in two: a base mapper and a sub-mapper. This reduces the number of variables contained in the base mapper and helps reduce register spills.
Change: 112809881
size of a CNN.
Change: 112809773
Change: 112808739
If multiple steps are blocked on the same queue, there is a high
chance that their cancellation tokens will collide, since these are
dense and start at 0. This can lead to the wrong step being cancelled.
This change additionally uses the `CancellationManager*` to uniquely
identify an attempt for the purposes of cancellation.
Change: 112801810
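The collision and its fix can be sketched in a few lines. Tokens are dense integers starting at 0 per manager, so two managers can issue the same token; keying pending attempts by (manager identity, token) disambiguates them. This Python sketch uses hypothetical names, not the real C++ CancellationManager API:

```python
# Sketch: dense per-manager tokens collide across managers, so queue
# bookkeeping keys blocked attempts by (manager identity, token)
# instead of by token alone.

class CancellationManager:
    def __init__(self):
        self._next = 0

    def get_token(self):
        t = self._next          # tokens are dense and start at 0
        self._next += 1
        return t

pending = {}  # (id(manager), token) -> blocked attempt

def register(manager, token, attempt):
    pending[(id(manager), token)] = attempt

def cancel(manager, token):
    """Cancel exactly the attempt registered under this manager's token."""
    return pending.pop((id(manager), token), None)

# Two steps blocked on the same queue, each with its own manager:
m1, m2 = CancellationManager(), CancellationManager()
t1, t2 = m1.get_token(), m2.get_token()  # both 0: a collision by token alone
register(m1, t1, "step-1")
register(m2, t2, "step-2")
```

With token-only keys, cancelling token 0 could have hit either step; with the composite key, each cancellation reaches only its own attempt.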
histogram_fixed_width, which updates a histogram Variable with new_values.
Change: 112800368
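The binning semantics of a fixed-width histogram can be sketched in pure Python. This is an illustration of the concept only, not the real tf.histogram_fixed_width op; in particular, clamping out-of-range values into the edge bins is an assumption made here for the sketch:

```python
def histogram_fixed_width(new_values, value_range, nbins):
    """Pure-Python sketch of fixed-width histogram binning.

    Splits [value_range[0], value_range[1]) into nbins equal-width bins
    and counts how many of new_values fall into each. Values outside the
    range are assumed (for this sketch) to land in the first or last bin.
    """
    lo, hi = value_range
    width = (hi - lo) / nbins
    counts = [0] * nbins
    for v in new_values:
        idx = int((v - lo) / width)
        idx = max(0, min(nbins - 1, idx))  # clamp into the edge bins
        counts[idx] += 1
    return counts
```

For example, `histogram_fixed_width([0.0, 5.0, 10.0], (0.0, 10.0), 5)` uses bins of width 2 and puts the value 10.0 into the last bin via clamping.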
The two functions already have the same behavior, and ShortDebugString
will disappear soon.
Change: 112793490
Change: 112755081
These are undocumented features and the API can and will change.
Change: 112733605
all .h from common_runtime/gpu/.
Change: 112732380
updates a histogram Variable with new_values.
Change: 112728652
After this we can replace port.h with types.h.
Change: 112727463
The standard Graph::ToGraphDef function does not correctly handle the graphs produced by TensorFlow function expansion, e.g. inlining often produces duplicate node names.
This CL also adds a parameter which can be used to produce more legible node names which help when viewing the GraphDef in the TensorFlow graph visualizer.
Change: 112719928
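The duplicate-name problem from inlining can be handled by uniquifying names during serialization. This is a minimal Python sketch of the idea, with an illustrative suffix scheme; it is not the actual logic of the TensorFlow serializer:

```python
def uniquify_node_names(names):
    """Sketch: make duplicate node names unique by appending a counter,
    since function inlining can otherwise produce colliding names."""
    seen = set()
    out = []
    for name in names:
        candidate, i = name, 0
        # Keep bumping the suffix until the candidate is fresh, so a
        # pre-existing "x_1" cannot collide with a generated one.
        while candidate in seen:
            i += 1
            candidate = f"{name}_{i}"
        seen.add(candidate)
        out.append(candidate)
    return out
```

For example, inlining the same function body twice might yield two nodes named "add"; after uniquification they serialize as "add" and "add_1", which also makes the GraphDef easier to read in the graph visualizer.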
Change: 112709520
Change: 112703190
Change: 112699191
Change: 112688071
to discard out of order events only within a particular tag. This was changed because race conditions in the supervisor were causing many events to be unintentionally discarded.
Change: 112644077
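Per-tag (rather than global) out-of-order filtering can be sketched as a dictionary of the last step seen for each tag. This is a hypothetical Python illustration of the behavior described, not the actual event-handling code:

```python
class PerTagOrderFilter:
    """Sketch: discard out-of-order events per tag rather than globally,
    so a stale step for one tag does not cause another tag's events
    to be dropped."""

    def __init__(self):
        self._last_step = {}  # tag -> highest step accepted so far

    def accept(self, tag, step):
        last = self._last_step.get(tag)
        if last is not None and step <= last:
            return False  # out of order within this tag: discard
        self._last_step[tag] = step
        return True
```

With a single global step counter, an event for tag "accuracy" at step 0 arriving after "loss" at step 1 would be discarded; tracking steps per tag accepts both.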
1. There is a new tf.unsupported module to hold things which some people use
but which we don't yet support.
2. tf.tensor_util.ConstantValue is now tf.unsupported.constant_value. Most
users use this, but tf.tensor_util.ConstantValue is still available; it
will be removed in a following commit.
3. tensor_util.MakeTensorShapeProto is now make_tensor_shape_proto. It looks
like all users of this access the tensor_util module directly (not through
tf), so for now it is not in unsupported.
This commit does not remove tensor_util from tf.__all__; a few more downstream
users must be changed before that can happen.
Change: 112626961
we copy the original files to their new location and make the public/
versions #include the new location. Once all references are updated
to point to the new location, we can delete the originals in public/.
Change: 112622561
Previously this would pass on the exception message from
`TensorShape.merge_with()`, which is cryptic for users who don't (and
shouldn't need to) understand how shape inference works. This would
arise, for example, in the error message for `tf.matmul()` when passed
a non-matrix, as noted here:
http://stackoverflow.com/questions/34908033/tensorflow-exception-with-matmul
Change: 112621185
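The kind of error-message improvement described here amounts to checking ranks up front and raising a message in the caller's terms. This is a hypothetical Python sketch of the pattern, not the real `tf.matmul` shape-inference code:

```python
def check_matmul_ranks(a_shape, b_shape):
    """Sketch: surface a rank/shape mismatch directly, instead of letting
    a low-level shape-merge error (cryptic to end users) leak through.
    Shapes are plain tuples of ints here."""
    if len(a_shape) != 2 or len(b_shape) != 2:
        raise ValueError(
            f"matmul expects rank-2 inputs, got shapes {a_shape} and {b_shape}")
    if a_shape[1] != b_shape[0]:
        raise ValueError(
            f"inner dimensions must match: {a_shape} vs {b_shape}")
```

Passing a vector where a matrix is expected now produces "matmul expects rank-2 inputs ...", which tells the user what to fix, rather than an internal merge failure from the shape-inference machinery.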
Change: 112615357
Change: 112611994
Change: 112611228
the public section in a sensible order (smallest to largest, matching
the documentation comment).
Plus:
* Update documentation to reflect that test_main is not public.
* Remove "friends" package_group now that it is unused.
Change: 112605117
Change: 112591828
The old behavior of DebugString is needlessly verbose and is quite confusing
for scalar shapes (it produced the empty string). Now DebugString is the same
as ShortDebugString.
A future commit will remove ShortDebugString.
Change: 112590646
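The scalar-shape confusion is easy to see in a sketch: joining dimension strings with no surrounding brackets renders a scalar shape as the empty string. A bracketed form avoids that. This Python function is illustrative only, not the real TensorShape::DebugString:

```python
def shape_debug_string(dims):
    """Sketch: render a shape as '[d0,d1,...]' so a scalar (rank-0) shape
    prints as '[]' rather than as the empty string."""
    return "[" + ",".join(str(d) for d in dims) + "]"
```

With the old bracket-free style, a scalar shape and a missing shape were indistinguishable in logs; `"[]"` makes the scalar case explicit.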
Change: 112523833
Change: 112505342