Commit messages:
- Change: 117865583
- Change: 117862980
- Change: 117859536
- Change: 117854099
- Change: 117849583
- Change: 117839211
- We want explicit size arguments for all tests. Change: 117835174
- ...integer and adjusting some tests accordingly. Change: 117834924
- We want explicit size arguments for all tests. Change: 117832476
- Change: 117831771
- This prevents errors when large tensor constants are added to the
  graph, large values are fed to a step, and large values are fetched
  from a step. Thanks to @ms705 for uncovering this issue. Change: 117830058
- Change: 117828297
- Change: 117827602
- Change: 117825911
- Note that this is only the type, not support for it in any ops,
  so it is not useful for anything yet. In particular, neither
  TF_CALL_REAL_NUMBER_TYPES nor TF_CALL_GPU_NUMBER_TYPES lists
  Eigen::half, so even though many ops will end up declaring support
  for the new type, calling them will fail at runtime. Change: 117825461
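The Eigen::half entry above introduces an IEEE 754 binary16 type. As a TensorFlow-free illustration of what half precision can and cannot represent, here is a pure-Python round-trip through the `struct` module's binary16 format (the helper name `to_half_and_back` is ours, not from the codebase):

```python
import struct

def to_half_and_back(x):
    """Round-trip a Python float through IEEE 754 half precision.

    '<e' is the little-endian binary16 format code (Python 3.6+),
    so pack-then-unpack shows exactly what survives the conversion.
    """
    return struct.unpack('<e', struct.pack('<e', x))[0]

print(to_half_and_back(1.0))     # small values round-trip exactly
print(to_half_and_back(2049.0))  # in [2048, 4096) the spacing is 2, so 2049 rounds
```

With only an 11-bit significand, binary16 represents integers exactly only up to 2048; beyond that, adjacent representable values are 2 apart, which is why 2049 cannot survive the round trip.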
- Change: 117796203
- Now, the signature is (..., name=None) and variable scope uses name and
  default_name correctly, allowing the default name to be uniquified.
  Change: 117787114
- Previously only float and double variables were allowed on the GPU. Now
  anything other than string works. The same goes for Assign. AssignAdd
  and AssignSub have been left alone for now. Change: 117786325
- This CL provides a CPU kernel implementation. Change: 117778151
- This should be everything except for python, tensorboard, cc,
  contrib, and tools. Change: 117774679
- ...fraction processed. Also avoid one case of int overflow.
  Change: 117774225
- Change: 117767826
- Change: 117767157
- ...was originally defined. Change: 117766780
- ...computations to use 64 bits, to avoid overflow. Change: 117766779
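The entry above is about widening size computations to 64 bits. A pure-Python sketch (the names `wrap_int32` and `num_elements` are illustrative, not TensorFlow APIs) shows how an element count for a large shape can silently wrap negative in 32-bit arithmetic while arbitrary-precision (or 64-bit) arithmetic gets it right:

```python
def wrap_int32(x):
    """Simulate C-style 32-bit signed integer wraparound."""
    x &= 0xFFFFFFFF
    return x - (1 << 32) if x >= (1 << 31) else x

def num_elements(shape, wrap=lambda x: x):
    """Product of the dimensions, with an optional overflow model."""
    n = 1
    for dim in shape:
        n = wrap(n * dim)
    return n

shape = [92000, 92000]                  # ~8.5e9 elements, past int32's range
print(num_elements(shape, wrap_int32))  # wraps around to a negative count
print(num_elements(shape))              # correct with unbounded integers
```

The wrapped result is not just wrong but negative, the kind of value that later size checks and allocations trip over, which is the failure mode the 64-bit change avoids.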
- Reason: it can be confusing to have session.run(ops, run_outputs=...),
  as users might assume that run_outputs contains the outputs of the ops
  that were passed. In the C++ API it is even more confusing, since we have
  Session.Run(..., std::vector<Tensor>* outputs, RunOutputs* run_outputs).
  Change: 117764542
- ...decoded are within reasonable bounds. Change: 117761753
- ...underlying library. Change: 117761679
- Change: 117756292
- Change: 117755581
- ...validation that all output is written (or an error is generated) for
  all the SegmentReduction ops. Change: 117748193
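The SegmentReduction entry above is about guaranteeing that every output row is written or an error is raised. A minimal pure-Python sketch of a sorted segment sum (the function `segment_sum` is a hypothetical stand-in for the ops' semantics, not TensorFlow code) makes that contract concrete:

```python
def segment_sum(data, segment_ids):
    """Pure-Python sketch of a sorted segment sum.

    Every output row is written (zero for empty segments), and invalid
    segment ids raise instead of leaving rows uninitialized.
    """
    if len(data) != len(segment_ids):
        raise ValueError("data and segment_ids must have the same length")
    if segment_ids and segment_ids[0] < 0:
        raise ValueError("segment_ids must be non-negative")
    if any(b < a for a, b in zip(segment_ids, segment_ids[1:])):
        raise ValueError("segment_ids must be sorted")
    num_segments = (segment_ids[-1] + 1) if segment_ids else 0
    out = [0] * num_segments              # every output row starts written
    for value, seg in zip(data, segment_ids):
        out[seg] += value
    return out

print(segment_sum([1, 2, 3, 4], [0, 0, 2, 2]))  # segment 1 is empty, stays 0
```

Note that segment 1 never appears in `segment_ids`, yet its output row is still defined (as zero) rather than left as uninitialized memory.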
- Fire a custom "rendered" event for the event, image, histogram, and graph
  dashboards as a way to know that everything has rendered properly. The
  graph dashboard fires "rendered" after some custom d3 logic, while the
  other components fire it when attached, using async, as suggested by the
  migration guide: https://www.polymer-project.org/1.0/docs/migration.html#domready
  Change: 117741252
- ...calls. Move selective registration macros to their own file; this made
  it easier to #include <string.h> before the definition. Renamed
  SHOULD_REGISTER_OP to SHOULD_REGISTER_OP_KERNEL, and used
  SHOULD_REGISTER_OP for the new op.h filtering. Change: 117738751
- It is no longer necessary to build `grpc_tensorflow_server` to use the
  distributed runtime. All dependencies are now included when you
  `import tensorflow` in Python. This change also adds a convenience method
  for creating an in-process server that binds to any available port.
  Change: 117736352
- Add 1D and 3D non-batched FFT ops. Add 1D, 2D, and 3D batched FFT ops.
  Change: 117726732
- Change: 117717701
- Fixes #1569. Change: 117716771
- Change: 117682109
- This implements "bool strictness" for TensorFlow, which is intended to
  improve usability. It removes the ambiguity between testing whether a
  tensor is defined (which should be done using `is None`/`is not None`
  tests) and whether it evaluates to `True` (which should be done using
  explicit logical TensorFlow operations). See Issue #1454 for more
  details. IF THIS BREAKS YOU: replace all uses of `if tensor:` with
  `if tensor is not None:` and all uses of `if not tensor:` with
  `if tensor is None:`. Change: 117676227
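The bool-strictness entry above describes the migration in prose. A pure-Python sketch (a minimal `Tensor` stand-in we define here, not tf.Tensor itself) shows both the pattern that now fails and the recommended replacement:

```python
class Tensor:
    """Minimal stand-in: like a tensor under bool strictness,
    truth-testing it raises instead of silently building a graph op."""

    def __bool__(self):
        raise TypeError(
            "Using a Tensor as a Python bool is not allowed; "
            "use `tensor is not None` to test for presence.")

t = Tensor()

try:
    if t:                  # the old, ambiguous pattern now raises
        pass
except TypeError as err:
    print("raised:", err)

if t is not None:          # the recommended replacement: presence test
    print("tensor is present")
```

The point of the change is exactly this split: `is not None` asks "was a tensor provided at all?", while truthiness would have asked a question that has no well-defined eager answer for a symbolic tensor.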
- ...here since it's a test and the value is 10). Change: 117630192
- Change: 117630187
- Change: 117611495
- ...third_party/eigen3 copy to being part of TF, add tests."
  Change: 117608627
- Change: 117608343
- Make histogram_ops visible. Make histogram_ops.histogram_fixed_width
  return a histogram derived from the current inputs only, rather than
  accumulating across calls. Change: 117602117
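The histogram entry above changes histogram_fixed_width to be stateless. A pure-Python sketch of fixed-width binning (our own function, assuming out-of-range values clamp to the edge bins; it is not the TensorFlow implementation) illustrates the computation on one batch of inputs:

```python
def histogram_fixed_width(values, value_range, nbins):
    """Pure-Python sketch of stateless fixed-width histogram binning.

    Splits [lo, hi) into nbins equal-width bins; values outside the
    range are counted in the first/last bin (clamping assumption).
    """
    lo, hi = value_range
    width = (hi - lo) / nbins
    counts = [0] * nbins
    for v in values:
        idx = int((v - lo) / width)
        idx = max(0, min(nbins - 1, idx))  # clamp outliers to edge bins
        counts[idx] += 1
    return counts

print(histogram_fixed_width([-1.0, 0.0, 1.5, 2.0, 5.0, 15.0], (0.0, 10.0), 5))
```

Being derived from the current inputs only, calling it twice on the same values yields the same counts, rather than doubling them as an accumulating version would.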
- Change: 117601377
- Change: 117599224
- Move the GPU-neutral code to common_runtime. Change: 117591254
- Change: 117590857
- Change: 117590840