Commit message | Author | Age
Change: 123901292
Change: 123900938
Change: 123900456
Change: 123898834
Add linear, relu, relu6.
Remove legacy_relu6 and legacy_convolution2d.
Change: 123898222
(nodes with one non-ref output and one consumer), and places it
preferentially with its consumer.
For example:
assign
/ \
var input
In the above graph, assign is bound to the device of 'var' due to the
reference edge. This heuristic binds 'input' to the same device as
the assign, because it has only one consumer.
This addresses the general problem of colocating initializers with
their variables, and similar other cases. There are very few reasons
to want to place the 'input' on a node other than its consumer (there
are some contrived cases, but that's why this is a heuristic).
This CL adds a test case for this small example above, illustrative
of the general problem.
An extension of this CL would be to do the same thing not just for
single output / single consumer nodes, but whenever all out edges of
a node connect to the same 'colocation group'.
Change: 123896863
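The heuristic in the commit message above can be sketched in plain Python. This is a minimal illustrative model, not TensorFlow's placer code: the `Node` class, `device` field, and `colocate_single_consumer` helper are all hypothetical names invented for this sketch.

```python
# Hypothetical sketch of the heuristic described above: a node with a single
# (non-ref) output and exactly one consumer is placed on its consumer's
# device. All names here are illustrative, not TensorFlow internals.

class Node:
    def __init__(self, name, device=None):
        self.name = name
        self.device = device      # assigned device, or None if unconstrained
        self.consumers = []       # nodes that read this node's output

def colocate_single_consumer(node):
    """Bind an unconstrained, single-consumer node to its consumer's device."""
    if node.device is None and len(node.consumers) == 1:
        consumer = node.consumers[0]
        if consumer.device is not None:
            node.device = consumer.device

# The example graph from the commit message:  assign <- {var, input}
var = Node("var", device="/cpu:0")
assign = Node("assign", device="/cpu:0")  # pinned to var's device by the ref edge
inp = Node("input")                       # no device constraint yet
inp.consumers.append(assign)

colocate_single_consumer(inp)
print(inp.device)  # "/cpu:0" -- input lands with its consumer
```

The extension mentioned at the end of the message would relax the single-consumer check to "all out edges lead into one colocation group".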
Change: 123889304
Improve docs for losses and metrics.
Change: 123889291
Change: 123889091
Change: 123888258
Change: 123887885
Change: 123887298
Required for fixing the Jenkins build.
Change: 123886969
This is the result of the investigation of
https://github.com/tensorflow/tensorflow/issues/2540
Change: 123879085
Otherwise, the non-chiefs will write event files, which confuses TensorBoard.
Change: 123874316
CL 123201123 moved numpy from install_deb_packages.sh to install_pip_packages.sh. So numpy needs to be added back to the TensorBoard install list in this file.
Change: 123864373
Change: 123860431
Change: 123859598
Change: 123858823
Change: 123856343
Change: 123831122
Change: 123824296
Change: 123823861
Change: 123823505
Change: 123823213
Change: 123821982
Change: 123821418
base class to Classifier and Regressor.
Change: 123818693
Change: 123811386
list of extra CUDA capabilities must be passed as a comma-separated sequence.
With Bazel, the build-command argument to include "3.0" is:
--copt=-DTF_EXTRA_CUDA_CAPABILITIES=3.0
Change: 123810335
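A sketch of the comma-separated form the flag expects when more than one capability is listed. The bazel target in the comment is illustrative and not taken from the commit message; only the `-DTF_EXTRA_CUDA_CAPABILITIES` define itself comes from the text above.

```shell
# Illustrative build invocation (target name is an assumption):
#
#   bazel build --config=cuda \
#     --copt=-DTF_EXTRA_CUDA_CAPABILITIES=3.0,5.2 \
#     //tensorflow/tools/pip_package:build_pip_package
#
# Joining a whitespace-separated list of capabilities into the
# comma-separated form the define expects:
caps="3.0 5.2"
echo "--copt=-DTF_EXTRA_CUDA_CAPABILITIES=$(echo "$caps" | tr ' ' ',')"
# prints --copt=-DTF_EXTRA_CUDA_CAPABILITIES=3.0,5.2
```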
Change: 123808718
Change: 123806454
Fixes #2575.
Change: 123805910
trick work when TF is imported as a submodule.
Change: 123805260
Change: 123801187
auxiliary test method make_dense_variable_dict. These two variables are not used anywhere in SDCASolver. Right now this auxiliary method is more confusing than helpful.
Change: 123799466
Change: 123796192
Change: 123791949
registration.
Change: 123787111
Change: 123786681
In particular, I ran into a case where `tf.reduce_mean(.., None)`
was not properly covered, and that surprised me during some other
related change.
Change: 123775966
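The behavior the commit message refers to — a reduction axis of `None` meaning "reduce over all elements" — can be illustrated without TensorFlow. This is a toy plain-Python analogue of `tf.reduce_mean` for the `axis=None` case only; the `flatten` helper and the function itself are invented for this sketch.

```python
# Toy analogue (plain Python, no TensorFlow) of the axis=None reduction
# semantics: reduce over every element, producing a scalar.

def flatten(x):
    """Yield all scalar elements of an arbitrarily nested list."""
    if isinstance(x, list):
        for item in x:
            yield from flatten(item)
    else:
        yield x

def reduce_mean(x, axis=None):
    """Mean over all elements when axis is None (the only case sketched)."""
    if axis is None:
        elems = list(flatten(x))
        return sum(elems) / len(elems)
    raise NotImplementedError("only the axis=None case is sketched here")

print(reduce_mean([[1.0, 2.0], [3.0, 4.0]], None))  # 2.5
```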
Change: 123769695
one from library-construction time. This is to properly support inter-op thread
pools with the function library.
Also change testlib graph construction to pass through the Graph's op_registry.
Change: 123761344
serialize_tensorboard script
to write run metadata to a separate file.
Change: 123756242
Change: 123716704
Change: 123716189
ShapeN may have many inputs, so just looking at the first one is probably
not the right thing to do.
Change: 123714692
Change: 123712620
Change: 123710536
shape to be inferred and lets us avoid setting the static shape of the
result.
Change: 123710096