Commit message
Update nightly pip wheel and history URLs after upgrade to ubuntu:16.04
Update doc string to indicate clip_by_value accepts tensors as min and max arguments too (#7692)
* Update doc string to indicate clip_by_value accepts tensors as min and max arguments too
* Fix typo
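For illustration, a minimal sketch of the documented behavior (TF 1.x API assumed; the values are made up):

  import tensorflow as tf

  t = tf.constant([[-2.0, 0.5, 3.0]])
  # Per-element bounds passed as tensors (broadcastable to t), not just scalars.
  low = tf.constant([[-1.0, 0.0, 0.0]])
  high = tf.constant([[1.0, 1.0, 2.0]])
  clipped = tf.clip_by_value(t, clip_value_min=low, clip_value_max=high)

  with tf.Session() as sess:
      print(sess.run(clipped))  # expected: [[-1.  0.5  2.]]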
* Error message improvement
See https://github.com/tensorflow/tensorflow/issues/7675
* Spacing fix
Make learn/mnist.py work again
This has already been replaced with the nightly-android links.
Fix what is probably a slip of the pen (typo)
Branch 147845195
* Fix build error where nccl requires the -lrt link option
* Remove config_setting defines in nccl.BUILD and curl.BUILD
* Fix typo in curl.BUILD
* Update shape checking logic in einsum
* Fix typo
* Make the einsum modifications more structured and simpler
* Remove unnecessary parts
* Fix indentation
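A rough sketch of the kind of shape agreement the einsum check enforces (illustrative only, not code from this change; TF 1.x API assumed):

  import tensorflow as tf

  a = tf.ones([2, 3])
  b = tf.ones([3, 4])
  c = tf.einsum('ij,jk->ik', a, b)  # the shared index 'j' must have matching size
  print(c.shape)  # (2, 4)

  # A mismatch on a shared index, e.g. b with shape [5, 4], should be rejected
  # at graph-construction time by the shape checking logic.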
Change: 147845195
Change: 147844228
* Update LSTMBlockCell and LSTMBlockFusedCell to use LSTMStateTuple as state
* Revert the modification to the return values of LSTMBlockFusedCell
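A minimal usage sketch, assuming the contrib-era tf.contrib.rnn.LSTMBlockCell; sizes here are illustrative only:

  import tensorflow as tf

  cell = tf.contrib.rnn.LSTMBlockCell(num_units=64)
  inputs = tf.placeholder(tf.float32, [8, 32])              # [batch, input_depth]
  state = cell.zero_state(batch_size=8, dtype=tf.float32)   # an LSTMStateTuple(c, h) after this change
  output, new_state = cell(inputs, state)
  print(new_state.c.shape, new_state.h.shape)               # (8, 64) (8, 64)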
Change: 147837972
Add x_transform_train = vocab_processor.fit_transform(x_train) and x_transform_test = vocab_processor.transform(x_test) so that the code is easier for programmers to understand
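A small sketch of that pattern, assuming the contrib-era VocabularyProcessor; the toy strings are made up:

  import numpy as np
  from tensorflow.contrib import learn

  x_train = np.array(['a cat sat', 'a dog barked'])
  x_test = np.array(['the cat barked'])

  vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length=4)
  # fit_transform learns the vocabulary from the training text and maps it to ids;
  # transform reuses that vocabulary for the test text.
  x_transform_train = vocab_processor.fit_transform(x_train)
  x_transform_test = vocab_processor.transform(x_test)

  x_train_ids = np.array(list(x_transform_train))
  x_test_ids = np.array(list(x_transform_test))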
Branch 147800865
snapshot. Variables may create another snapshot or their ref may be exposed
via public API (e.g., var.op.outputs[0] or graph.as_graph_element(var) which
happens fairly often inside libraries or collection serialization). On the
other hand, tf.gradients() uses convert_to_tensor(), which returns a snapshot,
and gradients were computed with respect to this particular snapshot, which
makes the gradients incorrect.
Change: 147800865
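A minimal sketch of the situation described above (TF 1.x graph mode assumed; not code from this change):

  import tensorflow as tf

  v = tf.Variable(2.0)
  snapshot = tf.convert_to_tensor(v)  # a read (snapshot) of the variable
  ref = v.op.outputs[0]               # the variable's ref tensor, as mentioned above

  # A graph built against the ref rather than against that particular snapshot:
  y = ref * ref

  # tf.gradients() converts `v` via convert_to_tensor(), i.e. it differentiates with
  # respect to a snapshot; when y was built from the ref or a different snapshot,
  # the gradient could come out wrong (or None) before this change.
  grad = tf.gradients(y, v)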
Change: 147788449
Change: 147788426
instructions.
Change: 147787804
Change: 147783087
Change: 147781771
Change: 147779725
Change: 147778589
Windows builds break on the following simplified example:
  template <typename Key, ...>
  class FlatSet {
   public:
    typedef Key value_type;
    class const_iterator {
     public:
      typedef FlatSet::value_type value_type;  // Fails on Windows
    };
  };
The build succeeds by adding 'typename':
      typedef typename FlatSet::value_type value_type;  // OK on Windows
<simplified log of compiler error>
flatmap.h(110): warning C4346: 'difference_type': dependent name is not a type (compiling source file ...cancellation.cc)
flatmap.h(110): note: prefix with 'typename' to indicate a type (compiling source file ...cancellation.cc)
flatmap.h(161): note: see reference to class template instantiation 'tensorflow::gtl::FlatMap<Key,Val,Hash,Eq>::iterator' being compiled (compiling source file ...cancellation.cc)
flatmap.h(376): note: see reference to class template instantiation 'tensorflow::gtl::FlatMap<Key,Val,Hash,Eq>' being compiled (compiling source file ...cancellation.cc)
flatmap.h(110): error C2061: syntax error: identifier 'difference_type' (compiling source file ...cancellation.cc)
</simplified log of compiler error>
This is a bug in the Windows compiler. It is true that FlatSet::value_type is a
dependent name, but it also refers to the "current instantiation", so 'typename'
shouldn't be required. For details see:
http://en.cppreference.com/w/cpp/language/dependent_name#Current_instantiation
But it doesn't hurt to add typename; it is simply redundant.
Change: 147776071
We don't actually run these tests, so building them is a waste of time. They can also cause surprising interactions with the environment (e.g. see issue #7374), in cases where some of the test dependencies are available.
Change: 147770857
Branch 147758266
microseconds from milliseconds.
Change: 147764063
Change: 147763615
Change: 147761442
Change: 147758266
Change: 147757405
timeouts on test infra.
Change: 147757378
Merge 1.0.0 back to master
Branch 147741833
This is a port of the Keras get_output_shape_for layer method, and the name
change was discussed with Francois first.
Implemented this method for the tf.layers.Dense class.
Change: 147753001
closure.
Also fixes a potential memory leak, where the worker-side state of a failed Run() call
would not be cleaned up.
Change: 147752067
There is a hidden dependency between when apply_gradients() and get_chief_queue_runner() are called. This CL postpones creation of the queue to the initialization of the Session. In Estimator, the Session is created after the graph/training op has been formed, i.e. after apply_gradients().
Change: 147746938
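A rough sketch of the ordering constraint; the message does not name the class, but apply_gradients()/get_chief_queue_runner() match tf.train.SyncReplicasOptimizer, which is assumed here:

  import tensorflow as tf

  w = tf.Variable(1.0)
  loss = tf.square(w - 3.0)
  global_step = tf.Variable(0, trainable=False, name='global_step')

  opt = tf.train.SyncReplicasOptimizer(
      tf.train.GradientDescentOptimizer(0.1),
      replicas_to_aggregate=1,
      total_num_replicas=1)

  train_op = opt.minimize(loss, global_step=global_step)  # calls apply_gradients()

  # get_chief_queue_runner() relies on state set up by apply_gradients(), so the
  # call order matters; deferring queue setup to Session initialization lets
  # Estimator, which creates the Session last, avoid the hidden ordering trap.
  queue_runner = opt.get_chief_queue_runner()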
operators on a device into a struct.
No functional changes.
Change: 147741833