| Commit message | Author | Age |
so it runs faster (11s on tsan instead of ~30s).
Change: 147893428
Change: 147892613
environment variables. Add a benchmark_util library that uses an environment
variable to decide on a storage location.
Change: 147890534
Change: 147888410
hexagon
Change: 147886830
informative error message. Use queues or TensorArray to feed tensors into a while loop.
Change: 147884461
Change: 147881489
1D convolutions are executed as 2D convolutions with an extra singleton dimension.
Change: 147877435
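The reduction described above can be sketched in plain NumPy (a toy illustration with hypothetical helper names, not TensorFlow's actual kernels): a 1D convolution becomes a 2D one by inserting a singleton height dimension into both the input and the filter.

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive 'valid' 2D cross-correlation (ML-style 'convolution')."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def conv1d_via_2d(x, k):
    """Run a 1D convolution as a 2D one with an extra singleton dimension."""
    return conv2d_valid(x[np.newaxis, :], k[np.newaxis, :])[0]

x = np.array([1.0, 2.0, 3.0, 4.0])
k = np.array([1.0, -1.0])
print(conv1d_via_2d(x, k))  # same result as np.correlate(x, k, mode='valid')
```

Because the extra dimension has size 1, the 2D loop degenerates to the 1D sliding window, so the two formulations compute the same values.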
override it and do something different if needed.
Change: 147875810
Change: 147871989
Change: 147867469
A rematerialized instruction is cloned one or more times, and the original
rematerialized instruction may be deleted if no uses remain. Previously
this duplication (or reduction) of total computation was not accounted for
in the rematerialization cost analysis.
Change: 147864843
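The accounting described above can be sketched as a toy model in Python (hypothetical structure and names, not XLA's real cost analysis): each clone re-executes the instruction, and the original still executes unless it was deleted.

```python
# Toy model: total compute after rematerialization counts each instruction's
# base cost once per surviving copy (clones, plus the original if it still
# has uses). Names and structure are illustrative, not XLA's actual code.

def rematerialization_compute_cost(base_costs, num_clones, original_deleted):
    """base_costs: name -> cost of one execution.
    num_clones: name -> clones made by rematerialization.
    original_deleted: names whose original instruction has no uses left."""
    total = 0.0
    for name, cost in base_costs.items():
        copies = num_clones.get(name, 0)
        if name not in original_deleted:
            copies += 1  # the original instruction still executes
        total += cost * copies
    return total

costs = {"add": 1.0, "dot": 10.0}
print(rematerialization_compute_cost(costs, {"dot": 2}, set()))    # 1 + 3*10
print(rematerialization_compute_cost(costs, {"dot": 2}, {"dot"}))  # 1 + 2*10
```

The difference between the two calls is exactly the duplication (or reduction) of work the commit says was previously unaccounted for.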
TensorFlow uses Bazel to build and test.
However, the TensorFlow Go API is targeted for use with the 'go' tool.
This commit:
- Adds a shell test so that usage with the 'go' tool can be tested with
'bazel test //tensorflow/go/...'
- Installs Go in the images used in the continuous build
Change: 147864583
Change: 147861747
Change: 147858835
Change: 147858700
Change: 147853458
Have the backend reload 5 seconds after the last reload finishes (instead of 60) and have the frontend refresh every 30 seconds.
Change: 147848903
Change: 147845195
Change: 147844228
Change: 147837972
snapshot. Variables may create another snapshot, or their ref may be exposed
via the public API (e.g., var.op.outputs[0] or graph.as_graph_element(var),
which happens fairly often inside libraries or during collection
serialization). On the other hand, tf.gradients() uses convert_to_tensor(),
which returns a snapshot, and gradients were computed with respect to this
particular snapshot, which made the gradients incorrect.
Change: 147800865
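A toy numeric illustration of why differentiating a snapshot is wrong when the variable is also read through its ref (plain Python arithmetic, not TensorFlow's actual mechanics): if y = a * b where both a and b are reads of the same variable v, but only a is the snapshot, then a gradient taken with respect to the snapshot alone misses the contribution through b.

```python
# y = a * b, where a is the snapshot read of v and b is a ref read of the
# same v (e.g. exposed via var.op.outputs[0]). Illustrative only.
v = 3.0
a = v  # snapshot read (convert_to_tensor)
b = v  # ref read reaching the graph through another path

grad_wrt_snapshot = b      # d(a*b)/da, treating only the snapshot as v
grad_wrt_variable = a + b  # correct d(v*v)/dv = 2v

print(grad_wrt_snapshot, grad_wrt_variable)  # 3.0 6.0
```

Here the snapshot-only gradient is off by the entire ref-path contribution, which is the incorrectness the commit describes.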
Change: 147788449
Change: 147788426
instructions.
Change: 147787804
Change: 147783087
Change: 147781771
Change: 147779725
Change: 147778589
Windows builds break on the following simplified example:
template <typename Key, ...>
class FlatSet {
 public:
  typedef Key value_type;
  class const_iterator {
   public:
    typedef FlatSet::value_type value_type;  // Fails on Windows
  };
};
The build succeeds by adding 'typename':
  typedef typename FlatSet::value_type value_type;  // OK on Windows
<simplified log of compiler error>
flatmap.h(110): warning C4346: 'difference_type': dependent name is not a type (compiling source file ...cancellation.cc)
flatmap.h(110): note: prefix with 'typename' to indicate a type (compiling source file ...cancellation.cc)
flatmap.h(161): note: see reference to class template instantiation 'tensorflow::gtl::FlatMap<Key,Val,Hash,Eq>::iterator' being compiled (compiling source file ...cancellation.cc)
flatmap.h(376): note: see reference to class template instantiation 'tensorflow::gtl::FlatMap<Key,Val,Hash,Eq>' being compiled (compiling source file ...cancellation.cc)
flatmap.h(110): error C2061: syntax error: identifier 'difference_type' (compiling source file ...cancellation.cc)
</simplified log of compiler error>
This is a bug in the Windows compiler. It is true that FlatSet::value_type is a
dependent name, but it also refers to the "current instantiation", so 'typename'
shouldn't be required. For details see:
http://en.cppreference.com/w/cpp/language/dependent_name#Current_instantiation
Adding 'typename' doesn't hurt, though; it is simply redundant.
Change: 147776071
Change: 147770857
microseconds from milliseconds.
Change: 147764063
Change: 147763615
Change: 147761442
Change: 147758266
Change: 147757405
timeouts on test infra.
Change: 147757378
This is a port of the Keras get_output_shape_for layer method; the name
change was discussed with Francois first.
Implemented this method for the tf.layers.Dense class.
Change: 147753001
closure.
Also fixes a potential memory leak, where the worker-side state of a failed Run() call
would not be cleaned up.
Change: 147752067
There is a hidden dependency between when 'apply_gradient' and get_chief_queue_runner() are called. This CL postpones creation of the queue to the initialization of the Session. In Estimator, the Session is created after the graph/training op is formed, which means it is created after 'apply_gradient' is called.
Change: 147746938
operators on a device into a struct.
No functional changes.
Change: 147741833
Change: 147738820
- Add a unittest for TF_SessionPRun
- Add TF_DeletePRunHandle.
This provides a way for callers to safely delete the handle
(without knowing the details of the allocator used by TF_SessionPRunSetup)
and also provides a placeholder for cancellation of PRun state in the future.
Change: 147738191
Change: 147737818
Change: 147734454
This fixes a collection of annoyances that prevented Flat{Set,Map} from being a
pure drop-in replacement for std::unordered_{set,map}. I haven't exhaustively
ensured compatibility; I started with stuff I've actually run into:
* Construction via std::initializer_list
FlatSet<int> set({1, 2, 3});
FlatMap<int, int> map({{1, 10}, {2, 20}, {3, 30}});
* Define an iterator_category for compatibility with std algorithms. E.g. the
code below would yield a compile error with the useful bit truncated, and
you'd need to find and set --cxxopt=-fshow-overloads=all to figure it out.
FlatSet<int> set({1, 2, 3});
vector<int> vec(set.begin(), set.end()); // used to fail, now works
* Defining the iterators with forward_iterator_tag requires a postfix ++.
Admittedly I haven't actually run into this, but it's easy to add.
* Previously FlatSet::iterator allowed mutation of the set keys, which would
corrupt the internal representation. I've taken the standard approach of
defining iterator as an alias of const_iterator, so that we'll now get a
compile error if you mistakenly do this. I haven't actually run into this
either, but it seems like a worthwhile change.
Change: 147731813
slices to include the case when the source and destination have the same shape and are full slices with respect to their shape.
Change: 147726529
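The condition described above can be sketched as a small predicate in Python (hypothetical helper names, not the actual compiler code): a slice is "full" when it starts at the origin and its limits cover the whole operand, so a copy between two such slices over same-shaped operands touches every element and can be treated like a whole-buffer copy.

```python
# Illustrative predicate only; names and data layout are assumptions.

def is_full_slice(starts, limits, operand_shape):
    """A slice is full when it begins at 0 and ends at the operand bounds."""
    return all(s == 0 for s in starts) and list(limits) == list(operand_shape)

def copy_is_full_and_same_shape(src, dst):
    """True when source and destination are full slices of same-shaped operands."""
    return (list(src["operand_shape"]) == list(dst["operand_shape"])
            and is_full_slice(src["starts"], src["limits"], src["operand_shape"])
            and is_full_slice(dst["starts"], dst["limits"], dst["operand_shape"]))

full = {"operand_shape": [4, 8], "starts": [0, 0], "limits": [4, 8]}
part = {"operand_shape": [4, 8], "starts": [0, 0], "limits": [4, 4]}
print(copy_is_full_and_same_shape(full, full))  # True
print(copy_is_full_and_same_shape(full, part))  # False
```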
Previously rematerialization issued a warning unconditionally that
rematerialization failed to reduce memory below the specified limit.
With this change, the warning is emitted only if rematerialization
actually failed.
Change: 147724519
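The behavior change above amounts to making the warning conditional on the outcome; a minimal sketch in Python (hypothetical names, not the real XLA code):

```python
# Warn only when rematerialization actually failed to reduce memory below
# the limit, instead of warning unconditionally. Illustrative only.

def remat_report(memory_after_remat, memory_limit, warnings):
    if memory_after_remat > memory_limit:
        warnings.append(
            f"rematerialization failed: {memory_after_remat} > limit {memory_limit}")

w = []
remat_report(90, 100, w)   # under the limit: no warning emitted
remat_report(120, 100, w)  # over the limit: one warning emitted
print(len(w))  # 1
```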
Change: 147678642
Change: 147676615