Commit message (Collapse) | Author | Age
Change: 140374197
"non_max_suppression_op.cc",
"non_max_suppression_op.h",
"one_hot_op.cc",
"one_hot_op.h"
Change: 140364162
Change: 140362909
Any value other than None fails, because dynamic_rnn_estimator's model_fn
does not accept a params argument.
Change: 140351422
no loss and eval_metrics if INFER. Without this, .export() crashes.
Change: 140337700
OneHotCategorical distribution classes (along with some useful variants) to tf.contrib.distributions. These distributions were concurrently introduced by https://arxiv.org/abs/1611.00712 and https://arxiv.org/abs/1611.01144.
Change: 140336235
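The two papers cited above (the Concrete distribution and Gumbel-Softmax) both build on the classic Gumbel-max trick for drawing exact one-hot categorical samples. As a rough, hedged illustration of that underlying idea only (plain Python, not the `tf.contrib.distributions` API; the function name is made up for this sketch):

```python
import math
import random

def sample_one_hot(probs, rng=random.random):
    """Draw a one-hot sample from a categorical distribution via the
    Gumbel-max trick: pick argmax_i (log p_i + g_i), g_i ~ Gumbel(0, 1)."""
    # Standard Gumbel noise via inverse CDF: g = -log(-log(u)), u ~ U(0, 1).
    gumbels = [-math.log(-math.log(rng() or 1e-12)) for _ in probs]
    scores = [math.log(p) + g for p, g in zip(probs, gumbels)]
    best = max(range(len(probs)), key=lambda i: scores[i])
    return [1 if i == best else 0 for i in range(len(probs))]
```

The relaxed (Gumbel-Softmax / Concrete) variants in the papers replace the hard argmax with a temperature-controlled softmax so the sample becomes differentiable.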
Change: 140334954
Change: 140308941
Change: 140308750
dimensions and adding a test.
Change: 140222390
Change: 140215986
Change: 140215361
recursive_create_dir.
Change: 140127945
global_step/sec.
Change: 140114701
Change: 140098006
Combine train and eval loss fn (since they're always the same).
Change: 140088698
Change: 140088388
a corner case of nested cond and while loop.
Change: 140083287
Change: 140081034
Add name to prediction ops.
Change: 140080563
Change: 140079490
where it was included in a huge number of targets, and into a new target just for
"debug_ops", where it is needed.
Change: 140079200
Change: 140073799
Fixes #5738.
Change: 140072838
Change: 140072478
tf_opts_nortti_if_android().
Change: 140071090
Change: 140070326
Prior to this change, TensorArrays were always created on the device set
by the device scope (if any), which is not necessarily the device on which
the Tensors written to the given TensorArray reside. Since TensorArrays have
strong colocation requirements, this often meant expensive round-trips to
write and read Tensors. With this change, TensorArray ops are created with
no device set, and the first call to write/unpack/split to a TensorArray with a
Tensor bound to a particular device will set the TensorArray's device to
match.
Change: 140067532
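The placement rule described in this entry (no device until first write, then pin to the writer's device) can be sketched as a toy container. This is purely illustrative plain Python, not TensorFlow's actual implementation, and the class and argument names are invented for the sketch:

```python
class DeferredDeviceBuffer:
    """Toy model of the rule above: the buffer starts with no device,
    the first write pins it to the device of the value being written,
    and later writes must match (reflecting the colocation requirement)."""

    def __init__(self):
        self.device = None   # no device until the first write
        self.items = []

    def write(self, value, value_device):
        if self.device is None:
            self.device = value_device   # first write sets the device
        elif self.device != value_device:
            raise ValueError("colocation violated: buffer on %s, value on %s"
                             % (self.device, value_device))
        self.items.append(value)
```

Under this rule the buffer always ends up colocated with the Tensors flowing into it, avoiding the cross-device round-trips the entry describes.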
Change: 140067123
up build
Change: 140066519
Change: 140065785
Change: 140064894
Change: 140063357
Change: 140062662
In the run-up to TF 1.0, we are making RNNCells' variable names compatible
with those of tf layers.
This is a breaking change for those who wish to reload their old RNN model
checkpoints in newly created graphs. After this change is in, variables
created with RNNCells will have slightly different names than before;
loading old checkpoints to run with newly created graphs requires
renaming at load time.
Loading and executing old graphs with old checkpoints will continue to work
without any problems. Creating and loading new checkpoints with graphs
after this change is in will work without any problems. The only people
affected by this change are those who want to load old RNN model checkpoints
into graphs created after this change is in.
Renaming on checkpoint load can be performed with
tf.contrib.framework.variables.assign_from_checkpoint. Example usage
is available here[1] if you use Saver and/or Supervisor, and [2] if you
are using the newer tf.learn classes.
Examples of renamed parameters:
LSTMCell without sharding:
my_scope/LSTMCell/W_0 -> my_scope/lstm_cell/weights
my_scope/LSTMCell/W_F_diag -> my_scope/lstm_cell/w_f_diag
my_scope/LSTMCell/B -> my_scope/lstm_cell/biases
LSTMCell with sharding:
my_scope/LSTMCell/W_0 -> my_scope/lstm_cell/weights/part_0
my_scope/LSTMCell/W_1 -> my_scope/lstm_cell/weights/part_1
my_scope/LSTMCell/W_2 -> my_scope/lstm_cell/weights/part_2
my_scope/LSTMCell/W_F_diag -> my_scope/lstm_cell/w_f_diag
my_scope/LSTMCell/B -> my_scope/lstm_cell/biases
BasicLSTMCell:
my_scope/BasicLSTMCell/Linear/Matrix -> my_scope/basic_lstm_cell/weights
my_scope/BasicLSTMCell/Linear/Bias -> my_scope/basic_lstm_cell/biases
MultiRNNCell:
my_scope/MultiRNNCell/Cell0/LSTMCell/W_0 -> my_scope/multi_rnn_cell/cell_0/lstm_cell/weights
my_scope/MultiRNNCell/Cell0/LSTMCell/W_F_diag -> my_scope/multi_rnn_cell/cell_0/lstm_cell/w_f_diag
my_scope/MultiRNNCell/Cell0/LSTMCell/B -> my_scope/multi_rnn_cell/cell_0/lstm_cell/biases
1.
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/slim/README.md
2. https://github.com/tensorflow/tensorflow/blob/86f5ab7474825da756838b34e1b4eac93f5fc68a/tensorflow/contrib/framework/python/ops/variables_test.py#L810
Change: 140060366
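The rename examples listed above follow a small set of mechanical substitutions. As a rough sketch (derived only from the examples in this entry, so not an exhaustive or official mapping; real checkpoints should be migrated with `tf.contrib.framework.variables.assign_from_checkpoint` as described):

```python
import re

# Substitution rules inferred from the example renames above.
# Order matters: specific patterns (W_F_diag, BasicLSTMCell) come before
# the generic LSTMCell weight/bias rules.
_RULES = [
    (re.compile(r"MultiRNNCell/Cell(\d+)"), r"multi_rnn_cell/cell_\1"),
    (re.compile(r"BasicLSTMCell/Linear/Matrix"), "basic_lstm_cell/weights"),
    (re.compile(r"BasicLSTMCell/Linear/Bias"), "basic_lstm_cell/biases"),
    (re.compile(r"LSTMCell/W_F_diag"), "lstm_cell/w_f_diag"),
    (re.compile(r"LSTMCell/W_(\d+)"), r"lstm_cell/weights/part_\1"),
    (re.compile(r"LSTMCell/B$"), "lstm_cell/biases"),
]

def rename_rnn_variable(old_name, sharded=False):
    """Map an old RNNCell variable name to the post-change name."""
    new = old_name
    for pattern, repl in _RULES:
        new = pattern.sub(repl, new)
    # Without sharding, LSTMCell/W_0 maps to plain "weights", not a partition.
    if not sharded:
        new = new.replace("weights/part_0", "weights")
    return new
```

For example, `rename_rnn_variable("my_scope/LSTMCell/W_0")` yields `my_scope/lstm_cell/weights`, matching the unsharded example above.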
the face of changes to malloc().
This change both adjusts the numbers and adds 5kBytes of slop on either side of
the numbers in the hope of avoiding trivial breakage.
Change: 140060311
This code implements a component to read table partitions from BigQuery using
BigQuery's HTTP API, specifically the tabledata.list method. The implementation
uses Application Default Credentials (similar to the GCS filesystem). The current
implementation does not support nested types.
Change: 140058933
Change: 140058389
shape of the elements. If `elem_shape` is not None, we will check for shape equality for all writes and return elem_shape for all reads.
Change: 140056114
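The `elem_shape` behavior this entry describes (check every write against the declared shape, report that shape on every read) can be sketched in plain Python. This is a hypothetical helper for illustration only, not the TensorArray API:

```python
class ShapeCheckedArray:
    """Sketch of the rule above: if elem_shape is given, every write is
    validated against it and reads can report it without inspecting data."""

    def __init__(self, elem_shape=None):
        self.elem_shape = elem_shape   # None means shapes are unconstrained
        self.values = {}

    def write(self, index, value, shape):
        if self.elem_shape is not None and tuple(shape) != tuple(self.elem_shape):
            raise ValueError("write shape %r does not match elem_shape %r"
                             % (shape, self.elem_shape))
        self.values[index] = value

    def element_shape(self):
        # With elem_shape set, reads have a static shape even before any write.
        return self.elem_shape
```

The benefit of declaring the shape up front is exactly what the entry states: shape mismatches fail at write time, and reads get a known element shape.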
Change: 140055398
Change: 140054781
Change: 140053628
Allow resetting colocation groups via colocate_with(None, ignore...=True)
Change: 140052949
Change: 140048087
Change: 140046076
on numbers of logical CPUs on the host system
Change: 140045246
Change: 140044561
Added test.
Fixes #5807.
Change: 140042698
Change: 140034297
Change: 140030944