API. These args will be used for distributed variables.
Migrate all usages of `tower_local_var_scope` to use the new args.
PiperOrigin-RevId: 203855963
PiperOrigin-RevId: 201652888
PiperOrigin-RevId: 201204573
PiperOrigin-RevId: 201152785
This makes it more convenient to use layers of different dtypes in a model. Instead of having to manually cast intermediate tensors between layers of different dtypes, they will be cast automatically.
This is also useful for the upcoming mixed precision API.
PiperOrigin-RevId: 200783477
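A minimal sketch of the casting behavior described above, assuming the Keras layer `dtype` argument; the layer names are illustrative:
import tensorflow as tf

# Illustrative: a float64 layer fed float32 activations. With automatic
# casting, the float32 output of `dense32` is cast to float64 when it
# enters `dense64`; no manual tf.cast is needed between the layers.
x = tf.keras.Input(shape=(4,), dtype='float32')
dense32 = tf.keras.layers.Dense(8, dtype='float32')
dense64 = tf.keras.layers.Dense(8, dtype='float64')
y = dense64(dense32(x))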
Revert #18413. Too many internal test failures due to the name scope change caused by this change.
Revert #18192. Cannot use re2::StringPiece internally; need an alternative for the set call. Will pull and clean this up in a separate change.
PiperOrigin-RevId: 197991247
dtype.
PiperOrigin-RevId: 192634133
- tf.layers layers now subclass tf.keras.layers layers.
- tf.keras.layers is now agnostic to variable scopes and global collections (future-proof). It also uses ResourceVariable everywhere by default.
- As a result, tf.keras.layers is in general lower-complexity, with fewer hacks and workarounds. However, some of the current code is temporary (variable creation should arguably be moved to Checkpointable, and there are some dependency issues that will require later refactors).
- The legacy tf.layers layers behavior is kept, with references to variable scopes and global collections injected in the subclassed tf.layers.base.Layer class (the content of tf.layers.base.Layer is the complexity differential between the old implementation and the new one).
Note: this refactor does slightly change the behavior of tf.layers.base.Layer, by disabling extreme edge-case behavior that either has long been invalid, or is dangerous and should most definitely be disabled. This will not affect any users since such behaviors only existed in the base Layer unit tests. The behaviors disabled are:
- Option to create reusable variables in `call` (already invalid for some time).
- Option to use a variable scope to create layer variables outside of the layer while not having the layer track such variables locally.
PiperOrigin-RevId: 192339798
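A quick illustration of the new class relationship, assuming the TF 1.x API surface:
import tensorflow as tf  # TF 1.x API assumed

# After this refactor, the tf.layers classes are thin subclasses of the
# corresponding tf.keras.layers classes.
assert issubclass(tf.layers.Dense, tf.keras.layers.Dense)
assert issubclass(tf.layers.Layer, tf.keras.layers.Layer)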
This is in preparation for introducing one public, stable symbol: tf.executing_eagerly()
(i.e., part of moving APIs related to eager execution from "contrib" to a namespace
where we provide API stability guarantees).
PiperOrigin-RevId: 188212646
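For reference, a minimal sketch of how the stable symbol is used:
import tensorflow as tf

# Reports whether eager execution is enabled in the current context.
if tf.executing_eagerly():
  print("eager execution is enabled")
else:
  print("graph building; eager execution is disabled")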
PiperOrigin-RevId: 188059096
Instead of keeping track of dependencies manually, we rely on the TF graph structure to find dependencies. The resulting implementation is cleaner and more robust.
This does not change any existing behavior. It extends the current behavior by allowing `get_updates_for(inputs)` and `get_losses_for(inputs)` to be called with *any* tensors upstream of the layer, not just the layer's immediate inputs.
PiperOrigin-RevId: 185168680
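A rough sketch of the extended behavior, assuming TF 1.x graph mode and BatchNormalization as a source of update ops; the tensor names are illustrative:
import tensorflow as tf  # TF 1.x graph mode assumed

x = tf.keras.Input(shape=(10,))
h = tf.keras.layers.Dense(5)(x)
bn = tf.keras.layers.BatchNormalization()
y = bn(h, training=True)

# Previously only the layer's direct input `h` could be used as the query;
# with graph-based dependency tracking, any upstream tensor works too.
updates_from_h = bn.get_updates_for([h])
updates_from_x = bn.get_updates_for([x])  # `x` is upstream of the layer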
This change:
(1) wraps Layer's `build` method in an `init_scope`, which in turn makes it
possible to compile the `call` method into a graph function by wrapping
it in `tfe.defun` because the `init_scope` lifts all ops created in
`build` out of function-building graphs;
(2) defers the creation of regularizers, constructing them after `build`
exits and thereby ensuring that they are not created inside an
`init_scope`.
PiperOrigin-RevId: 180954866
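A rough sketch of what this enables, assuming the TF 1.x tf.contrib.eager API; the layer choice is illustrative:
import tensorflow as tf
import tensorflow.contrib.eager as tfe  # TF 1.x contrib API assumed

tfe.enable_eager_execution()

layer = tf.keras.layers.Dense(4)
layer.build((None, 3))  # variable creation now happens under an init_scope

# Because ops created in `build` are lifted out of function-building graphs,
# `call` can be compiled into a graph function:
compiled_call = tfe.defun(layer.call)
out = compiled_call(tf.ones([2, 3]))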
PiperOrigin-RevId: 179953488
Supports only variable regularization losses when executing eagerly. They are
stored as zero-argument lambdas and executed when the property is requested.
PiperOrigin-RevId: 177227550
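A minimal sketch of the behavior, assuming eager execution and a standard kernel regularizer:
import tensorflow as tf

# Eager mode assumed. The L2 penalty below is stored as a zero-argument
# callable and is only evaluated when the `losses` property is read.
layer = tf.keras.layers.Dense(
    4, kernel_regularizer=tf.keras.regularizers.l2(0.01))
layer.build((None, 3))
print(layer.losses)  # the stored lambda runs here and returns the penalty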
Splits GraphNetwork out into a new file, moves some shared utility functions to
layers.utils. Should have no functional changes.
PiperOrigin-RevId: 175909000
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/14412 from yifeif:yifeif-patch-3 4b91380c6fc1f995d48a5f184e7307f776541bd0
PiperOrigin-RevId: 175192097
Additionally, fix a bug with handling of activity_regularizer in tf.layers base Layer (and add test).
PiperOrigin-RevId: 175070161
instantiate a Network in eager mode using the regular Keras API, and call it on eager tensors.
PiperOrigin-RevId: 172942569
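A minimal sketch of what this enables, assuming the TF 1.x tf.contrib.eager API:
import tensorflow as tf
import tensorflow.contrib.eager as tfe  # TF 1.x contrib API assumed

tfe.enable_eager_execution()

inputs = tf.keras.Input(shape=(3,))
outputs = tf.keras.layers.Dense(2)(inputs)
model = tf.keras.Model(inputs, outputs)  # a Network built with the Keras API

y = model(tf.ones([5, 3]))  # called directly on an eager tensor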
arbitrary nesting in layer inputs & outputs.
PiperOrigin-RevId: 172040243
* Make `activity_regularizer` a real read-only property settable by
the constructor.
* Make `name` a read-only property instead of mutable.
* Make `inbound_nodes`, `outbound_nodes`, `batch_input_shape` private.
Also: Update the documentation of Layer to indicate that it is stable,
and include guidance for how to use it.
PiperOrigin-RevId: 170777368
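A small sketch of the resulting API:
import tensorflow as tf

layer = tf.keras.layers.Dense(
    4, activity_regularizer=tf.keras.regularizers.l1(0.01))

print(layer.activity_regularizer)  # read-only property, set via the constructor
print(layer.name)                  # also a read-only property now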
collections to keep track of ResourceVariables. Instead, they are tracked by the user as normal Python objects. In a subsequent CL, we'll make the lifetime of a variable's underlying resource match the lifetime of the corresponding Python object. For this to happen, there must be no everlasting global Python references to said variables.
More specifically, this change forces the `collections` flag in ResourceVariable's constructor to be None when Eager is enabled. It also raises an error on calls to get_collection() for variable collections.
PiperOrigin-RevId: 168754146
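A rough sketch of the new contract, assuming TF 1.x with eager execution enabled; the error noted for get_collection() is the behavior described above:
import tensorflow as tf
import tensorflow.contrib.eager as tfe  # TF 1.x contrib API assumed

tfe.enable_eager_execution()

# The variable is tracked only as a normal Python object; it is not added
# to any graph collection, so its resource can be freed with the object.
v = tfe.Variable(1.0, name='v')

# Per the change described above, querying variable collections under eager
# execution raises an error instead of silently returning nothing:
# tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)  # -> raises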
PiperOrigin-RevId: 167019943
This is mostly a small refactor. One addition to the public API: add a `count_params` method to the base Layer class (which allows us to use the `model.summary()` method with models built from core layers).
PiperOrigin-RevId: 165255776
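A minimal example of the new method:
import tensorflow as tf

layer = tf.keras.layers.Dense(4)
layer.build((None, 3))
print(layer.count_params())  # 3 * 4 kernel weights + 4 biases = 16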
implement the full layer API. You can think of a network as a "bigger layer".
- Rename tf.contrib.keras Container as Network
- Add a Network class in tf.layers which implements the part of Container that we want to add to core layers.
- Make Keras Network subclass core Network.
PiperOrigin-RevId: 164202674
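A short sketch of the "bigger layer" idea using the Keras functional API:
import tensorflow as tf

# A Network is itself a composition of layers...
inputs = tf.keras.Input(shape=(3,))
outputs = tf.keras.layers.Dense(2)(inputs)
net = tf.keras.Model(inputs, outputs)

# ...and it implements the layer API, so it can be applied like a single layer.
x = tf.keras.Input(shape=(3,))
y = net(x)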
When a layer undergoes a deep copy, any internal object that contains tensors
is copied over shallowly. Fixes a bug where trained layers whose attributes
contain Tensors break in legacy_seq2seq code.
PiperOrigin-RevId: 161158127
- dropout
- conv layers
- pooling layers
- batchnorm
Also add to core layers an automated layer input spec check system, which makes it easy to specify constraints on the inputs a layer accepts and raises helpful error messages in case of incompatibility.
PiperOrigin-RevId: 156256513
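A small sketch of the input spec mechanism, assuming the TF 1.x tf.layers API; the layer is hypothetical:
import tensorflow as tf  # TF 1.x API assumed

class DoubleLayer(tf.layers.Layer):  # hypothetical example layer
  def __init__(self, **kwargs):
    super(DoubleLayer, self).__init__(**kwargs)
    # Declare constraints: rank-2 inputs whose last dimension is 3.
    # Incompatible inputs now fail with a descriptive error message.
    self.input_spec = tf.layers.InputSpec(ndim=2, axes={-1: 3})

  def call(self, inputs):
    return 2.0 * inputs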
This allows the layer's call() method to call add_variable, making it much
easier to create variables while building the layer's logic.
Change: 154916035
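A minimal sketch of the pattern this enables, assuming the TF 1.x tf.layers base class; the layer is hypothetical:
import tensorflow as tf  # TF 1.x API assumed

class ScaleLayer(tf.layers.Layer):  # hypothetical example layer
  def call(self, inputs):
    # Variables may now be created directly inside call() via add_variable.
    scale = self.add_variable(
        'scale', shape=[], initializer=tf.ones_initializer())
    return inputs * scale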
API change: for users of custom Keras layers built using `tf.contrib.keras`, the `add_weight` method of the Keras base layer now has a new API (synced with the main Keras GitHub repo).
Change: 154366685
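A sketch of the synced add_weight signature in a custom layer (shown with the tf.keras namespace; the layer is hypothetical):
import tensorflow as tf

class MyDense(tf.keras.layers.Layer):  # hypothetical example layer
  def __init__(self, units, **kwargs):
    super(MyDense, self).__init__(**kwargs)
    self.units = units

  def build(self, input_shape):
    # The synced API: name, shape, initializer, trainable keyword arguments.
    self.kernel = self.add_weight(name='kernel',
                                  shape=(int(input_shape[-1]), self.units),
                                  initializer='glorot_uniform',
                                  trainable=True)
    super(MyDense, self).build(input_shape)

  def call(self, inputs):
    return tf.matmul(inputs, self.kernel)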
This wreaks havoc with tracking variables and other internal state and just isn't supported.
Change: 151633894
As it turns out, the intent of constructing tf.layers._Layer with reuse=True
is to reuse the subscope containing the layer's name. This was broken before the recent change fixing #8504, but that change didn't actually fix the problem (it introduced a different bug!). Here we finish fixing #8504.
The following will now work correctly:
a = Dense(3, name="a")
a.apply(...) # scope of a is "a"
a_reuse = Dense(3, name="a", _reuse=True)
a_reuse.apply(...) # scope of a_reuse is "a"
The errant behavior was that the scope of a_reuse became just "".
The alternative behavior can still be recreated via:
a_reuse = Dense(..., _reuse=True)
a_reuse.apply(..., scope=tf.get_variable_scope()) # scope of a_reuse is "".
Change: 150947095
Also fixes #8504.
This doesn't affect any existing users (no one is using the OO API yet).
However, it is necessary for compatibility with RNNCell, and the behavior has
been somewhat ambiguous up until now. Defines a behavior and adds unit tests
to make it clear.
The original behavior of _Layer is to immediately create the variable scope
it will use, in its initializer. However, this doesn't work with _reuse=True,
and may not be the right behavior in general.
With RNNCell's current design, the internal scope is set lazily based on the
first __call__ into the layer. This allows one to write:
cell_fw = LSTMCell(...)
cell_bw = LSTMCell(...)
... = tf.nn.bidirectional_dynamic_rnn(cell_fw, cell_bw, ...)
then cell_fw gets the variable scope '.../bidirectional_rnn/fw/lstm_cell' and
cell_bw gets the variable scope '.../bidirectional_rnn/bw/lstm_cell'; thus their
variable names are tied to their function.
Furthermore, this change allows Layers constructed with reuse=True to reuse
the scopes within which they are **called** the first time. Before this,
a new scope was always created, and having reuse=True would just lead to
errors (see, e.g., github issue #8504).
with tf.variable_scope('new_scope'):
  ker = tf.get_variable('kernel', [5, 6])
  tf.layers.dense(inputs=tf.constant(np.random.randn(3, 4, 5)), units=3, reuse=True)
used to raise:
ValueError: Variable new_scope/dense/kernel does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?
The new (correct) behavior is to reuse the 'kernel' variable in scope 'new_scope'.
I have additionally added an optional scope= argument to the __call__ method
to provide a way to supply a scope upon application. This scope is used
only if the layer has not yet been built and no scope has yet been provided.
Added unit tests to show the behavior.
Change: 150701086
duplication.
Change: 142780852
Change: 141786851
Change: 141412543
Change: 140934746
Change: 140643194
in base layer `__call__`.
Change: 140547974
Change: 140064894