path: root/tensorflow/python/layers/base_test.py
Commit history (newest first):
* Add `synchronization` and `aggregation` args to the layer `add_weight()` API. (Pavithra Vijay, 2018-07-09)
  These args will be used for distributed variables. Migrate all usages of `tower_local_var_scope` to using the new args.
  PiperOrigin-RevId: 203855963
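
  A minimal sketch of how the new arguments might be used from a custom layer's build(); the layer name and the enum choices (ON_READ / MEAN) are illustrative assumptions, not defaults taken from the commit:

      import tensorflow as tf

      class MeanTracker(tf.layers.Layer):  # hypothetical layer
        def build(self, input_shape):
          # The new args let a distributed variable declare how replicas
          # synchronize writes and aggregate reads for this weight.
          self.running_mean = self.add_weight(
              'running_mean',
              shape=(int(input_shape[-1]),),
              initializer=tf.zeros_initializer(),
              trainable=False,
              synchronization=tf.VariableSynchronization.ON_READ,
              aggregation=tf.VariableAggregation.MEAN)
          super(MeanTracker, self).build(input_shape)

        def call(self, inputs):
          return inputs - self.running_mean
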
* Replace unnecessary `()` in `run_in_graph_and_eager_modes()`. (Tom Hennigan, 2018-06-22)
  PiperOrigin-RevId: 201652888
* Automated g4 rollback of changelist 200783477. (Reed Wanderman-Milne, 2018-06-19)
  PiperOrigin-RevId: 201204573
* Update documentation for the layer-input-casting feature. (James Qin, 2018-06-19)
  PiperOrigin-RevId: 201152785
* Automatically cast layer inputs to the layer's dtype. (Reed Wanderman-Milne, 2018-06-15)
  This makes it more convenient to use layers of different dtypes in a model. Instead of having to manually cast intermediate tensors between layers of different dtypes, they will be cast automatically. This is also useful for the upcoming mixed precision API.
  PiperOrigin-RevId: 200783477
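
  A sketch of the convenience this describes (note the rollback entry above, so released behavior may differ); shapes and dtypes are arbitrary:

      import tensorflow as tf

      x = tf.ones([8, 4], dtype=tf.float32)
      layer = tf.layers.Dense(2, dtype=tf.float64)

      # Before: an explicit cast was needed between dtypes.
      y_manual = layer(tf.cast(x, tf.float64))

      # After: the layer casts float32 inputs to its float64 dtype itself,
      # so y_auto.dtype == tf.float64.
      y_auto = layer(x)
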
* Merge changes from github. (Yifei Feng, 2018-05-24)
  Revert #18413: too many internal test failures due to the name scope change caused by this change.
  Revert #18192: cannot use re2::StringPiece internally; need an alternative for the set call. Will pull and clean this up in a separate change.
  PiperOrigin-RevId: 197991247
* Make the default weights initializer in `base_layers.Layer` suitable for the weights' dtype. (A. Unique TensorFlower, 2018-04-12)
  PiperOrigin-RevId: 192634133
* Refactor layers: (Francois Chollet, 2018-04-10)
  - tf.layers layers now subclass tf.keras.layers layers.
  - tf.keras.layers is now agnostic to variable scopes and global collections (future-proof). It also uses ResourceVariable everywhere by default.
  - As a result, tf.keras.layers is in general lower-complexity, with fewer hacks and workarounds. However, some of the current code is temporary (variable creation should arguably be moved to Checkpointable, and there are some dependency issues that will require later refactors).
  - The legacy tf.layers behavior is kept, with references to variable scopes and global collections injected in the subclassed tf.layers.base.Layer class (the content of tf.layers.base.Layer is the complexity differential between the old implementation and the new one).
  Note: this refactor does slightly change the behavior of tf.layers.base.Layer, by disabling extreme edge-case behavior that either has long been invalid, or is dangerous and should most definitely be disabled. This will not affect any users, since such behaviors only existed in the base Layer unit tests. The behaviors disabled are:
  - The option to create reusable variables in `call` (already invalid for some time).
  - The option to use a variable scope to create layer variables outside of the layer while not having the layer track such variables locally.
  PiperOrigin-RevId: 192339798
* eager: Rename in_eager_mode to executing_eagerly and get rid of in_graph_mode. (Asim Shankar, 2018-03-07)
  This is in preparation for introducing one public, stable symbol: tf.executing_eagerly() (i.e., part of moving APIs related to eager execution from "contrib" to a namespace where we provide API stability guarantees).
  PiperOrigin-RevId: 188212646
* Layers bind to a graph when first called, not at __init__. (A. Unique TensorFlower, 2018-03-06)
  PiperOrigin-RevId: 188059096
* Simplify and extend the management of input-conditional losses and updates. (Francois Chollet, 2018-02-09)
  Instead of keeping track of dependencies manually, we rely on the TF graph structure to find dependencies. The resulting implementation is cleaner and more robust. This does not change any existing behavior. It extends the current behavior by allowing `get_updates_for(inputs)` and `get_losses_for(inputs)` to be called with *any* tensors upstream of the layer, not just the immediate layer's inputs.
  PiperOrigin-RevId: 185168680
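
  A sketch of the extended lookup, assuming a batch-norm layer whose update ops are conditional on its inputs; `upstream` stands in for any tensor feeding the layer:

      import tensorflow as tf

      upstream = tf.placeholder(tf.float32, [None, 4])
      hidden = tf.layers.dense(upstream, 8)
      bn = tf.layers.BatchNormalization()
      out = bn(hidden, training=True)

      # Previously only the layer's immediate inputs (here, `hidden`) worked;
      # now any upstream tensor resolves through the graph structure.
      updates = bn.get_updates_for([upstream])
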
* Make it possible to wrap Layer's `call` method in `tfe.defun`. (Akshay Agrawal, 2018-01-05)
  This change:
  (1) wraps Layer's `build` method in an `init_scope`, which in turn makes it possible to compile the `call` method into a graph function by wrapping it in `tfe.defun`, because the `init_scope` lifts all ops created in `build` out of function-building graphs;
  (2) defers the creation of regularizers, constructing them after `build` exits and thereby ensuring that they are not created inside an `init_scope`.
  PiperOrigin-RevId: 180954866
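
  A sketch of the pattern this enables, using the contrib-era eager API; the layer choice and shapes are arbitrary:

      import tensorflow as tf
      import tensorflow.contrib.eager as tfe

      tfe.enable_eager_execution()

      layer = tf.layers.Dense(10)
      layer.build([2, 5])  # build now runs under an init_scope, lifting
                           # variable creation out of function graphs
      fast_call = tfe.defun(layer.call)  # compile call into a graph function
      y = fast_call(tf.zeros([2, 5]))
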
* Merge changes from github. (A. Unique TensorFlower, 2017-12-22)
  PiperOrigin-RevId: 179953488
* Support tfe.Network.losses. (Allen Lavoie, 2017-11-28)
  Supports only variable regularization losses when executing eagerly. They are stored as zero-argument lambdas and executed when the property is requested.
  PiperOrigin-RevId: 177227550
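
  A minimal sketch of the deferred-evaluation scheme described above; all names here are illustrative, not the actual implementation:

      class _LossesSketch(object):
        def __init__(self):
          self._eager_losses = []  # zero-argument callables

        def track_regularizer(self, regularizer, variable):
          # Defer the penalty: capture the variable, compute on demand.
          self._eager_losses.append(lambda: regularizer(variable))

        @property
        def losses(self):
          # Executed only when the property is requested.
          return [fn() for fn in self._eager_losses]
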
* Rename layers.base.Network -> layers.network.GraphNetwork. (Allen Lavoie, 2017-11-15)
  Splits GraphNetwork out into a new file, moves some shared utility functions to layers.utils. Should have no functional changes.
  PiperOrigin-RevId: 175909000
* Fix typo in tensorflow/python/layers/base_test.py. (Yifei Feng, 2017-11-10)
  COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/14412 from yifeif:yifeif-patch-3 4b91380c6fc1f995d48a5f184e7307f776541bd0
  PiperOrigin-RevId: 175192097
* Update tf.keras RNNs to the Keras 2.0.9 API. Does not include cuDNN layers. (Francois Chollet, 2017-11-10)
  Additionally, fix a bug with the handling of activity_regularizer in the tf.layers base Layer (and add a test).
  PiperOrigin-RevId: 175070161
* Make Network compatible with eager mode. (Francois Chollet, 2017-10-20)
  Currently it only allows instantiating a Network in eager mode using the regular Keras API, and calling it on eager tensors.
  PiperOrigin-RevId: 172942569
* Switch to nest.flatten() in tf.layers.Layer to allow dicts and arbitrary nesting in layer inputs & outputs. (A. Unique TensorFlower, 2017-10-12)
  PiperOrigin-RevId: 172040243
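
  A sketch of what arbitrary nesting allows; the layer below is hypothetical:

      import tensorflow as tf

      class PairSum(tf.layers.Layer):  # hypothetical layer
        def call(self, inputs):
          # `inputs` may now be a dict (or any nested structure); the base
          # class flattens it with nest.flatten() for its bookkeeping.
          return {'sum': inputs['a'] + inputs['b']}

      out = PairSum()({'a': tf.ones([2]), 'b': tf.ones([2])})
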
* Clean up properties of layers.Layer: (A. Unique TensorFlower, 2017-10-02)
  - Make `activity_regularizer` a real read-only property settable by the constructor.
  - Make `name` a read-only property instead of mutable.
  - Make `inbound_nodes`, `outbound_nodes`, `batch_input_shape` private.
  Also: update the documentation of Layer to indicate that it is stable, and include guidance for how to use it.
  PiperOrigin-RevId: 170777368
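
  A sketch of the resulting surface; assigning to `name` would now fail:

      import tensorflow as tf

      layer = tf.layers.Dense(3, activity_regularizer=tf.nn.l2_loss)
      print(layer.activity_regularizer)  # set via constructor, read-only
      print(layer.name)                  # read-only property
      # layer.name = 'other'             # raises AttributeError
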
* When Eager Execution is enabled, TensorFlow no longer relies on global collections to keep track of ResourceVariables. (Ali Yahya, 2017-09-14)
  Instead, they are tracked by the user as normal Python objects. In a subsequent CL, we'll make the lifetime of a variable's underlying resource match the lifetime of the corresponding Python object. For this to happen, there must be no everlasting global Python references to said variables. More specifically, this change forces the `collections` flag in ResourceVariable's constructor to be None when Eager is enabled. It also raises an error on calls to get_collection() for variable collections.
  PiperOrigin-RevId: 168754146
* Make core layers EAGER-mode friendly. (A. Unique TensorFlower, 2017-08-30)
  PiperOrigin-RevId: 167019943
* Make it possible to use layers from `tf.layers` directly in a Keras model. (Francois Chollet, 2017-08-14)
  It is largely a tiny refactor. One addition to the public API: add method `count_params` to the base layer class (which allows us to use the `model.summary()` method with models built with core layers).
  PiperOrigin-RevId: 165255776
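
  A sketch of the new `count_params` method on a built core layer; the shapes are arbitrary:

      import tensorflow as tf

      layer = tf.layers.Dense(4)
      layer.apply(tf.zeros([1, 3]))  # builds kernel [3, 4] and bias [4]
      print(layer.count_params())    # 3 * 4 + 4 = 16
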
* Add Network class. (Francois Chollet, 2017-08-03)
  Networks are directed acyclic graphs of layers that implement the full layer API. You can think of a network as a "bigger layer".
  - Rename tf.contrib.keras Container as Network.
  - Add a Network class in tf.layers which implements the part of Container that we want to add to core layers.
  - Make Keras Network subclass core Network.
  PiperOrigin-RevId: 164202674
* Bugfix for https://github.com/tensorflow/models/issues/1050. (Eugene Brevdo, 2017-07-06)
  When a layer undergoes a deep copy, any internal object that contains tensors gets moved over shallowly. Fixes a bug where trained layers whose attributes contain Tensors break in legacy_seq2seq code.
  PiperOrigin-RevId: 161158127
* Refactor Keras layers to rely on core TF layers, specifically: (Francois Chollet, 2017-05-16)
  - dropout
  - conv layers
  - pooling layers
  - batchnorm
  Also add to core layers an automated layer input spec check system, allowing one to easily specify constraints on the inputs acceptable by layers, and raising helpful error messages in case of incompatibility.
  PiperOrigin-RevId: 156256513
* [tf layers] Delay marking a layer as built until the end of its first apply(). (Eugene Brevdo, 2017-05-02)
  This allows the layer's call() method to call add_variable, making it much easier to create variables while building the layer's logic.
  Change: 154916035
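
  A sketch of the pattern this enables; the layer below is hypothetical:

      import tensorflow as tf

      class ScaledIdentity(tf.layers.Layer):  # hypothetical layer
        def call(self, inputs):
          # Legal now: the layer isn't marked built until apply() returns,
          # so add_variable may be called from inside call().
          scale = self.add_variable(
              'scale', shape=[], initializer=tf.ones_initializer())
          return inputs * scale

      y = ScaledIdentity().apply(tf.ones([2, 2]))
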
* Refactor Keras layers to rely on core TF layers. (Francois Chollet, 2017-04-26)
  API change: for users of custom Keras layers built using `tf.contrib.keras`, the method `add_weight` of the Keras base layer now has a new API (synced with the main Keras GitHub repo).
  Change: 154366685
* Ensure tf.layers._Layer objects are not used across multiple graphs. (Eugene Brevdo, 2017-03-29)
  This wreaks havoc with tracking variables and other internal state, and just isn't supported.
  Change: 151633894
* Undo a breaking change to the public API, while retaining the bugfix of #8504. (Eugene Brevdo, 2017-03-22)
  As it turns out, the intent of constructing tf.layers._Layer with reuse=True is to reuse the subscope containing the layer's name. This was broken before the recent change fixing #8504, but that change didn't actually fix the problem (it introduced a different bug!). Here we finish fixing #8504. The following will now work correctly:

      a = Dense(3, name="a")
      a.apply(...)  # scope of a is "a"

      a_reuse = Dense(3, name="a", _reuse=True)
      a_reuse.apply(...)  # scope of a_reuse is "a"

  The errant behavior was that a_reuse's scope became just "". The alternative behavior can still be recreated via:

      a_reuse = Dense(..., _reuse=True)
      a_reuse.apply(..., scope=tf.get_variable_scope())  # scope of a_reuse is ""

  Change: 150947095
* Change behavior of tf.layers._Layer: scope (if not provided) is set lazily. (Eugene Brevdo, 2017-03-20)
  Also fixes #8504. This doesn't affect any existing users (no one is using the OO API yet), but it is necessary for compatibility with RNNCell, and the behavior has been somewhat ambiguous up until now. This change defines a behavior and adds unit tests to make it clear.

  The original behavior of _Layer was to immediately create, in its initializer, the variable scope it will use. However, this doesn't work with _reuse=True, and may not be the right behavior in general. With RNNCells' current design the internal scope is set lazily, based on the first __call__ into the layer. This allows one to write:

      cell_fw = LSTMCell(...)
      cell_bw = LSTMCell(...)
      ... = tf.nn.bidirectional_dynamic_rnn(cell_fw, cell_bw, ...)

  Then cell_fw gets the variable scope '.../bidirectional_rnn/fw/lstm_cell' and cell_bw gets the variable scope '.../bidirectional_rnn/bw/lstm_cell'; thus their variable names are tied to their function.

  Furthermore, this change allows Layers constructed with reuse=True to reuse the scopes within which they are **called** the first time. Before this, a new scope was always created, and having reuse=True would just lead to errors (see, e.g., github issue #8504). For example:

      with tf.variable_scope('new_scope'):
        ker = tf.get_variable('kernel', [5, 6])
        tf.layers.dense(inputs=tf.constant(np.random.randn(3, 4, 5)),
                        units=3, reuse=True)

  used to raise:

      ValueError: Variable new_scope/dense/kernel does not exist, or was not
      created with tf.get_variable(). Did you mean to set reuse=None in
      VarScope?

  The new (correct) behavior is to reuse the 'kernel' variable in scope 'new_scope'. I have additionally added an optional scope= argument to the __call__ method, providing a way to supply a scope upon application. This scope is used only if the layer has not yet been built and no scope has yet been provided. Added unit tests to show the behavior.
  Change: 150701086
* Allow layers to define variables in call, test that it doesn't lead to duplication. (Lukasz Kaiser, 2016-12-22)
  Change: 142780852
* Changes to zeros_initializer which seem to have been missed in the migration. (A. Unique TensorFlower, 2016-12-12)
  Change: 141786851
* Refactor Python imports: layers. (Justine Tunney, 2016-12-08)
  Change: 141412543
* Update base Layer to be consistent with the rest of TF. (Sergio Guadarrama, 2016-12-03)
  Change: 140934746
* Correct the names of base-class variables when no default name is given. (Lukasz Kaiser, 2016-11-30)
  Change: 140643194
* Add Dropout layer, its functional interface, and `training` argument support in base layer `__call__`. (A. Unique TensorFlower, 2016-11-29)
  Change: 140547974
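
  A sketch of the functional interface and the `training` argument; the placeholder shapes are arbitrary:

      import tensorflow as tf

      x = tf.placeholder(tf.float32, [None, 10])
      training = tf.placeholder(tf.bool, [])

      # Dropout is applied only when `training` evaluates to True; at
      # evaluation time the op behaves as an identity.
      y = tf.layers.dropout(x, rate=0.5, training=training)
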
* Introduce the FullyConnected layer class and functional wrapper. (A. Unique TensorFlower, 2016-11-23)
  Change: 140064894
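
  A sketch of the layer-class / functional-wrapper pairing introduced here, shown with the names these eventually shipped under in the public API (`Dense` / `tf.layers.dense`, assuming the later rename of `FullyConnected`):

      import tensorflow as tf

      x = tf.placeholder(tf.float32, [None, 8])

      # Object-oriented form: construct the layer, then apply it.
      layer = tf.layers.Dense(units=3)
      y1 = layer(x)

      # Functional wrapper: constructs and applies the layer in one call.
      y2 = tf.layers.dense(x, units=3)
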