path: root/tensorflow/python/layers
Commit message    Author    Age
* Merge pull request #17672 from joeyearsley:patch-3 (TensorFlower Gardener, 2018-10-02)
|\      PiperOrigin-RevId: 215447391
| * Updated ordering for kwargs (joe yearsley, 2018-10-02)
| * Fixed Tests (josephyearsley, 2018-09-29)
| * Fixed Pylint Issues (josephyearsley, 2018-09-29)
| * Extended to N-dims (josephyearsley, 2018-09-29)
| * pylint compliance (josephyearsley, 2018-09-29)
| * added dtype to test (josephyearsley, 2018-09-29)
| * Added Flatten Test (josephyearsley, 2018-09-29)
| * Update core.py (Joe Yearsley, 2018-09-29)
|/      Added `data_format` to flatten to allow changing it at inference time.
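The `data_format` handling the branch above adds to flatten can be sketched with NumPy: for `channels_first` inputs the channel axis is moved to the end before flattening, generalized to N dimensions. This is an illustrative sketch under stated assumptions, not TensorFlow's implementation; the name `flatten_nd` is hypothetical.

```python
import numpy as np

def flatten_nd(x, data_format="channels_last"):
    """Flatten all axes except the batch axis (axis 0). For
    channels_first data, move the channel axis (axis 1) to the end
    first, so the flattened layout matches channels_last."""
    if data_format == "channels_first" and x.ndim > 2:
        # NC... -> N...C before collapsing the non-batch axes.
        x = np.moveaxis(x, 1, -1)
    return x.reshape(x.shape[0], -1)

batch = np.arange(24, dtype=np.float32).reshape(2, 3, 4)  # (batch, channels, width)
print(flatten_nd(batch, "channels_first").shape)  # (2, 12)
```

Because the channel axis is moved before reshaping, the first three flattened values of sample 0 come from `batch[0, :, 0]` rather than `batch[0, 0, :3]`.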
* Simplify eager/graph Layer.losses conditionals (Allen Lavoie, 2018-09-28)

    Fixes an issue where losses created while executing eagerly were returned as unevaluated lambdas in a defun. Lazily evaluates Layer losses by default when possible. Even when graph building this is generally a better thing to do (e.g. losses called in a while_loop).

    Allows calls to Layer.add_loss when executing eagerly, but only for losses which are not conditional on inputs (no activity regularizers).

    PiperOrigin-RevId: 214947108
* Move from deprecated self.test_session() to self.cached_session(). (A. Unique TensorFlower, 2018-08-21)

    self.test_session() has been deprecated in 9962eb5e84b15e309410071b06c2ed2d6148ed44 as its name confuses readers of the test. Moving to cached_session() instead, which is more explicit about:
    * the fact that the session may be reused.
    * the fact that the session is not closed even when doing a "with self.test_session()" statement.

    PiperOrigin-RevId: 209701635
* Move from deprecated self.test_session() to self.session() when a graph is set. (A. Unique TensorFlower, 2018-08-21)

    self.test_session() has been deprecated in cl/208545396 as its behavior confuses readers of the test. Moving to self.session() instead.

    PiperOrigin-RevId: 209696110
* initializer might be a tensor so do not try to convert it to a boolean (Alexandre Passos, 2018-08-20)
    PiperOrigin-RevId: 209446309
* Fix incorrect doc in tf.layers.dense (Yong Tang, 2018-08-10)

    This fix addresses the issue raised in 21525 where the doc in tf.layers.dense is incorrect. Specifically:
    `outputs = activation(inputs.kernel + bias) Where` -> `outputs = activation(inputs * kernel + bias) where`

    This fix fixes 21525.

    Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
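The corrected docstring formula can be checked with a small NumPy sketch (illustrative only; the function name `dense` here is a stand-in, not TensorFlow's implementation):

```python
import numpy as np

def dense(inputs, kernel, bias, activation=None):
    # outputs = activation(inputs * kernel + bias), where `*` is a
    # matrix multiply, matching the corrected tf.layers.dense doc.
    outputs = inputs @ kernel + bias
    return activation(outputs) if activation is not None else outputs

x = np.array([[1.0, 2.0]])               # (batch=1, in_features=2)
w = np.array([[1.0, 0.0], [0.0, 1.0]])   # (in_features, units)
b = np.array([0.5, -0.5])
print(dense(x, w, b))  # [[1.5 1.5]]
```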
* Remove usage of magic-api-link syntax from source files. (Mark Daoust, 2018-08-09)

    Back-ticks are now converted to links in the api_docs generator. With the new docs repo we're moving to simplify the docs pipeline and make everything more readable.

    By doing this we no longer get test failures for symbols that don't exist (`tf.does_not_exist` will not get a link).

    There is also no way to set custom link text. That's okay.

    This is the result of the following regex replacement (+ a couple of manual edits):

    re:  @\{([^$].*?)(\$.+?)?}
    sub: `\1`

    Which does the following replacements:

    "@{tf.symbol}" --> "`tf.symbol`"
    "@{tf.symbol$link_text}" --> "`tf.symbol`"

    PiperOrigin-RevId: 208042358
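The regex replacement quoted in that commit can be reproduced directly in Python; both documented substitutions fall out of the one pattern:

```python
import re

# Pattern and substitution taken verbatim from the commit message.
PATTERN = r"@\{([^$].*?)(\$.+?)?}"

def delink(text):
    # "@{tf.symbol}"            -> "`tf.symbol`"
    # "@{tf.symbol$link_text}"  -> "`tf.symbol`" (link text dropped)
    return re.sub(PATTERN, r"`\1`", text)

print(delink("See @{tf.nn.relu} and @{tf.layers.dense$the dense layer}."))
# See `tf.nn.relu` and `tf.layers.dense`.
```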
* Internal cleanup (Scott Zhu, 2018-08-06)
    PiperOrigin-RevId: 207623153
* Add `synchronization` and `aggregation` args to the layer `add_weight()` API. (Pavithra Vijay, 2018-07-09)

    These args will be used for distributed variables. Migrate all usages of `tower_local_var_scope` to using the new args.

    PiperOrigin-RevId: 203855963
* batch_norm: Whether to use batch normalization after each hidden layer. (A. Unique TensorFlower, 2018-07-02)
    PiperOrigin-RevId: 203039199
* Replace unnecessary `()` in `run_in_graph_and_eager_modes()`. (Tom Hennigan, 2018-06-22)
    PiperOrigin-RevId: 201652888
* Automated g4 rollback of changelist 200783477 (Reed Wanderman-Milne, 2018-06-19)
    PiperOrigin-RevId: 201204573
* Update documentation for the layer-input-casting feature. (James Qin, 2018-06-19)
    PiperOrigin-RevId: 201152785
* Automatically cast layer inputs to the layer's dtype. (Reed Wanderman-Milne, 2018-06-15)

    This makes it more convenient to use layers of different dtypes in a model. Instead of having to manually cast intermediate tensors between layers of different dtypes, they will automatically be cast. This is also useful for the upcoming mixed precision API.

    PiperOrigin-RevId: 200783477
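The casting behavior described above can be sketched with a toy layer using NumPy dtypes. This is a hypothetical illustration of the idea only (`ToyLayer` is not TensorFlow's Layer class):

```python
import numpy as np

class ToyLayer:
    """Toy sketch: cast inputs to the layer's dtype before call(),
    so no manual casting is needed between layers of different dtypes."""

    def __init__(self, dtype=np.float32):
        self.dtype = np.dtype(dtype)

    def __call__(self, inputs):
        inputs = np.asarray(inputs)
        if inputs.dtype != self.dtype:
            # The automatic cast the commit describes.
            inputs = inputs.astype(self.dtype)
        return self.call(inputs)

    def call(self, inputs):
        return inputs * 2.0

layer = ToyLayer(np.float16)
out = layer(np.ones(3, dtype=np.float32))  # float32 input, float16 layer
print(out.dtype)  # float16
```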
* Remove hardcoded dtype in tf.layers.xxx() function calls to make them compatible with mixed precision training APIs. (James Qin, 2018-06-14)

    tf.layers.foolayer(inputs) creates a tf.layers.FooLayer(dtype=inputs.dtype) and immediately invokes __call__() on the input. The dtype in the FooLayer() constructor isn't needed. Plus, it stands in the way of the global mixed precision dtype we plan to add in the future.

    PiperOrigin-RevId: 200524027
* Merge changes from github. (Yifei Feng, 2018-05-24)

    Revert #18413. Too many internal test failures due to the name scope change caused by this change.
    Revert #18192. Cannot use re2::StringPiece internally. Need alternative for set call. Will pull and clean this up in a separate change.

    PiperOrigin-RevId: 197991247
* Move Keras code out of _impl folder and remove API files. (Pavithra Vijay, 2018-05-17)
    PiperOrigin-RevId: 197097430
* Remove _USE_C_API staging in tests now that the C API is enabled by default. (Skye Wanderman-Milne, 2018-05-16)
    This is in preparation for removing the _USE_C_API toggle altogether.
    PiperOrigin-RevId: 196920481
* Move fn_args utility into core TensorFlow from Estimator. (Michael Case, 2018-05-11)

    Working on untangling TF/Estimator deps. Some core TF code depends on Estimator by using the fn_args utility function within Estimator.

    PiperOrigin-RevId: 196277612
* Removing @@ comments from core TensorFlow. They are no longer needed for exporting symbols to the TensorFlow API. (Anna R, 2018-04-26)
    PiperOrigin-RevId: 194426855
* Removing remove_undocumented calls from tensorflow/python. (Anna R, 2018-04-25)
    PiperOrigin-RevId: 194274698
* Make default weights initializer in `base_layers.Layer` suitable for their dtype. (A. Unique TensorFlower, 2018-04-12)
    PiperOrigin-RevId: 192634133
* Refactor layers: (Francois Chollet, 2018-04-10)

    - tf.layers layers now subclass tf.keras.layers layers.
    - tf.keras.layers is now agnostic to variable scopes and global collections (future-proof). It also uses ResourceVariable everywhere by default.
    - As a result tf.keras.layers is in general lower-complexity, with fewer hacks and workarounds. However some of the current code is temporary (variable creation should arguably be moved to Checkpointable, and there are some dependency issues that will require later refactors).
    - The legacy tf.layers layers behavior is kept, with references to variable scopes and global collections injected in the subclassed tf.layers.base.Layer class (the content of tf.layers.base.Layer is the complexity differential between the old implementation and the new one).

    Note: this refactor does slightly change the behavior of tf.layers.base.Layer, by disabling extreme edge-case behavior that either has long been invalid, or is dangerous and should most definitely be disabled. This will not affect any users since such behaviors only existed in the base Layer unit tests. The behaviors disabled are:
    - Option to create reusable variables in `call` (already invalid for some time).
    - Option to use a variable scope to create layer variables outside of the layer while not having the layer track such variables locally.

    PiperOrigin-RevId: 192339798
* Creates a LinearModel (inherits from keras.training.Model) that creates a linear model. (Rohan Jain, 2018-04-04)

    Had to modify the __call__ method in the base layer class so that it could work with feature-style inputs, in which case we lazily convert the inputs to tensors instead of providing tensors as inputs upfront.

    PiperOrigin-RevId: 191655445
* Internal change. (Yuefeng Zhou, 2018-03-29)
    PiperOrigin-RevId: 190976338
* Automated g4 rollback of changelist 190858242 (Jianwei Xie, 2018-03-29)
    PiperOrigin-RevId: 190953197
* Automated g4 rollback of changelist 190835392 (Anna R, 2018-03-28)
    PiperOrigin-RevId: 190858242
* Merge changes from github. (Jianwei Xie, 2018-03-28)
    PiperOrigin-RevId: 190835392
* Allow positional arguments in tf.keras.Model subclasses (Allen Lavoie, 2018-03-28)

    Makes the tf.keras.Layer.__call__ signature identical to tf.layers.Layer.__call__, but makes passing positional arguments other than "inputs" an error in most cases. The only case where it's allowed is subclassed Models which do not have an "inputs" argument to their call() method.

    This means subclassed Models no longer need to pass all but the first argument as a keyword argument (or do list packing/unpacking) when call() takes multiple Tensor arguments.

    Includes errors for cases where it is ambiguous whether an argument indicates an input, but otherwise doesn't do much to support non-"inputs" call() signatures for shape inference or deferred Tensors. The definition of an input/non-input is pretty clear, so that cleanup will mostly be tracking down all of the users of "self.call" and getting them to pass inputs as positional arguments if necessary.

    PiperOrigin-RevId: 190787899
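The rule that commit describes (extra positional arguments are only allowed when call() does not name its first argument "inputs") can be sketched with `inspect`. This is a toy check of the stated rule, not Keras's actual validation logic:

```python
import inspect

def allows_extra_positional_args(call_fn):
    """Toy version of the rule above: extra positional arguments are
    allowed only when call()'s first (non-self) parameter is not
    named "inputs"."""
    params = list(inspect.signature(call_fn).parameters)
    if params and params[0] == "self":
        params = params[1:]
    return not (params and params[0] == "inputs")

class SubclassedModel:
    def call(self, x, y):  # no "inputs" argument: positional args OK
        return x + y

class GraphNetwork:
    def call(self, inputs, training=False):  # "inputs": extras are an error
        return inputs

print(allows_extra_positional_args(SubclassedModel.call))  # True
print(allows_extra_positional_args(GraphNetwork.call))     # False
```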
* Refactor keras.Sequential and enable building Sequential models without specifying an input shape in advance (the model gets built on fit/etc., like subclassed models). (Francois Chollet, 2018-03-22)
    PiperOrigin-RevId: 190161771
* Merge changes from github. (Jacques Pienaar, 2018-03-21)
    PiperOrigin-RevId: 189945839
* Fix test failure (Yuefeng Zhou, 2018-03-19)
    PiperOrigin-RevId: 189666053
* Consolidate all moving_average updates in batchnorm into one implementation. (Yuefeng Zhou, 2018-03-16)
    PiperOrigin-RevId: 189404070
* Fix another eager PyObject leak (Allen Lavoie, 2018-03-12)
    Shockingly this one was also due to PySequence_GetItem.
    PiperOrigin-RevId: 188765548
* Plug a few more PyObject leaks, test for them. (Allen Lavoie, 2018-03-12)
    PiperOrigin-RevId: 188731961
* Eager: Fix a Dimension PyObject leak, test for it. (Allen Lavoie, 2018-03-09)
    PiperOrigin-RevId: 188540944
* eager: Rename in_eager_mode to executing_eagerly and get rid of in_graph_mode. (Asim Shankar, 2018-03-07)

    This is in preparation to introduce one public, stable symbol: tf.executing_eagerly() (i.e., part of moving APIs related to eager execution from "contrib" to a namespace where we provide API stability guarantees)

    PiperOrigin-RevId: 188212646
* Improvement to eager linear regression benchmark (Akshay Modi, 2018-03-06)

    Before:
    entry {
      name: "EagerLinearRegressionBenchmark.eager_train_cpu"
      iters: 2000
      wall_time: 2.45178794861
      extras { key: "examples_per_sec" value { double_value: 52206.7987456 } }
    }

    After:
    entry {
      name: "EagerLinearRegressionBenchmark.eager_train_cpu"
      iters: 2000
      wall_time: 1.9873790741
      extras { key: "examples_per_sec" value { double_value: 64406.4344182 } }
    }

    PiperOrigin-RevId: 188068838
* Layers bind to a graph when first called, not at __init__. (A. Unique TensorFlower, 2018-03-06)
    PiperOrigin-RevId: 188059096
* Fixes a number of usability issues with model_to_estimator, in particular: (Francois Chollet, 2018-03-05)

    - make it possible to use a model that was compiled with a TF optimizer (do not require a Keras optimizer)
    - do not require input to be a dict (input_fn supports plain arrays)
    - do not require `config` to be a RunConfig instance; it can now be a dict (better UX)
    - make it possible to use a subclassed model (caveat: weights are not preserved, yet)
    - clear error message when the model isn't compiled; improve various error messages

    PiperOrigin-RevId: 187959927
* Make Layers Checkpointable (Allen Lavoie, 2018-02-27)

    (This change is mostly API goldens by volume)

    Layers will inherit from CheckpointableBase since they do variable management themselves. A __setattr__ override would also likely slow down functional layers significantly. I believe the plan for Model is to piggyback on its existing __setattr__ override rather than having Model inherit from CheckpointableBase through Layer and Checkpointable itself.

    PiperOrigin-RevId: 187215512
* Actually expose smart_cond and smart_constant_value in tf.contrib.framework (Skye Wanderman-Milne, 2018-02-26)

    Also moves these methods into their own file in python/framework. This avoids further bloating control_flow_ops.py and makes the BUILD deps easier for a future change I'm working on.

    PiperOrigin-RevId: 187055501
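The idea behind smart_cond (when the predicate's value is statically known, take that branch at construction time instead of building a conditional) can be sketched in plain Python. This is a toy stand-in, not the tf.contrib.framework implementation:

```python
def smart_constant_value(pred):
    """Return the statically-known bool value of pred, or None.
    Toy stand-in: the real function inspects constant tensors."""
    if isinstance(pred, bool):
        return pred
    return None  # e.g. a tensor whose value isn't known until runtime

def smart_cond(pred, true_fn, false_fn):
    """If pred is statically known, call exactly one branch now;
    a real implementation would otherwise defer to tf.cond."""
    value = smart_constant_value(pred)
    if value is not None:
        return true_fn() if value else false_fn()
    raise NotImplementedError("dynamic predicate: would build tf.cond")

print(smart_cond(True, lambda: "a", lambda: "b"))  # a
```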