Commit log
* PiperOrigin-RevId: 215447391
* Added a `data_format` argument to flatten, allowing it to be changed at inference time.
* Fixes an issue where losses created while executing eagerly were returned as unevaluated lambdas in a defun.
  Lazily evaluates Layer losses by default when possible. Even when graph building, this is generally preferable (e.g. for losses called in a while_loop).
  Allows calls to Layer.add_loss when executing eagerly, but only for losses which are not conditional on inputs (no activity regularizers).
  PiperOrigin-RevId: 214947108
* self.test_session() was deprecated in 9962eb5e84b15e309410071b06c2ed2d6148ed44 because its name confuses readers of the test. Moving to cached_session() instead, which is more explicit about:
  * the fact that the session may be reused.
  * the fact that the session is not closed even when using a "with self.test_session()" statement.
  PiperOrigin-RevId: 209701635
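A minimal pure-Python sketch of the two properties listed above (hypothetical `Session`/`TestCase` stand-ins, not TensorFlow's tf.test code):

```python
class Session:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


class _CachedSessionContext:
    def __init__(self, sess):
        self.sess = sess

    def __enter__(self):
        return self.sess

    def __exit__(self, *exc):
        # Deliberately does NOT close the session on context exit.
        return False


class TestCase:
    _cached = None

    def cached_session(self):
        # The same session object is reused across calls within a test.
        if self._cached is None:
            self._cached = Session()
        return _CachedSessionContext(self._cached)


tc = TestCase()
with tc.cached_session() as s1:
    pass
with tc.cached_session() as s2:
    pass
print(s1 is s2, s1.closed)  # → True False
```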
* self.test_session() was deprecated in cl/208545396 because its behavior confuses readers of the test. Moving to self.session() instead.
  PiperOrigin-RevId: 209696110
* PiperOrigin-RevId: 209446309
* This fix addresses the issue raised in #21525, where the documentation of tf.layers.dense is incorrect. Specifically,
  `outputs = activation(inputs.kernel + bias) Where`
  should read
  `outputs = activation(inputs * kernel + bias) where`
  Fixes #21525.
  Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
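A tiny pure-Python sketch of the corrected formula (a hypothetical helper for illustration, not TensorFlow's implementation):

```python
def dense(inputs, kernel, bias, activation=lambda x: x):
    # outputs = activation(inputs * kernel + bias)
    # inputs: length-n vector; kernel: n x m matrix; bias: length-m vector.
    pre = [sum(x * w for x, w in zip(inputs, col)) + b
           for col, b in zip(zip(*kernel), bias)]
    return [activation(v) for v in pre]

print(dense([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]], [1.0, 1.0]))  # → [2.0, 3.0]
```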
* Back-ticks are now converted to links in the api_docs generator. With the new docs repo we're moving to simplify the docs pipeline and make everything more readable.
  By doing this we no longer get test failures for symbols that don't exist (`tf.does_not_exist` will not get a link).
  There is also no longer a way to set custom link text. That's okay.
  This is the result of the following regex replacement (plus a couple of manual edits):
  re: @\{([^$].*?)(\$.+?)?}
  sub: `\1`
  Which does the following replacements:
  "@{tf.symbol}" --> "`tf.symbol`"
  "@{tf.symbol$link_text}" --> "`tf.symbol`"
  PiperOrigin-RevId: 208042358
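The replacement above can be reproduced with Python's `re` module (a sketch of the same pattern and substitution, applied to a made-up docstring):

```python
import re

# Pattern and substitution from the commit message above.
pattern = r"@\{([^$].*?)(\$.+?)?}"
docs = 'See @{tf.symbol} and @{tf.concat$this link text}.'

# Group 1 captures the symbol; optional group 2 swallows "$link_text".
print(re.sub(pattern, r"`\1`", docs))
# → See `tf.symbol` and `tf.concat`.
```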
* PiperOrigin-RevId: 207623153
* API. These args will be used for distributed variables.
  Migrate all usages of `tower_local_var_scope` to use the new args.
  PiperOrigin-RevId: 203855963
* PiperOrigin-RevId: 203039199
* PiperOrigin-RevId: 201652888
* PiperOrigin-RevId: 201204573
* PiperOrigin-RevId: 201152785
* This makes it more convenient to use layers of different dtypes in a model. Instead of having to manually cast intermediate tensors between layers of different dtypes, they will be cast automatically.
  This is also useful for the upcoming mixed precision API.
  PiperOrigin-RevId: 200783477
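A minimal pure-Python sketch of the automatic-cast idea (hypothetical `Tensor`/`Layer` stand-ins, not TensorFlow's code):

```python
class Tensor:
    def __init__(self, values, dtype):
        self.values, self.dtype = values, dtype


class Layer:
    def __init__(self, dtype="float32"):
        self.dtype = dtype

    def __call__(self, x):
        # Cast the incoming tensor to the layer's dtype before call(),
        # so users need not insert casts between layers themselves.
        if x.dtype != self.dtype:
            x = Tensor(x.values, self.dtype)
        return self.call(x)

    def call(self, x):
        return x


layer = Layer(dtype="float16")
out = layer(Tensor([1.0, 2.0], "float32"))
print(out.dtype)  # → float16
```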
* compatible with mixed precision training APIs.
  tf.layers.foo_layer(inputs) creates a tf.layers.FooLayer(dtype=inputs.dtype) and immediately invokes __call__() on the input.
  The dtype in the FooLayer() constructor isn't needed. It also stands in the way of the global mixed precision dtype we plan to add in the future.
  PiperOrigin-RevId: 200524027
* Revert #18413: too many internal test failures due to the name scope change this change caused.
  Revert #18192: cannot use re2::StringPiece internally; need an alternative for the set call. Will pull and clean this up in a separate change.
  PiperOrigin-RevId: 197991247
* PiperOrigin-RevId: 197097430
* This is in preparation for removing the _USE_C_API toggle altogether.
  PiperOrigin-RevId: 196920481
* Working on untangling TF/Estimator deps. Some core TF code depends on Estimator by using the fn_args utility function within Estimator.
  PiperOrigin-RevId: 196277612
* exporting symbols to the TensorFlow API.
  PiperOrigin-RevId: 194426855
* PiperOrigin-RevId: 194274698
* dtype.
  PiperOrigin-RevId: 192634133
* - tf.layers layers now subclass tf.keras.layers layers.
  - tf.keras.layers is now agnostic to variable scopes and global collections (future-proof). It also uses ResourceVariable everywhere by default.
  - As a result, tf.keras.layers is in general lower-complexity, with fewer hacks and workarounds. However, some of the current code is temporary (variable creation should arguably be moved to Checkpointable, and there are some dependency issues that will require later refactors).
  - The legacy tf.layers behavior is kept, with references to variable scopes and global collections injected in the subclassed tf.layers.base.Layer class (the content of tf.layers.base.Layer is the complexity differential between the old implementation and the new one).
  Note: this refactor slightly changes the behavior of tf.layers.base.Layer, by disabling extreme edge-case behavior that either has long been invalid, or is dangerous and should most definitely be disabled. This will not affect any users, since such behaviors only existed in the base Layer unit tests. The behaviors disabled are:
  - The option to create reusable variables in `call` (already invalid for some time).
  - The option to use a variable scope to create layer variables outside of the layer while not having the layer track such variables locally.
  PiperOrigin-RevId: 192339798
* model.
  Had to modify the __call__ method in the base Layer class so that it can work with feature-style inputs, in which case we lazily convert the inputs to tensors instead of providing tensors as inputs upfront.
  PiperOrigin-RevId: 191655445
* PiperOrigin-RevId: 190976338
* PiperOrigin-RevId: 190953197
* PiperOrigin-RevId: 190858242
* PiperOrigin-RevId: 190835392
* Makes the tf.keras.Layer.__call__ signature identical to tf.layers.Layer.__call__, but makes passing positional arguments other than "inputs" an error in most cases. The only case where it is allowed is subclassed Models which do not have an "inputs" argument to their call() method.
  This means subclassed Models no longer need to pass all but the first argument as a keyword argument (or do list packing/unpacking) when call() takes multiple Tensor arguments.
  Includes errors for cases where it is ambiguous whether an argument indicates an input, but otherwise doesn't do much to support non-"inputs" call() signatures for shape inference or deferred Tensors. The definition of an input/non-input is pretty clear, so that cleanup will mostly be tracking down all of the users of "self.call" and getting them to pass inputs as positional arguments where necessary.
  PiperOrigin-RevId: 190787899
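The rule described above can be sketched in a few lines of pure Python with `inspect` (a hypothetical `check_positional_args` helper, not the actual Keras implementation):

```python
import inspect

def check_positional_args(call_fn, args):
    # If call() has an "inputs" parameter, only that first argument may be
    # passed positionally; everything else must be a keyword argument.
    params = list(inspect.signature(call_fn).parameters)
    if "inputs" in params and len(args) > 1:
        raise TypeError(
            "Arguments other than `inputs` must be passed as keywords.")

def call(inputs, training=None):
    return inputs

check_positional_args(call, (1,))           # OK: only `inputs` is positional
try:
    check_positional_args(call, (1, True))  # raises TypeError
except TypeError as e:
    print(e)  # → Arguments other than `inputs` must be passed as keywords.
```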
* specifying an input shape in advance (the model gets built on fit/etc., like subclassed models).
  PiperOrigin-RevId: 190161771
|
| |
PiperOrigin-RevId: 189945839
|
|
|
|
| |
PiperOrigin-RevId: 189666053
|
|
|
|
| |
PiperOrigin-RevId: 189404070
|
|
|
|
|
|
* Shockingly, this one was also due to PySequence_GetItem.
  PiperOrigin-RevId: 188765548
* PiperOrigin-RevId: 188731961
* PiperOrigin-RevId: 188540944
* This is in preparation to introduce one public, stable symbol: tf.executing_eagerly() (i.e., part of moving APIs related to eager execution from "contrib" to a namespace where we provide API stability guarantees).
  PiperOrigin-RevId: 188212646
* Before:
    entry {
      name: "EagerLinearRegressionBenchmark.eager_train_cpu"
      iters: 2000
      wall_time: 2.45178794861
      extras {
        key: "examples_per_sec"
        value {
          double_value: 52206.7987456
        }
      }
    }
  After:
    entry {
      name: "EagerLinearRegressionBenchmark.eager_train_cpu"
      iters: 2000
      wall_time: 1.9873790741
      extras {
        key: "examples_per_sec"
        value {
          double_value: 64406.4344182
        }
      }
    }
  PiperOrigin-RevId: 188068838
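As a sanity check on the numbers above (pure arithmetic, no TensorFlow needed), the wall-time speedup should match the examples/sec improvement:

```python
# Benchmark figures copied from the entry above.
before_wall, after_wall = 2.45178794861, 1.9873790741
before_eps, after_eps = 52206.7987456, 64406.4344182

speedup = before_wall / after_wall   # how much faster per iteration
eps_ratio = after_eps / before_eps   # throughput improvement

print(round(speedup, 3), round(eps_ratio, 3))  # → 1.234 1.234
```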
* PiperOrigin-RevId: 188059096
* - make it possible to use a model that was compiled with a TF optimizer (do not require a Keras optimizer)
  - do not require input to be a dict (input_fn supports plain arrays)
  - do not require `config` to be a RunConfig instance; it can now be a dict (better UX)
  - make it possible to use a subclassed model (caveat: weights are not preserved, yet)
  - clear error message when the model isn't compiled; improve various error messages
  PiperOrigin-RevId: 187959927
* (This change is mostly API goldens by volume.)
  Layers will inherit from CheckpointableBase, since they do variable management themselves. A __setattr__ override would also likely slow down functional layers significantly.
  I believe the plan for Model is to piggyback on its existing __setattr__ override rather than having Model inherit from CheckpointableBase through Layer and Checkpointable itself.
  PiperOrigin-RevId: 187215512
* Also moves these methods into their own file in python/framework. This avoids further bloating control_flow_ops.py and makes the BUILD deps easier for a future change I'm working on.
  PiperOrigin-RevId: 187055501