| Commit message | Author | Age |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This benchmark creates many intermediate values, so we can make sure there's no performance overhead (it looks like there might be some currently, or it might come from some other difference). It also runs in a defun and in legacy graph mode.
Results from my machine:
entry {
  name: "CondWithManyIntermediatesBenchmark.benchmark_cond_v1_defun"
  iters: 500
  wall_time: 1.25822591782
}
entry {
  name: "CondWithManyIntermediatesBenchmark.benchmark_cond_v2_defun"
  iters: 500
  wall_time: 5.99376106262
}
entry {
  name: "CondWithManyIntermediatesBenchmark.benchmark_cond_v1_graph"
  iters: 500
  wall_time: 2.05277585983
}
entry {
  name: "CondWithManyIntermediatesBenchmark.benchmark_cond_v2_graph"
  iters: 500
  wall_time: 2.84808516502
}
Clearly we have some work to do! I haven't looked into the time differences at all yet.
PiperOrigin-RevId: 216202325
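As a quick sanity check on the numbers above, the v2-vs-v1 slowdown ratios can be computed directly from the reported wall times:

```python
# Wall times copied from the benchmark entries above (seconds).
v1_defun, v2_defun = 1.25822591782, 5.99376106262
v1_graph, v2_graph = 2.05277585983, 2.84808516502

defun_slowdown = v2_defun / v1_defun  # ~4.76x slower in a defun
graph_slowdown = v2_graph / v1_graph  # ~1.39x slower in legacy graph mode
print(round(defun_slowdown, 2), round(graph_slowdown, 2))
```

The defun path is where almost all of the overhead shows up, which matches the "some work to do" comment above.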
|
|
|
|
|
|
|
|
| |
tf.gradients currently returns [None] when the gradient of unconnected variables
is required. This backward-compatible change adds the option to have zero
tensors returned that match the dimensions of the input tensor.
PiperOrigin-RevId: 215725488
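A plain-Python sketch of the behavior this option adds (helper names are hypothetical; the real implementation lives in tf.gradients): wherever a gradient would be None for an unconnected input, return a zero tensor matching that input's shape instead.

```python
def zeros_like(shape):
    """Build a nested list of zeros with the given shape (stand-in for tf.zeros)."""
    if not shape:
        return 0.0
    return [zeros_like(shape[1:]) for _ in range(shape[0])]

def fill_unconnected(grads, input_shapes):
    """Replace None gradients with zeros shaped like the corresponding input."""
    return [g if g is not None else zeros_like(s)
            for g, s in zip(grads, input_shapes)]

print(fill_unconnected([None, [1.0, 2.0]], [(2,), (2,)]))
```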
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
to replace it.
This change prepares `tf.data` for TensorFlow 2.0, where `tf.contrib` will no longer exist. It retains the pre-existing endpoints in `tf.contrib.data` with deprecation warnings.
Note there are some exceptions to the move:
* Deprecated symbols in `tf.contrib.data` have not been moved to `tf.data.experimental`, because replacements already exist.
* `tf.contrib.data.LMDBDataset` has not been moved, because we plan to move it to a SIG-maintained repository.
* `tf.contrib.data.assert_element_shape()` has not yet been moved, because it depends on functionality in `tf.contrib`, and it will move in a later change.
* `tf.contrib.data.AUTOTUNE` has not yet been moved, because we have not yet determined how to `tf_export()` a Python integer.
* The stats-related API endpoints have not yet appeared in a released version of TensorFlow, so these are moved to `tf.data.experimental` without retaining an endpoint in `tf.contrib.data`.
In addition, this change includes some build rule and ApiDef refactoring:
* Some of the "//third_party/tensorflow/python:training" dependencies had to be split in order to avoid a circular dependency.
* The `tf.contrib.stateless` ops now have a private core library for the generated wrappers (and accordingly are hidden in their ApiDef) so that `tf.data.experimental.sample_from_datasets()` can depend on them.
PiperOrigin-RevId: 215304249
|
|
|
|
|
|
|
|
|
|
| |
Add a single test flag for enabling v2 control flow in tests since we do not plan to support v2 ops with legacy control flow.
We have 2 test decorators now:
@with_control_flow_v2: Enables all tests in a class to run with v2 control flow.
@disable_control_flow_v2: Disables a test function from running in v2. I have removed the skiptests to avoid setup/teardown overheads.
Enable tests in control_flow_ops_py_test that run with control_flow_v2.
PiperOrigin-RevId: 214980108
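A rough plain-Python sketch of how a class-level decorator like @with_control_flow_v2 can flip a flag for every test method (the flag and names here are hypothetical stand-ins, not the real TensorFlow test utilities):

```python
# Hypothetical module-level flag standing in for the control-flow-v2 toggle.
ENABLE_CONTROL_FLOW_V2 = False

def with_control_flow_v2(cls):
    """Wrap every test_* method so the flag is enabled while it runs."""
    for name, fn in list(vars(cls).items()):
        if name.startswith("test") and callable(fn):
            def wrapper(self, _fn=fn):
                global ENABLE_CONTROL_FLOW_V2
                old = ENABLE_CONTROL_FLOW_V2
                ENABLE_CONTROL_FLOW_V2 = True
                try:
                    return _fn(self)
                finally:
                    ENABLE_CONTROL_FLOW_V2 = old  # restore for other tests
            setattr(cls, name, wrapper)
    return cls

@with_control_flow_v2
class MyTest:
    def test_sees_v2(self):
        return ENABLE_CONTROL_FLOW_V2

print(MyTest().test_sees_v2())
```

A @disable_control_flow_v2 on an individual method would simply restore the original, unwrapped function, avoiding skip-test setup/teardown overhead.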
|
|
|
|
|
|
|
|
| |
NOTE: All ops and kernels previously defined in
tensorflow/contrib/data have had their names prefixed with
"Experimental" to indicate that they are not (yet) stable, and thus
not subject to backwards or forwards compatibility guarantees.
PiperOrigin-RevId: 214940819
|
|
|
|
| |
PiperOrigin-RevId: 214587760
|
|
|
|
| |
PiperOrigin-RevId: 213875284
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
standard python `print` method, and deprecates the old `tf.Print` operator (to be removed in v2.0).
It follows the design doc specified in https://github.com/tensorflow/community/pull/14 and additionally incorporates the community feedback and design review decisions.
This CL adds two new internal graph operators: a StringFormat operator that formats a template string with a list of input tensors to insert into the string and outputs a string scalar containing the result, and a PrintV2 operator that prints a string scalar to a specified output stream or logging level.
The formatting op is exposed at `tf.strings.Format`. A new python method is exposed at `tf.print` that takes a list of inputs that may be nested structures and may contain tensors, formats them nicely using the formatting op, and returns a PrintV2 operator that prints them. In Eager mode and inside defuns this PrintV2 operator will automatically be executed, but in graph mode it will need to be either added to `sess.run`, or used as a control dependency for other operators being executed.
As compared to the previous print function, the new print function:
- Has an API that more closely aligns with the standard python3 print
- Supports changing the print logging level/output stream
- Allows printing arbitrary (optionally nested) data structures as opposed to just flat lists of tensors
- Supports printing sparse tensors
- Changes the printed tensor format to show a more meaningful summary (recursively print the first and last elements of each tensor dimension, instead of just the first few elements of the tensor regardless of dimension).
PiperOrigin-RevId: 213709924
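The summary format in the last bullet can be sketched in plain Python (the edge-item count of 2 is chosen arbitrarily here; the real operator works on tensors, not lists):

```python
def summarize(t, edge=2):
    """Recursively show the first and last `edge` elements of each dimension."""
    if not isinstance(t, list):
        return str(t)
    if len(t) <= 2 * edge:
        items = [summarize(x, edge) for x in t]
    else:
        items = ([summarize(x, edge) for x in t[:edge]] + ["..."] +
                 [summarize(x, edge) for x in t[-edge:]])
    return "[" + " ".join(items) + "]"

print(summarize(list(range(10))))  # -> [0 1 ... 8 9]
```

Unlike printing only the first few flat elements, this keeps every dimension visible in the output.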
|
|
|
|
|
|
|
|
| |
Supports single and double derivatives but does not support nesting yet.
https://github.com/tensorflow/community/pull/13
PiperOrigin-RevId: 213565971
|
|
|
|
| |
PiperOrigin-RevId: 213372241
|
|
|
|
| |
PiperOrigin-RevId: 212645190
|
|
|
|
|
|
| |
then bazel will build TensorFlow API version 2.0. In all other cases, it will build API version 1.*.
PiperOrigin-RevId: 212016666
|
|
|
|
|
|
|
|
| |
TF 2.0 to return a no-arg callable to output a learning rate, instead of directly outputting a learning rate tensor.
This brings the graph mode API in line with the eager execution API, where this change was made to allow changing the learning rate value across different invocations of optimizer functions.
PiperOrigin-RevId: 211726295
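The callable-learning-rate pattern is easy to illustrate outside TensorFlow (class and method names here are hypothetical): the optimizer stores a no-arg callable and invokes it on each use, so the rate can change between invocations.

```python
class Optimizer:
    def __init__(self, learning_rate):
        # Accept either a number or a no-arg callable returning one.
        self._lr = learning_rate if callable(learning_rate) else (lambda: learning_rate)

    def step_size(self):
        return self._lr()  # re-evaluated on every call

lr = 0.1
opt = Optimizer(lambda: lr)
first = opt.step_size()   # 0.1
lr = 0.01                 # change the rate between invocations
second = opt.step_size()  # 0.01
print(first, second)
```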
|
|
|
|
|
|
|
|
| |
This modifies
https://github.com/tensorflow/tensorflow/commit/834da2c3fddab1bbbce742db572cfe65dd320fcd
to work with tfe.defun in addition to the legacy Defun implementation.
PiperOrigin-RevId: 211663702
|
|
|
|
| |
PiperOrigin-RevId: 210439649
|
|
|
|
|
|
|
|
| |
Subsequent calls after a successful call to enable eager execution are a no-op.
This is mostly to support colab where I might accidentally re-execute this and
have to clear a large stack trace / split my import cell.
PiperOrigin-RevId: 210344960
|
|
|
|
|
|
| |
This is necessary to run multi-worker MirroredStrategy and CollectiveAllReduceStrategy with estimator.
PiperOrigin-RevId: 210192378
|
|
|
|
| |
PiperOrigin-RevId: 210107257
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This requires a few changes:
- Make func_graph_from_py_func public (not part of the official public API though)
- Make function_def_to_graph return the new FuncGraph implementation.
- Disables some cond_v2 tests until we get them working with the new FuncGraph implementation.
- Add outer_graph field to FuncGraph.
- Add external_captures and internal_captures properties to FuncGraph for readability.
- Remove extra_inputs/extra_args terminology from cond_v2_impl for readability.
- Use compat.as_str() around Graph._functions keys. In Python 3, we were somehow getting a mix of str and bytes objects.
PiperOrigin-RevId: 210015940
|
|
|
|
| |
PiperOrigin-RevId: 209986849
|
|
|
|
| |
PiperOrigin-RevId: 209683367
|
|
|
|
| |
PiperOrigin-RevId: 209637025
|
|\
| |
| |
| | |
PiperOrigin-RevId: 209623532
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This operation computes:
ref[i_1, ..., i_n, indices[i_1, ..., i_n, j]] = updates[i_1, ..., i_n, j]
That is, it assumes that `ref`, `indices` and `updates` have a series of leading dimensions that are the same for all of them, and the updates are performed on the last dimension of indices.
PiperOrigin-RevId: 209566652
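The update rule above, modeled on plain nested lists for the rank-2 case (the helper name is hypothetical; the real op works on tensors):

```python
def batch_scatter_update(ref, indices, updates):
    """Apply ref[i, indices[i, j]] = updates[i, j] for every leading index i."""
    for i in range(len(indices)):
        for j in range(len(indices[i])):
            ref[i][indices[i][j]] = updates[i][j]
    return ref

ref = [[0, 0, 0], [0, 0, 0]]
batch_scatter_update(ref, [[2, 0], [1, 2]], [[5, 6], [7, 8]])
print(ref)  # [[6, 0, 5], [0, 7, 8]]
```

Note how row i of `indices` only ever addresses row i of `ref`: the leading dimensions of `ref`, `indices`, and `updates` line up, and scattering happens along the last dimension.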
|
| |
| |
| |
| | |
PiperOrigin-RevId: 209299599
|
| |
| |
| |
| |
| |
| | |
Add session_creator and a couple properties to worker context which then are used to configure monitored sessions.
PiperOrigin-RevId: 209026599
|
| |\
| |/
|/| |
|
| |
| |
| |
| | |
PiperOrigin-RevId: 208695032
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
with few dependencies. This allows us to import it in some places without creating circular dependencies, as the original file imported many things.
2. Move the stack used in distribution strategy context to the graph. This allows us to use different strategies in different graphs (e.g. in train and eval).
This fixes #21412 and #21180.
PiperOrigin-RevId: 208680454
|
| |
| |
| |
| | |
PiperOrigin-RevId: 208503071
|
| |\
| |/
|/| |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| | |
than relying on the caller to reshape to rank 2.
Guard the Python library code that reshapes softmax inputs to rank 1 with a forward compatibility check; after the forward compatibility window expires, the Python code will no longer reshape to rank 1.
PiperOrigin-RevId: 207606326
|
| |\
| |/
|/| |
|
| |
| |
| |
| | |
PiperOrigin-RevId: 207215039
|
| |
| |
| |
| |
| |
| | |
Pure refactor, in preparation for adding a higher-level checkpoint management utility. This utility will also need to work with the Checkpoint proto, and globbing it onto saver.py seems dirty.
PiperOrigin-RevId: 207179646
|
| |
| |
| |
| |
| |
| | |
ops.py. This change does not depend on the new config.experimental.client_handles_error_formatting flag. I also attempted to modify relevant interpolated error strings so an uninterpolated error message still reads correctly if the interpolation tokens are removed.
PiperOrigin-RevId: 207075862
|
| |
| |
| |
| |
| |
| | |
Use object-based save/restore to make dataset/iterator checkpointable in both graph as well as eager mode.
PiperOrigin-RevId: 206998349
|
| |\
| |/
|/|
| | |
based on PR review comments.
|
| |
| |
| |
| | |
PiperOrigin-RevId: 206166233
|
| |\
| |/
|/| |
|
| |
| |
| |
| | |
* Added nGraph bridge as a third_party to be built with TensorFlow based on user selection.
* Added a limited set of C++ unit tests to verify the correctness of the computation
|
| |
| |
| |
| | |
PiperOrigin-RevId: 205875586
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Analogous to the existing support for custom collections.Sequence types. They need to be constructable with the same arguments as the base type for pack_sequence_as to work.
Leaves PyDict_* calls for dict subclasses, but adds more general (and likely much slower) fallbacks for instances of collections.Mapping which are not dict subclasses.
My hope is that this support will be enough so I can use a wrapper around dicts which does not inherit from dict in __setattr__ tracking (some tests failed without it). Inheriting from dict and properly shadowing a real dict seems impossible with CPython (since to shadow without synchronization issues, the wrapper needs to respond to updates to the original dict, but to work with e.g. {}.update(dict_subclass) the wrapper's C storage needs to also be updated).
PiperOrigin-RevId: 205858082
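The constructability requirement can be illustrated with a tiny pack_sequence_as-style helper (a sketch, not the real nest code): rebuilding a Mapping calls `type(structure)(...)` with dict-style pair arguments, which is exactly why a custom mapping type must accept the same constructor arguments as its base type.

```python
from collections.abc import Mapping

def pack_as(structure, flat):
    """Rebuild `structure` with leaves taken in order from `flat`."""
    it = iter(flat)
    def rebuild(s):
        if isinstance(s, Mapping):
            # Reconstruct with the same type; keys visited in sorted order.
            return type(s)((k, rebuild(s[k])) for k in sorted(s))
        return next(it)  # non-mapping values are treated as leaves
    return rebuild(structure)

print(pack_as({"b": 0, "a": 0}, [1, 2]))  # {'a': 1, 'b': 2}
```

A Mapping subclass whose constructor takes different arguments would break at the `type(s)(...)` call, matching the caveat in the message above.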
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This change makes Estimator.train() support v2 summaries (tf.contrib.summary.*) out-of-the-box, to match the support for v1 summaries. Estimator.train() will now handle the boilerplate necessary to initialize a file writer and enable summary writing every N steps, and will ensure that its own automatically exported summaries (for loss and global_step/sec) get written to the same underlying events file.
As part of this change, tf.train.SummarySaverHook, tf.train.CheckpointSaverHook, tf.train.StepCounterHook, and tf.train.ProfilerHook have also been adapted to write summaries using the v2 summary system (via a compatibility layer), instead of using FileWriterCache.
A couple additional smaller changes are:
- the 'session' parameter to FileWriter() can now be a callable returning a tf.Session instance.
- the introduction of tf.contrib.summary.record_summaries_if() which takes a boolean tensor for direct control of tf.contrib.summary.should_record_summaries().
- EstimatorSpec.train_op, besides a tf.Operation, is now allowed to be any Tensor-equivalent object rather than just a tf.Tensor.
PiperOrigin-RevId: 205843986
|
| |
| |
| |
| |
| |
| | |
This is part of the work to make available kernels easier to query at runtime.
PiperOrigin-RevId: 205802663
|
| |
| |
| |
| | |
PiperOrigin-RevId: 205679162
|
| |
| |
| |
| |
| |
| |
| |
| | |
Previously session destruction was delayed until the destructor for the Python session object. If the session ends up requiring the Python cycle collector for deallocation, it could end up persisting for a long, non-deterministic period. This can tie up resources and lead to out of memory issues.
This change introduces a SessionRef which causes session.close() to block until all outstanding run operations are finished and tears down the underlying session.
PiperOrigin-RevId: 205670577
|
| |
| |
| |
| |
| |
| | |
Support nested cond_v2s.
PiperOrigin-RevId: 205356562
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
information support to error interpolation.
This CL adds a new private property on ops: Operation._colocation_dict. This property returns a dictionary whose keys are the nodes with which this Operation is colocated, and whose values are traceable_stack.TraceableObject instances. The TraceableObject instances record the location of the relevant colocation context manager but have the "obj" field set to None to prevent leaking private data.
For example, suppose file_a contained these lines:
file_a.py:
14: node_a = tf.constant(3, name='NODE_A')
15: with tf.colocate_with(node_a):
16: node_b = tf.constant(4, name='NODE_B')
Then a TraceableObject t_obj representing the colocation context manager would have these member values:
t_obj.obj = None
t_obj.name = 'NODE_A'
t_obj.filename = 'file_a.py'
t_obj.lineno = 15
and node_b.op._colocation_dict would return the dictionary
{ 'NODE_A': t_obj }
PiperOrigin-RevId: 205035378
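The member values listed above can be modeled directly (a minimal stand-in; the real class lives in traceable_stack):

```python
class TraceableObject:
    """Minimal stand-in recording where a colocation context was created."""
    def __init__(self, obj, name, filename, lineno):
        self.obj = obj            # set to None to avoid leaking private data
        self.name = name
        self.filename = filename
        self.lineno = lineno

t_obj = TraceableObject(None, 'NODE_A', 'file_a.py', 15)
colocation_dict = {'NODE_A': t_obj}  # what node_b.op._colocation_dict returns
print(colocation_dict['NODE_A'].filename, colocation_dict['NODE_A'].lineno)
```

Note that `lineno` points at the `with tf.colocate_with(...)` line (15), not at the colocated op itself, so error interpolation can name the context manager that caused the colocation.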
|