| Commit message | Author | Age |
when the then/else branch funcs of If or the body func of While do not have
stateful ops.
They are lowered to the same XLA ops.
One use case is in the S4TF compiler: https://github.com/apple/swift/pull/18509
PiperOrigin-RevId: 207977126
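The stateless-branch condition described above can be sketched as a simple pre-lowering check. This is an illustrative sketch only, not the actual TF-to-XLA bridge code; the op-name set and the representation of a function as a list of op types are assumptions made for the example.

```python
# Hypothetical sketch (not the real lowering pass): a functional If/While
# may only be lowered to an XLA op when its branch/body functions contain
# no stateful ops, so the pass first scans each function's nodes.

# A small, assumed set of stateful op names for illustration.
STATEFUL_OPS = {"Assign", "AssignVariableOp", "QueueEnqueueV2", "RandomUniform"}

def has_stateful_ops(func_nodes):
    """func_nodes: list of op-type strings making up a function body."""
    return any(op in STATEFUL_OPS for op in func_nodes)

def can_lower_if(then_branch, else_branch):
    # Lower the If op only if both branch functions are stateless.
    return not (has_stateful_ops(then_branch) or has_stateful_ops(else_branch))

print(can_lower_if(["Add", "Mul"], ["Sub"]))      # True
print(can_lower_if(["Add"], ["QueueEnqueueV2"]))  # False
```

The same check applies to a While op's cond and body functions.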
suggestion from apassos@ -- the underlying lib->Instantiate() does the caching.
PiperOrigin-RevId: 206993242
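The caching behavior the commit relies on can be illustrated with a minimal memoizing instantiate. This is a hedged sketch under assumed names (`FunctionLibrary`, `instantiate`), not TF's actual `FunctionLibraryRuntime::Instantiate` API; it only shows why a second cache in the kernel would be redundant.

```python
# Illustrative sketch: the library's instantiate() already memoizes on
# (function name, attrs), so callers get the same handle back and no
# duplicate instantiation occurs. Names here are hypothetical.

class FunctionLibrary:
    def __init__(self):
        self._handles = {}       # (name, attrs) -> handle
        self._next_handle = 0
        self.instantiations = 0  # counts real (non-cached) instantiations

    def instantiate(self, name, attrs=()):
        key = (name, tuple(attrs))
        if key not in self._handles:  # cache hit skips this branch entirely
            self.instantiations += 1
            self._handles[key] = self._next_handle
            self._next_handle += 1
        return self._handles[key]

lib = FunctionLibrary()
h1 = lib.instantiate("then_branch")
h2 = lib.instantiate("then_branch")  # cached: same handle, no new instantiation
print(h1 == h2, lib.instantiations)  # True 1
```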
when we only run that If op for its side effects (e.g., enqueuing tensors).
Also extended the kernel impl to handle the case where the kernel works with
multiple functional libs through its lifetime (b/37549631). The code is modeled
after the WhileOp kernel impl.
An example graph function that runs If for its side-effects is:
function {
  signature {
    name: "S12control_flow23testTensorEnqueueInCondyySb_SftF.tf_CPU.device_partition"
    input_arg {
      name: "arg_0"
      type: DT_FLOAT # DT_BOOL
    }
  }
  node_def {
    name: "op/testTensorEnqueueInCond.14.14"
    op: "Const"
    device: "/device:CPU:0"
    attr {
      key: "dtype"
      value {
        type: DT_FLOAT
      }
    }
    attr {
      key: "value"
      value {
        tensor {
          dtype: DT_FLOAT
          tensor_shape {
          }
          float_val: 1
        }
      }
    }
  }
  node_def {
    name: "op/testTensorEnqueueInCond_5.22.3"
    op: "If"
    input: "arg_0"
    input: "op/testTensorEnqueueInCond.14.14:output:0"
    attr {
      key: "Tcond"
      value {
        type: DT_FLOAT # DT_BOOL
      }
    }
    attr {
      key: "Tin"
      value {
        list {
          type: DT_FLOAT
        }
      }
    }
    attr {
      key: "Tout"
      value {
        list {
        }
      }
    }
    attr {
      key: "else_branch"
      value {
        func {
          name: "false/testTensorEnqueueInCond_4.22.3"
        }
      }
    }
    attr {
      key: "then_branch"
      value {
        func {
          name: "true/testTensorEnqueueInCond_3.22.3"
        }
      }
    }
  }
}
PiperOrigin-RevId: 206983563
Support nested cond_v2s.
PiperOrigin-RevId: 205356562
PiperOrigin-RevId: 199702086
PiperOrigin-RevId: 199674121
new_cond is a new implementation of tf.cond. Instead of emitting
control flow ops (i.e. Switch and Merge nodes), new_cond emits a
single If op, which represents the conditional branches as TF
functions.
With this change, users can use new_cond and take its gradient.
The idea is for new_cond to eventually replace tf.cond. There are
several functional and performance gaps that must be addressed first,
including:
* Gradients won't work on imported graphs
* Misc. limitations of TF functions (lack of collections, device scopes, etc.)
PiperOrigin-RevId: 199346735
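The core idea of the commit above can be sketched in plain Python: the conditional becomes a single "If"-like call whose branches are ordinary functions, rather than tensors routed through Switch/Merge nodes. This is an illustrative sketch of the concept only; `functional_cond` is a hypothetical name, not TF's cond_v2 implementation.

```python
# Minimal sketch of the branches-as-functions conditional: one op selects
# a branch function and calls it, so each branch is a differentiable unit
# instead of a subgraph wired through Switch/Merge control-flow ops.

def functional_cond(pred, true_fn, false_fn):
    """Single conditional op: pick a branch function and call it."""
    branch = true_fn if pred else false_fn
    return branch()

x = 3.0
out = functional_cond(x > 0, lambda: x * 2.0, lambda: -x)
print(out)  # 6.0
```

Because each branch is a named function, a gradient can be defined per branch function rather than by threading gradients through Switch/Merge nodes.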
When executing on GPU, synchronously copy cond result from device to host.
PiperOrigin-RevId: 196580820
We keep _If and _While. This moves the tests and Python generators.
The operators are not part of the public TensorFlow API.
PiperOrigin-RevId: 191344237
PiperOrigin-RevId: 185042663