PiperOrigin-RevId: 216419983
output tensor.
This is useful if the output of both directions will be passed to the next layer as a single output: it avoids adding a concatenation op, which can be costly on mobile devices where memory movement is relatively expensive.
PiperOrigin-RevId: 215616140
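The fused-output layout described above can be illustrated with a minimal NumPy sketch (this is not the TFLite kernel itself; `bidi_rnn_merged`, `fw_step`, and `bw_step` are hypothetical names for illustration). Both directions write directly into halves of one preallocated tensor, so no separate concatenation op is needed:

```python
import numpy as np

def bidi_rnn_merged(x, fw_step, bw_step, hidden_size):
    """Run a forward and a backward RNN pass over x (shape [T, D]) and
    write both result sequences into a single preallocated output tensor
    of shape [T, 2 * hidden_size], avoiding a concatenation op."""
    T = x.shape[0]
    out = np.zeros((T, 2 * hidden_size), dtype=x.dtype)
    # Forward direction fills the first half of each output row.
    h = np.zeros(hidden_size, dtype=x.dtype)
    for t in range(T):
        h = fw_step(x[t], h)
        out[t, :hidden_size] = h
    # Backward direction fills the second half of each output row.
    h = np.zeros(hidden_size, dtype=x.dtype)
    for t in reversed(range(T)):
        h = bw_step(x[t], h)
        out[t, hidden_size:] = h
    return out
```

The same result could be produced by running the two directions into separate buffers and calling `np.concatenate`, at the cost of an extra copy of both output sequences.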
This introduces a connection between forward and backward cells across subsequent layers when stacking bidirectional RNN Ops on top of each other.
In more detail:
Previously, the Op had only one input that was fed into the layer in the
following way:
INPUT (INPUT_REVERSED)
| |
---------------------
| FW_RNN BW_RNN | <----- bidi-RNN cell (with one input / two outputs)
---------------------
| |
FW_OUT BW_OUT
Now, the Op can have an (optional) auxiliary input in the following way:
AUX_INPUT (AUX_INPUT_REVERSED)
| |
INPUT | (INPUT_R'D.)|
| | | |
-----------------------
| \ / \ / |
| FW_RNN BW_RNN | <----- bidi-RNN cell (with 2 inputs / 2 outputs)
-----------------------
| |
FW_OUT BW_OUT
When stacking these Ops, previously, only the following flow was allowed:
Input
/ \
FW_RNN1 BW_RNN1
| |
| |
FW_RNN2 BW_RNN2
| |
| |
FW_RNN3 BW_RNN3
\ /
Output
With the introduction of an auxiliary input to the bidi-RNN layer, the forward
(FW_RNNi) output of the ith layer is fed as the input to the next layer
(hence, as input to both FW_RNN{i+1} and BW_RNN{i+1}), and the backward output
is fed as the auxiliary input to both FW_RNN{i+1} and BW_RNN{i+1}. This way,
the stacking can be changed to allow for the "cross-linking" between
subsequent layers in the following way:
Input
/ \
FW_RNN1 BW_RNN1
| \ / |
| / \ |
FW_RNN2 BW_RNN2
| \ / |
| / \ |
FW_RNN3 BW_RNN3
\ /
Output
PiperOrigin-RevId: 211401475
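The stacking with cross-linking described above can be sketched in a few lines of NumPy (a minimal illustration under assumed shapes, not the TFLite implementation; `rnn_pass`, `stacked_bidi`, and the weight layout are hypothetical). Each layer after the first takes the previous layer's forward output as its main input and the previous backward output as the auxiliary input to both directions:

```python
import numpy as np

def rnn_pass(x, aux, w, w_aux, reverse=False):
    """One unidirectional RNN pass over x (shape [T, D]) with an optional
    auxiliary input sequence aux (shape [T, D_aux]); aux may be None."""
    T, H = x.shape[0], w.shape[1]
    h = np.zeros(H, dtype=x.dtype)
    out = np.zeros((T, H), dtype=x.dtype)
    steps = reversed(range(T)) if reverse else range(T)
    for t in steps:
        pre = x[t] @ w + h
        if aux is not None:
            pre = pre + aux[t] @ w_aux
        h = np.tanh(pre)
        out[t] = h
    return out

def stacked_bidi(x, layers):
    """layers: list of (w_fw, w_aux_fw, w_bw, w_aux_bw) weight tuples.
    The first layer gets no auxiliary input; every later layer receives
    the previous forward output as its input and the previous backward
    output as the auxiliary input to BOTH directions (the cross-link)."""
    inp, aux = x, None
    for (w_fw, wa_fw, w_bw, wa_bw) in layers:
        fw = rnn_pass(inp, aux, w_fw, wa_fw, reverse=False)
        bw = rnn_pass(inp, aux, w_bw, wa_bw, reverse=True)
        inp, aux = fw, bw
    return inp, aux
```

The final forward and backward outputs are returned separately here; in the scheme above they would be merged or consumed by the next op as appropriate.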
PiperOrigin-RevId: 210719446
PiperOrigin-RevId: 194468535
PiperOrigin-RevId: 183465032