path: root/tensorflow/contrib/lite/kernels/bidirectional_sequence_rnn_test.cc
Commit history (message, author, date):
* Add support for time-major input in the bidirectional RNN Op. (A. Unique TensorFlower, 2018-10-09)

    PiperOrigin-RevId: 216419983
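    The time_major option changes only the memory layout of the sequence
    tensor, not the computation. A minimal sketch of the difference,
    assuming the conventional [max_time, batch_size, input_size] vs.
    [batch_size, max_time, input_size] shapes (the helper names below are
    illustrative, not part of the kernel):

        #include <cstddef>

        // Offset of element (t, b, d) in a flat buffer laid out
        // [max_time, batch_size, input_size] (time-major).
        size_t TimeMajorOffset(size_t t, size_t b, size_t d,
                               size_t batch_size, size_t input_size) {
          return (t * batch_size + b) * input_size + d;
        }

        // Offset of the same element when laid out
        // [batch_size, max_time, input_size] (batch-major).
        size_t BatchMajorOffset(size_t t, size_t b, size_t d,
                                size_t max_time, size_t input_size) {
          return (b * max_time + t) * input_size + d;
        }

    Time-major layout keeps all batch entries for one time step contiguous,
    which is the natural order for a step-by-step RNN loop.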
* Add the option of merging bidirectional RNN and LSTM outputs into a single output tensor. (A. Unique TensorFlower, 2018-10-03)

    This is useful if the output of both directions will be passed to the
    next layer as a single output, as it avoids adding a concatenation op,
    which can be costly on mobile devices, where memory movement is
    relatively expensive.

    PiperOrigin-RevId: 215616140
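    A sketch of the merged layout this option produces, assuming forward
    and backward cells emitting fw_units and bw_units activations per time
    step. WriteMergedStep is a hypothetical helper that shows only the
    indexing, not the actual kernel code:

        #include <algorithm>
        #include <vector>

        // With merged outputs, step t occupies one row of width
        // fw_units + bw_units: forward activations first, backward after.
        void WriteMergedStep(const float* fw_step, const float* bw_step,
                             int fw_units, int bw_units, int t,
                             std::vector<float>& output) {
          const int row = fw_units + bw_units;
          std::copy(fw_step, fw_step + fw_units,
                    output.begin() + t * row);
          std::copy(bw_step, bw_step + bw_units,
                    output.begin() + t * row + fw_units);
        }

    Since the kernel writes both directions into one tensor directly, the
    graph needs no separate concatenation op and no extra pass over the two
    output buffers.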
* Introduce auxiliary input and allow "cross-linking" in the bidirectional RNN Op. (A. Unique TensorFlower, 2018-09-03)
    This introduces a connection between forward and backward cells across
    subsequent layers when stacking bidirectional RNN Ops on top of each
    other. In more detail:

    Previously, the Op had only one input that was fed into the layer in
    the following way:

          INPUT  (INPUT_REVERSED)
            |         |
        ---------------------
        | FW_RNN    BW_RNN |    <----- bidi-RNN cell
        ---------------------          (with 1 input / 2 outputs)
            |         |
          FW_OUT    BW_OUT

    Now, the Op can have an (optional) auxiliary input in the following
    way:

        AUX_INPUT     (AUX_INPUT_REVERSED)
            |                |
        INPUT   |    (INPUT_R'D.)  |
          |     |        |         |
        ----------------------------
        |   \   /         \   /   |
        |   FW_RNN       BW_RNN   |    <----- bidi-RNN cell
        ----------------------------          (with 2 inputs / 2 outputs)
            |                |
          FW_OUT           BW_OUT

    When stacking these Ops, previously only the following flow was
    allowed:

             Input
            /     \
        FW_RNN1   BW_RNN1
           |         |
           |         |
        FW_RNN2   BW_RNN2
           |         |
           |         |
        FW_RNN3   BW_RNN3
            \     /
             Output

    With the introduction of an auxiliary input to the bidi-RNN layer, the
    forward output of the i-th layer (FW_RNNi) is fed as the input to the
    next layer (i.e., to both FW_RNN{i+1} and BW_RNN{i+1}), and the
    backward output is fed as the auxiliary input to both FW_RNN{i+1} and
    BW_RNN{i+1}. This way, the stacking can be changed to allow
    "cross-linking" between subsequent layers:

             Input
            /     \
        FW_RNN1   BW_RNN1
           | \     / |
           | /     \ |
        FW_RNN2   BW_RNN2
           | \     / |
           | /     \ |
        FW_RNN3   BW_RNN3
            \     /
             Output

    PiperOrigin-RevId: 211401475
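    The wiring described above fits in a few lines. BidiLayer below is a
    hypothetical stand-in for one bidirectional layer; the only point is
    that each layer's forward output becomes the next layer's primary
    input while its backward output becomes the next layer's auxiliary
    input:

        #include <functional>
        #include <tuple>
        #include <utility>
        #include <vector>

        using Tensor = std::vector<float>;
        // Hypothetical layer: (input, aux_input) -> (fw_out, bw_out).
        using BidiLayer = std::function<
            std::pair<Tensor, Tensor>(const Tensor&, const Tensor&)>;

        std::pair<Tensor, Tensor> StackCrossLinked(
            const Tensor& input, const std::vector<BidiLayer>& layers) {
          Tensor fw = input;  // primary input to the bottom layer
          Tensor bw;          // no auxiliary input at the bottom
          for (const BidiLayer& layer : layers) {
            // The FW output feeds the next layer's primary input; the BW
            // output feeds its auxiliary input (both FW_RNN{i+1} and
            // BW_RNN{i+1} see both, per the diagram above).
            std::tie(fw, bw) = layer(fw, bw);
          }
          return {fw, bw};
        }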
* Update bidirectional RNN to support state API. (A. Unique TensorFlower, 2018-08-29)

    PiperOrigin-RevId: 210719446
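    The commit message does not spell out the API, but the general idea of
    a state API is that the caller owns the hidden-state tensors and can
    inspect or reset them between invocations, rather than the op keeping
    them internal. A sketch under that assumption, with illustrative names
    only:

        #include <algorithm>
        #include <vector>

        struct BidiRnnState {
          std::vector<float> fw_hidden;  // [batch_size * fw_units]
          std::vector<float> bw_hidden;  // [batch_size * bw_units]

          // Zero both directions, e.g. at a sequence boundary.
          void Reset() {
            std::fill(fw_hidden.begin(), fw_hidden.end(), 0.0f);
            std::fill(bw_hidden.begin(), bw_hidden.end(), 0.0f);
          }
        };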
* Internal change. (A. Unique TensorFlower, 2018-04-26)

    PiperOrigin-RevId: 194468535
* Add bidirectional sequence RNN to TFLite Ops. (A. Unique TensorFlower, 2018-01-26)

    PiperOrigin-RevId: 183465032
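    For reference, a minimal sketch of the computation the op performs,
    assuming a vanilla RNN cell h_t = tanh(W_in * x_t + W_rec * h_{t-1} +
    bias) run once forward and once over the reversed sequence. The
    dimensions and weight layout are illustrative; the real kernel (whose
    behavior this test file covers) lives next to this test:

        #include <cmath>
        #include <cstddef>
        #include <vector>

        using Vec = std::vector<float>;
        using Seq = std::vector<Vec>;  // one vector per time step

        Vec RnnStep(const Vec& x, const Vec& h,
                    const std::vector<Vec>& w_in,   // [units][input_size]
                    const std::vector<Vec>& w_rec,  // [units][units]
                    const Vec& bias) {
          Vec out(bias.size());
          for (size_t u = 0; u < out.size(); ++u) {
            float acc = bias[u];
            for (size_t d = 0; d < x.size(); ++d) acc += w_in[u][d] * x[d];
            for (size_t d = 0; d < h.size(); ++d) acc += w_rec[u][d] * h[d];
            out[u] = std::tanh(acc);
          }
          return out;
        }

        // Runs the cell over the sequence; when reverse is true, iterates
        // from the last step to the first (the backward direction).
        Seq RunDirection(const Seq& input, const std::vector<Vec>& w_in,
                         const std::vector<Vec>& w_rec, const Vec& bias,
                         bool reverse) {
          Seq outputs(input.size());
          Vec h(bias.size(), 0.0f);
          for (size_t i = 0; i < input.size(); ++i) {
            const size_t t = reverse ? input.size() - 1 - i : i;
            h = RnnStep(input[t], h, w_in, w_rec, bias);
            outputs[t] = h;
          }
          return outputs;
        }

    The bidirectional op runs this loop twice (reverse = false and true)
    with separate weights per direction, exposing FW_OUT and BW_OUT as the
    two output tensors.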