path: root/tensorflow/contrib/lite/kernels/internal
Commit message (author, date)
* Remove Dims from types.h, create build structure. (A. Unique TensorFlower, 2018-10-08)
    PiperOrigin-RevId: 216191084
* Relax some unnecessary 4D array restrictions. (A. Unique TensorFlower, 2018-10-05)
    PiperOrigin-RevId: 215910400
* Merge the different LSTM EvalFloat/EvalHybrid calls into a single file. (A. Unique TensorFlower, 2018-10-05)
    PiperOrigin-RevId: 215870962
* Fix quantization util test to pass with defined behavior on 32-bit architectures. (Alan Chiao, 2018-10-04)
    PiperOrigin-RevId: 215757844
* Experimental interpreter, kernels, and example running TensorFlow Lite on a microcontroller. (Pete Warden, 2018-10-04)
    PiperOrigin-RevId: 215748973
* Kernel signature reworking, remove Dims from tensor functions. (A. Unique TensorFlower, 2018-09-27)
    PiperOrigin-RevId: 214883775
* Move obsolete kernel code to legacy files. (A. Unique TensorFlower, 2018-09-27)
    PiperOrigin-RevId: 214879388
* Migrate a few conv kernels to use new kernel signatures. (A. Unique TensorFlower, 2018-09-27)
    PiperOrigin-RevId: 214831837
* Update kernel evals to use new kernel signatures. (A. Unique TensorFlower, 2018-09-27)
    PiperOrigin-RevId: 214767788
* Update kernel evals to use new kernel signatures. (A. Unique TensorFlower, 2018-09-27)
    PiperOrigin-RevId: 214763814
* Kernel signature reworking, update kernel DepthConcatenation. (A. Unique TensorFlower, 2018-09-26)
    PiperOrigin-RevId: 214668695
* Kernel signature reworking, misc kernel improvements and migrations. (A. Unique TensorFlower, 2018-09-26)
    PiperOrigin-RevId: 214661332
* Add checks for dilation_rate. (Suharsh Sivakumar, 2018-09-26)
    PiperOrigin-RevId: 214627202
* Update kernel evals to use new kernel signatures. (A. Unique TensorFlower, 2018-09-24)
    PiperOrigin-RevId: 214384090
* Update kernel evals to use new kernel signatures. (A. Unique TensorFlower, 2018-09-24)
    PiperOrigin-RevId: 214377809
* Portability preparation for more cross-platform prototyping. (Pete Warden, 2018-09-24)
    PiperOrigin-RevId: 214346240
* Make 8-bit reduce-sum op handle rescaling. (A. Unique TensorFlower, 2018-09-21)
    PiperOrigin-RevId: 214062241
* Kernel signature reworking, misc fixes. (A. Unique TensorFlower, 2018-09-21)
    PiperOrigin-RevId: 214004752
* Added ABSL_DEPRECATED annotations to various deprecated TensorFlow functions. (A. Unique TensorFlower, 2018-09-19)
    PiperOrigin-RevId: 213693027
* Convert more kernel signatures to use runtime shapes. (A. Unique TensorFlower, 2018-09-19)
    PiperOrigin-RevId: 213673402
* Convert more kernel signatures to use runtime shapes. (A. Unique TensorFlower, 2018-09-19)
    PiperOrigin-RevId: 213651158
* Convert more kernel signatures to use runtime shapes. (A. Unique TensorFlower, 2018-09-19)
    PiperOrigin-RevId: 213640434
* Convert more kernel signatures to use runtime shapes. (A. Unique TensorFlower, 2018-09-18)
    PiperOrigin-RevId: 213536334
* Fix unused variable error on powerpc. (Suharsh Sivakumar, 2018-09-17)
    PiperOrigin-RevId: 213386145
* Add generic fallback optimized implementations for dilated DepthwiseConv. (Suharsh Sivakumar, 2018-09-17)
    PiperOrigin-RevId: 213350122
* Convert more kernel signatures to use runtime shapes. (A. Unique TensorFlower, 2018-09-17)
    PiperOrigin-RevId: 213316034
* Numerics tweak to symmetric quantization. (Alan Chiao, 2018-09-17)
    PiperOrigin-RevId: 213314024
* Convert more kernel signatures to use runtime shapes. (A. Unique TensorFlower, 2018-09-17)
    PiperOrigin-RevId: 213281730
* Convert more kernel signatures to use runtime shapes. (A. Unique TensorFlower, 2018-09-17)
    PiperOrigin-RevId: 213275003
* Convert more kernel signatures to use runtime shapes. (A. Unique TensorFlower, 2018-09-14)
    PiperOrigin-RevId: 213037039
* Updates to parameters, and to kernel helper functions. (A. Unique TensorFlower, 2018-09-14)
    PiperOrigin-RevId: 213023245
* Convert more kernel signatures to use runtime shapes. (A. Unique TensorFlower, 2018-09-14)
    PiperOrigin-RevId: 213012717
* Convert more kernel signatures to use runtime shapes. (A. Unique TensorFlower, 2018-09-14)
    PiperOrigin-RevId: 213007905
* Dilated Depthwise Conv reference implementations. (Suharsh Sivakumar, 2018-09-13)
    PiperOrigin-RevId: 212884951
* Clean ups related to runtime shapes refactoring. (A. Unique TensorFlower, 2018-09-13)
    PiperOrigin-RevId: 212861571
* Convert more kernel signatures to use runtime shapes. (A. Unique TensorFlower, 2018-09-13)
    PiperOrigin-RevId: 212834379
* Convert more kernel signatures to use runtime shapes. (A. Unique TensorFlower, 2018-09-13)
    PiperOrigin-RevId: 212826308
* Fix convolution bug when input and filter dimensions match. (Jared Duke, 2018-09-12)
    TFLite has an optimized matmul path for cases where the input and filter
    tensors have matching width+height. However, this case doesn't properly
    account for multiple *batches*. Account for this and add an appropriate
    test. Credit to zgxnet for the bug and proposed fix. Fixes #21817.
    PiperOrigin-RevId: 212645329
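  To make the batch issue concrete, here is a minimal C++ sketch, not the
  actual TFLite source; Shape, IsWholeImageFilter, and ConvEval are
  hypothetical names. When the filter covers the whole input image, the
  convolution degenerates to a single matmul, but the GEMM must still carry
  one row per batch:

    #include <cstdio>

    struct Shape {
      int batch, height, width, depth;  // NHWC layout assumed
    };

    // Degenerate case: each filter application sees the entire image.
    bool IsWholeImageFilter(const Shape& input, const Shape& filter) {
      return input.height == filter.height && input.width == filter.width;
    }

    void ConvEval(const Shape& input, const Shape& filter, int output_depth) {
      if (IsWholeImageFilter(input, filter)) {
        // Correct: M = input.batch. The buggy path effectively used M = 1,
        // so outputs for every batch after the first were wrong.
        const int m = input.batch;
        const int k = input.height * input.width * input.depth;
        const int n = output_depth;
        std::printf("single GEMM: M=%d K=%d N=%d\n", m, k, n);
        return;
      }
      // ... general im2col + GEMM path ...
    }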
* Fixing broadcast pow. (A. Unique TensorFlower, 2018-09-11)
    PiperOrigin-RevId: 212521825
* Modularize TF Lite interface definitions and reorganize file structure. (Pete Warden, 2018-09-07)
    PiperOrigin-RevId: 212064501
* Convert more kernel signatures to use runtime shapes. (A. Unique TensorFlower, 2018-09-06)
    PiperOrigin-RevId: 211874785
* Convert more kernel signatures to use runtime shapes. (A. Unique TensorFlower, 2018-09-05)
    PiperOrigin-RevId: 211722113
* Introduce auxiliary input and allow "cross-linking" in the bidirectional LSTM Op. (A. Unique TensorFlower, 2018-09-05)
    This introduces a connection between forward and backward cells across
    subsequent layers when stacking bidirectional LSTM Ops on top of each other.

    In more detail: previously, the Op had only one input, which was fed into
    the layer in the following way:

           INPUT   (INPUT_REVERSED)
             |           |
        -------------------------
        |  FW_LSTM    BW_LSTM   |   <----- bidi-LSTM cell (with one input / two outputs)
        -------------------------
             |           |
          FW_OUT      BW_OUT

    Now, the Op can have an (optional) auxiliary input, used in the following way:

         AUX_INPUT    (AUX_INPUT_REVERSED)
             |               |
      INPUT  |    (INPUT_R'D.)
        |    |        |      |
        -------------------------
        |  \ /         \ /      |
        |  FW_LSTM    BW_LSTM   |   <----- bidi-LSTM cell (with 2 inputs / 2 outputs)
        -------------------------
             |               |
          FW_OUT          BW_OUT

    When stacking these Ops, previously only the following flow was allowed:

             Input
            /     \
       FW_LSTM1  BW_LSTM1
          |         |
          |         |
       FW_LSTM2  BW_LSTM2
          |         |
          |         |
       FW_LSTM3  BW_LSTM3
           \       /
            Output

    With the introduction of an auxiliary input to the bidi-LSTM layer, the
    forward output (FW_LSTMi) of the ith layer is fed as the input to the next
    layer (hence, as input to both FW_LSTM{i+1} and BW_LSTM{i+1}), and the
    backward output is fed as the auxiliary input to both FW_LSTM{i+1} and
    BW_LSTM{i+1}. This way, the stacking can be changed to allow
    "cross-linking" between subsequent layers (see the sketch after this
    entry), in the following way:

             Input
            /     \
       FW_LSTM1  BW_LSTM1
          |  \   /  |
          |   \ /   |
          |   / \   |
          |  /   \  |
       FW_LSTM2  BW_LSTM2
          |  \   /  |
          |   \ /   |
          |   / \   |
          |  /   \  |
       FW_LSTM3  BW_LSTM3
           \       /
            Output

    PiperOrigin-RevId: 211659472
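  The wiring described above can be summarized in a minimal C++ sketch. This
  is a hypothetical illustration only: Tensor, BidiOutputs, and
  RunBidiLstmLayer are stand-ins, not the actual TFLite custom-op API.

    #include <vector>

    using Tensor = std::vector<float>;  // placeholder for a real tensor type

    struct BidiOutputs {
      Tensor fw_out, bw_out;
    };

    // `aux_input` may be empty (the first layer has no auxiliary input).
    BidiOutputs RunBidiLstmLayer(const Tensor& input, const Tensor& aux_input) {
      // Stub so the sketch is self-contained; a real layer would run the
      // forward and backward LSTM cells over the (possibly reversed)
      // input and auxiliary-input sequences.
      return {input, aux_input.empty() ? input : aux_input};
    }

    // Cross-linked stacking: layer i+1 takes FW_OUT(i) as its main input and
    // BW_OUT(i) as its auxiliary input, so both directions see both streams.
    BidiOutputs RunCrossLinkedStack(const Tensor& input, int num_layers) {
      BidiOutputs out = RunBidiLstmLayer(input, /*aux_input=*/Tensor{});
      for (int i = 1; i < num_layers; ++i) {
        out = RunBidiLstmLayer(/*input=*/out.fw_out, /*aux_input=*/out.bw_out);
      }
      return out;
    }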
* Convert more kernel signatures to use runtime shapes. (A. Unique TensorFlower, 2018-09-05)
    PiperOrigin-RevId: 211633744
* Replace floating point functionality with integer alternative for microcontrollers. (Pete Warden, 2018-09-04)
    PiperOrigin-RevId: 211543125
* Create layer norm LSTM custom Op. (Jian Li, 2018-09-04)
    PiperOrigin-RevId: 211505721
* Fix Split, convert kernel signature to use runtime shapes. (A. Unique TensorFlower, 2018-09-04)
    PiperOrigin-RevId: 211459453
* Introduce auxiliary input and allow "cross-linking" in the bidirectional RNN Op. (A. Unique TensorFlower, 2018-09-03)
    This introduces a connection between forward and backward cells across
    subsequent layers when stacking bidirectional RNN Ops on top of each other.

    In more detail: previously, the Op had only one input, which was fed into
    the layer in the following way:

           INPUT   (INPUT_REVERSED)
             |           |
        -----------------------
        |  FW_RNN    BW_RNN   |   <----- bidi-RNN cell (with one input / two outputs)
        -----------------------
             |           |
          FW_OUT      BW_OUT

    Now, the Op can have an (optional) auxiliary input, used in the following way:

         AUX_INPUT    (AUX_INPUT_REVERSED)
             |               |
      INPUT  |    (INPUT_R'D.)
        |    |        |      |
        ------------------------
        |  \ /         \ /     |
        |  FW_RNN    BW_RNN    |   <----- bidi-RNN cell (with 2 inputs / 2 outputs)
        ------------------------
             |               |
          FW_OUT          BW_OUT

    When stacking these Ops, previously only the following flow was allowed:

             Input
            /     \
       FW_RNN1  BW_RNN1
          |        |
          |        |
       FW_RNN2  BW_RNN2
          |        |
          |        |
       FW_RNN3  BW_RNN3
           \      /
            Output

    With the introduction of an auxiliary input to the bidi-RNN layer, the
    forward output (FW_RNNi) of the ith layer is fed as the input to the next
    layer (hence, as input to both FW_RNN{i+1} and BW_RNN{i+1}), and the
    backward output is fed as the auxiliary input to both FW_RNN{i+1} and
    BW_RNN{i+1}. This way, the stacking can be changed to allow
    "cross-linking" between subsequent layers (a cell-level sketch follows
    this entry), in the following way:

             Input
            /     \
       FW_RNN1  BW_RNN1
          |  \   /  |
          |   \ /   |
          |   / \   |
          |  /   \  |
       FW_RNN2  BW_RNN2
          |  \   /  |
          |   \ /   |
          |   / \   |
          |  /   \  |
       FW_RNN3  BW_RNN3
           \      /
            Output

    PiperOrigin-RevId: 211401475
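  At the cell level, the auxiliary input is just a second input projection
  added into the same pre-activation. A minimal C++ sketch, with illustrative
  weight names (W_ih, W_ah, W_hh) rather than the real kernel's parameters:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    using Vec = std::vector<float>;
    using Mat = std::vector<Vec>;

    // Dense matrix-vector product, y = W * x.
    Vec MatVec(const Mat& W, const Vec& x) {
      Vec y(W.size(), 0.0f);
      for (std::size_t i = 0; i < W.size(); ++i)
        for (std::size_t j = 0; j < x.size(); ++j) y[i] += W[i][j] * x[j];
      return y;
    }

    // One step: h' = tanh(W_ih * input + W_ah * aux_input + W_hh * h + b).
    // With a zero-size auxiliary term this reduces to a plain RNN cell,
    // which is why the auxiliary input can remain optional.
    Vec RnnStep(const Mat& W_ih, const Mat& W_ah, const Mat& W_hh, const Vec& b,
                const Vec& input, const Vec& aux_input, const Vec& h) {
      Vec pre = MatVec(W_ih, input);
      const Vec aux = MatVec(W_ah, aux_input);
      const Vec rec = MatVec(W_hh, h);
      for (std::size_t i = 0; i < pre.size(); ++i)
        pre[i] += aux[i] + rec[i] + b[i];
      for (float& v : pre) v = std::tanh(v);  // activation
      return pre;
    }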
* Fixes to hybrid conv. Add additional tests for pointwise conv. (Alan Chiao, 2018-08-31)
    PiperOrigin-RevId: 211085787
* Only code movement for pack/unpack. (A. Unique TensorFlower, 2018-08-30)
    PiperOrigin-RevId: 211028752