path: root/tensorflow/contrib/quantize
* Rollback some quantization changes that break some models. (Mingxing Tan, 2018-10-03)
  Automated rollback of commit d3f14ef70cdf113f9d330c1f7c638003429a1dc4. Reverts #19894. PiperOrigin-RevId: 215678307
* Remove dependency on contrib model_variable. (Suharsh Sivakumar, 2018-10-02)
  Also remove add_arg_scope. PiperOrigin-RevId: 215426187
* Do not specify dilation rate to depthwise conv2d. (Suharsh Sivakumar, 2018-09-27)
  PiperOrigin-RevId: 214848057
* Merge pull request #19894 from manipopopo:fix_quantize (TensorFlower Gardener, 2018-09-26)
  PiperOrigin-RevId: 214724610
* Update links to TF Lite site. (Billy Lamberta, 2018-09-20)
  PiperOrigin-RevId: 213917881
* Remove restriction on scope for bypass operators. (Raghuraman Krishnamoorthi, 2018-09-20)
  Previously, the scope had to be of the form 'scope/<arbitrary_text>'. Relax the restriction to handle empty scopes. Enable this change to work for both fused and unfused batch norm layers. PiperOrigin-RevId: 213883621
* Fix routing of delayed quantized tensors (manipopopo, 2018-09-20)
* Fix routing of quantized tensors (manipopopo, 2018-09-20)
  The original tensor was not replaced with the quantized one when it had already been quantized.
* Update description of contrib.quantize (Raghuraman Krishnamoorthi, 2018-09-14)
  PiperOrigin-RevId: 212997520
* Remove dependency on graph_editor. (Suharsh Sivakumar, 2018-09-07)
  PiperOrigin-RevId: 212023248
* Check for training ops in the graph. (Raghuraman Krishnamoorthi, 2018-08-16)
  The rewriter only works for graphs with no training ops, as described in https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/quantize. This check detects graphs that contain training ops and raises a ValueError. PiperOrigin-RevId: 209103265
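  For context on what this check protects: the documented workflow applies the rewrite before any training ops exist in the graph. A minimal sketch of that ordering (TF 1.x contrib API; the toy model and the quant_delay value of 2000 steps are hypothetical):

      import tensorflow as tf  # TF 1.x, where tf.contrib.quantize is available

      g = tf.Graph()
      with g.as_default():
          # Hypothetical toy model standing in for a real network.
          inputs = tf.placeholder(tf.float32, [None, 4])
          labels = tf.placeholder(tf.float32, [None, 2])
          logits = tf.layers.dense(inputs, 2)
          loss = tf.losses.softmax_cross_entropy(labels, logits)

          # Rewrite for quantized training *before* adding training ops; once
          # optimizer/gradient ops are present, this call raises a ValueError.
          tf.contrib.quantize.create_training_graph(input_graph=g, quant_delay=2000)

          train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)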
* Automated rollback of commit 394db95965e1d745f08b4eeb550878ddc175af15 (Raghuraman Krishnamoorthi, 2018-08-16)
  PiperOrigin-RevId: 209082119
* Fixes issue where gradients are being quantized. (A. Unique TensorFlower, 2018-08-16)
  In addition, because gradients are quantized, quantization fails for any node that is a producer of gradients, since the _FollowedByFakeQuant method prevents any node followed by a fake quant from being properly quantized. PiperOrigin-RevId: 209010415
* The rewriter should not add nodes if followed by a pass-through op and then a FakeQuant. (Suharsh Sivakumar, 2018-08-14)
  PiperOrigin-RevId: 208767575
* Merge pull request #21086 from taehoonlee:fix_typos (TensorFlower Gardener, 2018-08-08)
  PiperOrigin-RevId: 207988541
* Fix error in freezing batch norm variables. (Raghuraman Krishnamoorthi, 2018-08-06)
  Previously, only batch mean statistics were frozen, leading to poor accuracy. This fix freezes both batch mean and variance statistics. PiperOrigin-RevId: 207625559
* Support quantization of layers with a conv/matmul followed by an activation, with no bias or batch norm between the conv/matmul and the activation. (Raghuraman Krishnamoorthi, 2018-08-06)
  PiperOrigin-RevId: 207621304
* Generalize quantization rewriter to handle separable convolutions. (Raghuraman Krishnamoorthi, 2018-08-01)
  Insert fake quant ops for weights in both depthwise and regular convolutions inside a separable convolution op. Also insert fake quant ops for activations produced by the first depthwise convolution. PiperOrigin-RevId: 207009650
* Fix typos (Taehoon Lee, 2018-07-24)
* Expose quant_delay arg in experimental_create_eval_graph function. (A. Unique TensorFlower, 2018-07-11)
  PiperOrigin-RevId: 204148516
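  A minimal usage sketch of the newly exposed argument (TF 1.x contrib API; the toy model and the 2000-step delay value are hypothetical):

      import tensorflow as tf  # TF 1.x, where tf.contrib.quantize is available

      eval_graph = tf.Graph()
      with eval_graph.as_default():
          inputs = tf.placeholder(tf.float32, [None, 4])
          logits = tf.layers.dense(inputs, 2)  # hypothetical toy model

          # Insert fake-quant ops into the eval graph; quant_delay mirrors the
          # delay used when the corresponding training graph was rewritten.
          tf.contrib.quantize.experimental_create_eval_graph(
              input_graph=eval_graph, quant_delay=2000)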
* Support the extra identity operation added when batch norm updates are forced. (Suharsh Sivakumar, 2018-07-06)
  PiperOrigin-RevId: 203515353
* Merge changes from github. (Yifei Feng, 2018-07-02)
  PiperOrigin-RevId: 203037623
* Support quantizing atrous convolutions. (Suharsh Sivakumar, 2018-06-27)
  Atrous convolutions are often DepthwiseConv2d operations preceded by SpaceToBatchND and followed by BatchToSpaceND operations. This change makes fold_batch_norms.py and quantize.py support handling this pattern. PiperOrigin-RevId: 202353838
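  For context, this is the pattern that dilated (atrous) depthwise convolutions produce in TF 1.x graphs; a small sketch of code that emits it (the shapes and the dilation rate are arbitrary examples):

      import tensorflow as tf  # TF 1.x

      inputs = tf.placeholder(tf.float32, [1, 32, 32, 16])
      depthwise_filter = tf.get_variable('depthwise_weights', [3, 3, 16, 1])

      # With rate > 1, TensorFlow wraps the DepthwiseConv2dNative op in
      # SpaceToBatchND / BatchToSpaceND -- the pattern that fold_batch_norms.py
      # and quantize.py learn to handle in this change.
      outputs = tf.nn.depthwise_conv2d(
          inputs, depthwise_filter, strides=[1, 1, 1, 1],
          padding='SAME', rate=[2, 2])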
* Merge changes from github. (Akshay Modi, 2018-06-18)
  PiperOrigin-RevId: 201110240
* Automated g4 rollback of changelist 201011811 (Akshay Modi, 2018-06-18)
  PiperOrigin-RevId: 201033171
* Merge changes from github. (Akshay Modi, 2018-06-18)
  PiperOrigin-RevId: 201011811
* The quantizer should match the patterns for partition variables. (Suharsh Sivakumar, 2018-05-18)
  PiperOrigin-RevId: 197189118
* Make sure that variables aren't created as partition variables, since only non-scalar partition variables are supported. (Suharsh Sivakumar, 2018-05-14)
  PiperOrigin-RevId: 196584749
* Introduce ordered_inputs option to graph_matcher to allow simpler matching of commutative operations. (Suharsh Sivakumar, 2018-05-11)
  #18919 PiperOrigin-RevId: 196276502
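  A rough sketch of what this option enables when matching a commutative op such as Add (this uses the internal graph_matcher module; the exact keyword placement on OpTypePattern is inferred from this commit's description, and the pattern itself is a made-up example):

      import tensorflow as tf  # TF 1.x
      from tensorflow.contrib.quantize.python import graph_matcher

      # Match Add(conv_output, anything) without spelling out both input
      # orders: ordered_inputs=False lets the commutative inputs match in
      # either position.
      conv_pattern = graph_matcher.OpTypePattern('Conv2D|DepthwiseConv2dNative')
      bypass_pattern = graph_matcher.OpTypePattern(
          'Add', inputs=[conv_pattern, '*'], ordered_inputs=False)

      matcher = graph_matcher.GraphMatcher(bypass_pattern)
      matches = matcher.match_graph(tf.get_default_graph())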
* Add operations that come before Identity operations should be quantized. (Suharsh Sivakumar, 2018-05-04)
  Fixes #19014. PiperOrigin-RevId: 195443326
* Use TensorFlow's size op to determine the number of elements instead of the static shape, which can sometimes be missing. (Suharsh Sivakumar, 2018-05-03)
  PiperOrigin-RevId: 195209826
* Handle variations in scoping of batch norms for correct unfused batch norm folding. (Raghuraman Krishnamoorthi, 2018-04-26)
  PiperOrigin-RevId: 194465704
* Improve handling of scopes in folding unfused batch norms. (Raghuraman Krishnamoorthi, 2018-04-24)
  This change allows folding to work for MobilenetV2 with unfused batch norms. PiperOrigin-RevId: 194116535
* Merge changes from github. (Yifei Feng, 2018-04-23)
  PiperOrigin-RevId: 194031845
* FakeQuant operations before ReLUs (which occur after bypass nodes) aren't needed. (Suharsh Sivakumar, 2018-04-23)
  PiperOrigin-RevId: 193999591
* Add `scope` parameter in experimental Quantization API. (Yu-Cheng Ling, 2018-04-09)
  This enables quantizing subgraphs of the entire graph. It's useful for networks like Inception, since we don't want to quantize the AuxLogits scope. PiperOrigin-RevId: 192150416
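  A minimal usage sketch of the new parameter (TF 1.x contrib API; the scope string 'my_model/tower' is a hypothetical name standing in for, e.g., the non-AuxLogits part of an Inception graph):

      import tensorflow as tf  # TF 1.x, where tf.contrib.quantize is available

      graph = tf.get_default_graph()  # assume the model has already been built

      # Only rewrite ops whose names fall under the given scope; everything
      # else (e.g. an AuxLogits tower) is left unquantized.
      tf.contrib.quantize.experimental_create_training_graph(
          input_graph=graph, scope='my_model/tower')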
* We no longer need updates_collections in quant ops, since we rely on the data dependency from Assign ops. (Suharsh Sivakumar, 2018-04-06)
  PiperOrigin-RevId: 191965466
* Update docs to include the most relevant paper. (Suharsh Sivakumar, 2018-04-06)
  PiperOrigin-RevId: 191959657
* Fix a crash in Quantize() when tf.contrib.framework.get_name_scope() == None. (A. Unique TensorFlower, 2018-03-30)
  PiperOrigin-RevId: 191068059
* Remove all_opensource_files. It's not needed any more. (Martin Wicke, 2018-03-28)
  PiperOrigin-RevId: 190878279
* Relax limitations on rerouting graph outputs. (Suharsh Sivakumar, 2018-03-28)
  - Allow multiple outputs of output_tensors in fold_batch_norms.
  - Allow duplicate consumers in quantize.
  - Also includes a quick fix for an issue in matching final layers that have batch norm.
  PiperOrigin-RevId: 190873003
* Improvements to quantization matching code: (Suharsh Sivakumar, 2018-03-22)
  - Guarantee that we do the largest matches first.
  - Don't allow matching layers multiple times.
  - Don't allow adding quantization ops to the same node multiple times.
  - Return a list of match results rather than yielding. This is much easier to reason about.
  - Only require ReadVariableOp when matching resource variables, since the input to a ReadVariableOp doesn't necessarily have to be a VarHandleOp.
  - Place quantization nodes for post-activation bypass ops in the same name scope as the post-activation bypass op, for better viewing.
  PiperOrigin-RevId: 190169622
* Merge changes from github. (Jacques Pienaar, 2018-03-21)
  PiperOrigin-RevId: 189945839
* Drop name_scope from operation names during quantization to avoid doubling it up. (A. Unique TensorFlower, 2018-03-20)
  PiperOrigin-RevId: 189737746
* Quantize bypasses after activations. (Suharsh Sivakumar, 2018-03-19)
  PiperOrigin-RevId: 189686219
* Disable freeze_bn_delay by default. (Suharsh Sivakumar, 2018-03-19)
  PiperOrigin-RevId: 189680481
* Fix naming BatchNorm_Fold//batch_norm_correction -> BatchNorm_Fold/batch_norm_correction. (A. Unique TensorFlower, 2018-03-16)
  PiperOrigin-RevId: 189358090
* Don't put quantization variables in EMA collection by default. (Suharsh Sivakumar, 2018-03-15)
  PiperOrigin-RevId: 189302082
* Automated g4 rollback of changelist 189231636 (A. Unique TensorFlower, 2018-03-15)
  PiperOrigin-RevId: 189258641
* Merge changes from github. (Jacques Pienaar, 2018-03-15)
  PiperOrigin-RevId: 189231636