Roll back some quantization changes that break some models.
END_PUBLIC
Automated rollback of commit d3f14ef70cdf113f9d330c1f7c638003429a1dc4. Revert #19894.
PiperOrigin-RevId: 215678307
Also remove add_arg_scope.
PiperOrigin-RevId: 215426187
PiperOrigin-RevId: 214848057
PiperOrigin-RevId: 214724610
PiperOrigin-RevId: 213917881
…to be of the form 'scope/<arbitrary_text>'. Relax the restriction to handle empty scopes. Enable this change to work for both fused and unfused batch norm layers.
PiperOrigin-RevId: 213883621
The original tensor was not replaced with the quantized one when it had
already been quantized.
PiperOrigin-RevId: 212997520
PiperOrigin-RevId: 212023248
…training ops as described in https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/quantize. This test helps check for graphs that have training ops and raises a ValueError.
PiperOrigin-RevId: 209103265
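For context on how these rewrites are driven, here is a minimal sketch (TensorFlow 1.x, tf.contrib.quantize; the model and parameter values are illustrative, not from this change) that keeps the training rewrite and the eval rewrite on separate graphs. Per this commit, handing a graph that still contains training ops to the eval rewrite raises a ValueError.

```python
import tensorflow as tf


def build_model(images):
    # Small conv net; the rewriter matches conv/matmul + activation patterns.
    net = tf.layers.conv2d(images, 32, 3, activation=tf.nn.relu)
    return tf.layers.dense(tf.layers.flatten(net), 10)


# Training graph: loss + optimizer, rewritten with FakeQuant and range-update ops.
train_graph = tf.Graph()
with train_graph.as_default():
    images = tf.placeholder(tf.float32, [8, 224, 224, 3])
    labels = tf.placeholder(tf.int32, [8])
    loss = tf.losses.sparse_softmax_cross_entropy(
        labels=labels, logits=build_model(images))
    tf.contrib.quantize.create_training_graph(input_graph=train_graph,
                                              quant_delay=2000)
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

# Eval graph: inference only. Per this commit, create_eval_graph raises a
# ValueError if the graph still contains training ops.
eval_graph = tf.Graph()
with eval_graph.as_default():
    images = tf.placeholder(tf.float32, [1, 224, 224, 3])
    logits = build_model(images)
    tf.contrib.quantize.create_eval_graph(input_graph=eval_graph)
```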
PiperOrigin-RevId: 209082119
…gradients are quantized, quantization fails for any node that is a producer of gradients, since the _FollowedByFakeQuant method prevents any node followed by a fake quant from being properly quantized.
PiperOrigin-RevId: 209010415
…FakeQuant.
PiperOrigin-RevId: 208767575
PiperOrigin-RevId: 207988541
…statistics were frozen, leading to poor accuracy. This fix freezes both batch mean and variance statistics.
PiperOrigin-RevId: 207625559
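For reference, a minimal sketch of driving batch norm freezing from the public API follows; it assumes the freeze_bn_delay argument on tf.contrib.quantize.experimental_create_training_graph, and the layer and step values are illustrative rather than taken from this change.

```python
import tensorflow as tf

slim = tf.contrib.slim

graph = tf.Graph()
with graph.as_default():
    images = tf.placeholder(tf.float32, [8, 224, 224, 3])
    # Conv + batch norm + ReLU6: the pattern folded by fold_batch_norms.py
    # before FakeQuant ops are inserted.
    net = slim.conv2d(images, 32, [3, 3],
                      normalizer_fn=slim.batch_norm,
                      activation_fn=tf.nn.relu6)

    # After freeze_bn_delay steps the folded layers stop using batch
    # statistics; per this commit, both the batch mean and the batch
    # variance are frozen at that point (assumed argument, TF 1.x API).
    tf.contrib.quantize.experimental_create_training_graph(
        input_graph=graph,
        quant_delay=0,
        freeze_bn_delay=2000000)
```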
…activation, with no bias or batch norm between the conv/matmul and the activation.
PiperOrigin-RevId: 207621304
…fake quant ops for weights in both the depthwise and regular convolutions inside a separable convolution op. Also insert fake quant ops for activations produced by the first depthwise convolution.
PiperOrigin-RevId: 207009650
PiperOrigin-RevId: 204148516
PiperOrigin-RevId: 203515353
PiperOrigin-RevId: 203037623
Atrous convolutions are often DepthwiseConv2d operations preceded by SpaceToBatchND and followed by BatchToSpaceND operations. This change makes fold_batch_norms.py and quantize.py handle this pattern.
PiperOrigin-RevId: 202353838
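As a concrete illustration of the pattern described above (not code from this change), the sketch below builds a dilated depthwise convolution with tf.contrib.slim; with rate > 1 it is emitted as SpaceToBatchND -> DepthwiseConv2dNative -> BatchToSpaceND, the op sequence the folding and quantization rewrites now look through. Layer sizes are arbitrary.

```python
import tensorflow as tf

slim = tf.contrib.slim

graph = tf.Graph()
with graph.as_default():
    images = tf.placeholder(tf.float32, [1, 65, 65, 32])
    # num_outputs=None gives a depthwise-only separable conv; rate=2 makes it
    # atrous, so TensorFlow emits SpaceToBatchND -> DepthwiseConv2dNative ->
    # BatchToSpaceND, the pattern this change teaches the rewrites to handle.
    net = slim.separable_conv2d(images, None, [3, 3],
                                depth_multiplier=1,
                                rate=2,
                                normalizer_fn=slim.batch_norm,
                                activation_fn=tf.nn.relu6)
    tf.contrib.quantize.create_training_graph(input_graph=graph)
```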
PiperOrigin-RevId: 201110240
PiperOrigin-RevId: 201033171
PiperOrigin-RevId: 201011811
PiperOrigin-RevId: 197189118
…non-scalar partition variables are supported.
PiperOrigin-RevId: 196584749
…of commutative operations.
#18919
PiperOrigin-RevId: 196276502
Fixes #19014
PiperOrigin-RevId: 195443326
…shape, which can sometimes be missing.
PiperOrigin-RevId: 195209826
…folding.
PiperOrigin-RevId: 194465704
…allows folding to work for MobilenetV2 with unfused batch norms.
PiperOrigin-RevId: 194116535
PiperOrigin-RevId: 194031845
PiperOrigin-RevId: 193999591
This enables quantizing a subgraph of the overall graph. It's useful for networks
like Inception, since we don't want to quantize the AuxLogits scope.
PiperOrigin-RevId: 192150416
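A sketch of scope-limited quantization follows; it assumes the scope argument on tf.contrib.quantize.experimental_create_training_graph, and the tower/aux scope names are illustrative rather than the real Inception layout.

```python
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    images = tf.placeholder(tf.float32, [8, 299, 299, 3])
    with tf.variable_scope('Tower'):
        net = tf.layers.conv2d(images, 32, 3, activation=tf.nn.relu)
        logits = tf.layers.dense(tf.layers.flatten(net), 1001)
    with tf.variable_scope('AuxLogits'):
        aux_logits = tf.layers.dense(tf.layers.flatten(net), 1001)

    # Only ops under the given scope are rewritten, so the auxiliary head
    # stays unquantized (assumed `scope` argument; illustrative scope names).
    tf.contrib.quantize.experimental_create_training_graph(
        input_graph=graph, scope='Tower')
```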
…dependency from Assign ops.
PiperOrigin-RevId: 191965466
PiperOrigin-RevId: 191959657
PiperOrigin-RevId: 191068059
PiperOrigin-RevId: 190878279
- Allow multiple outputs of output_tensors in fold_batch_norms.
- Allow duplicate consumers in quantize.
- Also includes a quick fix for an issue with matching final layers that have batch norm.
PiperOrigin-RevId: 190873003
- Guarantee that we do the largest matches first.
- Don't allow matching layers multiple times.
- Don't allow adding quantization ops to the same node multiple times.
- Return a list of match results rather than yielding. This is much easier to reason about.
- Only require ReadVariableOp when matching resource variables, since the inputs to ReadVariableOps don't necessarily have to be VarHandleOps.
- Place the quantization nodes for post-activation bypass ops in the same name scope as the bypass op, for better viewing.
PiperOrigin-RevId: 190169622
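One of the points above concerns resource variables; the sketch below shows a minimal layer backed by resource variables (weights read through ReadVariableOp) being run through the rewriter. Names and sizes are illustrative, not from this change.

```python
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    images = tf.placeholder(tf.float32, [8, 32, 32, 3])
    # use_resource=True backs the weights with resource variables, so the conv
    # reads them through ReadVariableOp, the op the matcher changes key on.
    with tf.variable_scope('model', use_resource=True):
        net = tf.layers.conv2d(images, 16, 3, activation=tf.nn.relu)

    tf.contrib.quantize.create_training_graph(input_graph=graph)
```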
PiperOrigin-RevId: 189945839
…it up.
PiperOrigin-RevId: 189737746
PiperOrigin-RevId: 189686219
PiperOrigin-RevId: 189680481
…BatchNorm_Fold/batch_norm_correction.
PiperOrigin-RevId: 189358090
PiperOrigin-RevId: 189302082
PiperOrigin-RevId: 189258641
PiperOrigin-RevId: 189231636