path: root/tensorflow/compiler/xla/service/cpu/BUILD
author Benjamin Kramer <kramerb@google.com> 2018-07-25 11:08:39 -0700
committer TensorFlower Gardener <gardener@tensorflow.org> 2018-07-25 11:11:42 -0700
commita5285d999af961367437d72285f67c4a5a2878d4 (patch)
tree9a5710a0f171b4aa486fcabc2e65a9ecc4238077 /tensorflow/compiler/xla/service/cpu/BUILD
parent0bc512505957e3685305b6a850f222c6eed88c7d (diff)
[XLA:GPU] Use a fast approximation for tanh
Just reuse the CPU implementation, which in turn is derived from Eigen. It claims to be accurate to within ±1%, which is good enough for fast math. Refactor the CPU implementation into a common file and remove the VectorSupportLibrary dependency (it's not needed).

PiperOrigin-RevId: 206022260
Diffstat (limited to 'tensorflow/compiler/xla/service/cpu/BUILD')
-rw-r--r-- tensorflow/compiler/xla/service/cpu/BUILD | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/tensorflow/compiler/xla/service/cpu/BUILD b/tensorflow/compiler/xla/service/cpu/BUILD
index ace9f96cfb..71f7f985d0 100644
--- a/tensorflow/compiler/xla/service/cpu/BUILD
+++ b/tensorflow/compiler/xla/service/cpu/BUILD
@@ -444,6 +444,7 @@ cc_library(
deps = [
":vector_support_library",
"//tensorflow/compiler/xla/service/llvm_ir:llvm_util",
+ "//tensorflow/compiler/xla/service/llvm_ir:math_ops",
"//tensorflow/core:lib",
"@llvm//:core",
"@llvm//:transform_utils",