# Defining and Running Benchmarks

This guide contains instructions for defining and running a TensorFlow benchmark. These benchmarks store output in [TestResults](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/util/test_log.proto) format. If these benchmarks are added to the TensorFlow GitHub repo, we will run them daily with our continuous build and display a graph on our dashboard: https://benchmarks-dot-tensorflow-testing.appspot.com/.

[TOC]


## Defining a Benchmark

Defining a TensorFlow benchmark requires extending the `tf.test.Benchmark`
class and calling the `self.report_benchmark` method. Below, you'll find an example of benchmark code:

```python
import time

import tensorflow as tf


# Define a class that extends from tf.test.Benchmark.
class SampleBenchmark(tf.test.Benchmark):

  # Note: benchmark method name must start with `benchmark`.
  def benchmarkSum(self):
    with tf.Session() as sess:
      x = tf.constant(10)
      y = tf.constant(5)
      result = tf.add(x, y)

      iters = 100
      start_time = time.time()
      for _ in range(iters):
        sess.run(result)
      total_wall_time = time.time() - start_time

      # Call report_benchmark to report a metric value.
      self.report_benchmark(
          name="sum_wall_time",
          # This value should always be per iteration.
          wall_time=total_wall_time/iters,
          iters=iters)

if __name__ == "__main__":
  tf.test.main()
```
See the full example for [SampleBenchmark](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/benchmark/).


Key points to note in the example above:

* The benchmark class extends `tf.test.Benchmark`.
* Each benchmark method name must start with the `benchmark` prefix.
* Each benchmark method calls `report_benchmark` to report a metric value; a sketch of the other fields `report_benchmark` accepts follows this list.
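
In addition to `wall_time` and `iters`, `report_benchmark` also accepts a `throughput` value and an `extras` dictionary for arbitrary additional numbers. The method below is an illustrative sketch, not part of the repository sample: it is written as an extra method on the `SampleBenchmark` class above, and the metric values it reports are placeholders computed from the same loop.

```python
  # Illustrative sketch only (assumes the imports and class defined above).
  def benchmarkSumWithExtras(self):
    with tf.Session() as sess:
      result = tf.add(tf.constant(10), tf.constant(5))

      iters = 100
      start_time = time.time()
      for _ in range(iters):
        sess.run(result)
      total_wall_time = time.time() - start_time

      self.report_benchmark(
          name="sum_with_extras",
          iters=iters,
          # wall_time should always be the per-iteration value.
          wall_time=total_wall_time / iters,
          # throughput: iterations completed per second.
          throughput=iters / total_wall_time,
          # extras: a dict of additional values recorded alongside the run.
          extras={"total_wall_time": total_wall_time})
```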


## Running with Python

Use the `--benchmarks` flag to run the benchmark with Python. A [BenchmarkEntries](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/util/test_log.proto) proto will be printed.

```shell
python sample_benchmark.py --benchmarks=SampleBenchmark
```

Setting the flag to `--benchmarks=.` or `--benchmarks=all` works as well.
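
The flag value appears to be matched against benchmark names of the form `ClassName.methodName` (which is why `--benchmarks=.` matches everything). Assuming that naming scheme, you can also run a single benchmark method:

```shell
# Run only the benchmarkSum method of SampleBenchmark.
python sample_benchmark.py --benchmarks=SampleBenchmark.benchmarkSum
```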

(Please ensure that TensorFlow is installed so that the line `import tensorflow as tf` succeeds. For installation instructions, see [Installing TensorFlow](https://www.tensorflow.org/install/). This step is not necessary when running with Bazel.)


## Adding a `bazel` Target

We have a special target called `tf_py_logged_benchmark` for benchmarks defined in the TensorFlow GitHub repo. A `tf_py_logged_benchmark` wraps a regular `py_test` target. Running a `tf_py_logged_benchmark` prints a [TestResults](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/util/test_log.proto) proto. Defining a `tf_py_logged_benchmark` also lets us run the benchmark in the TensorFlow continuous build.

First, define a regular `py_test` target. See example below:

```build
py_test(
  name = "sample_benchmark",
  srcs = ["sample_benchmark.py"],
  srcs_version = "PY2AND3",
  deps = [
    "//tensorflow:tensorflow_py",
  ],
)
```

You can run benchmarks in a `py_test` target by passing the `--benchmarks` flag. The benchmark should just print out a [BenchmarkEntries](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/util/test_log.proto) proto.

```shell
bazel test :sample_benchmark --test_arg=--benchmarks=all
```
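
By default, Bazel captures the test's output in a log file under `bazel-testlogs` rather than printing it to your terminal. If you want to see the printed proto directly, one option is Bazel's standard `--test_output` flag (a general Bazel option, not specific to this benchmark):

```shell
# Also print the test's stdout, including the BenchmarkEntries proto,
# to the console once the test finishes.
bazel test :sample_benchmark --test_arg=--benchmarks=all --test_output=all
```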


Now, add the `tf_py_logged_benchmark` target. This target passes `--benchmarks=all` to the wrapped `py_test` target and provides a way to store the output for our TensorFlow continuous build. The `tf_py_logged_benchmark` rule is available in the TensorFlow repository.

```build
load("//tensorflow/tools/test:performance.bzl", "tf_py_logged_benchmark")

tf_py_logged_benchmark(
    name = "sample_logged_benchmark",
    target = "//tensorflow/examples/benchmark:sample_benchmark",
)
```

Use the following command to run the benchmark target:

```shell
bazel test :sample_logged_benchmark
```