# Overview of performance test suite, with steps for manual runs:

For design of the tests, see
https://grpc.io/docs/guides/benchmarking.html.

## Pre-reqs for running these manually:
In general, the benchmark worker and driver build scripts expect
[linux_performance_worker_init.sh](../../gce/linux_performance_worker_init.sh) to have been run already.

### To run benchmarks locally:
* From the grpc repo root, start the
[run_performance_tests.py](../run_performance_tests.py) runner script.

### On remote machines, to start the driver and workers manually:
The [run_performance_tests.py](../run_performance_tests.py) top-level runner script can also
be used with remote machines, but for tasks such as profiling the server,
it can be useful to start the workers manually.

1. You'll need a "driver" and separate "worker" machines.
For example, you might use one GCE "driver" machine and 3 other
GCE "worker" machines that are in the same zone.

2. Connect to each worker machine and start up a benchmark worker with a "driver_port".
  * For example, to start the grpc-go benchmark worker:
  [grpc-go worker main.go](https://github.com/grpc/grpc-go/blob/master/benchmark/worker/main.go) --driver_port <driver_port>

#### Commands to start workers in different languages:
 * Note that these commands are what the top-level
   [run_performance_tests.py](../run_performance_tests.py) script uses to
   build and run different workers through the
   [build_performance.sh](./build_performance.sh) script and the "run worker"
   scripts (such as [run_worker_java.sh](./run_worker_java.sh)).

##### Running benchmark workers for C-core wrapped languages (C++, Python, C#, Node, Ruby):
   * These are simpler, since they all live in the main grpc repo.

```
$ cd <grpc_repo_root>
$ tools/run_tests/performance/build_performance.sh
$ tools/run_tests/performance/run_worker_<language>.sh
```

   * Note that there is one "run_worker" script per language, e.g.,
     [run_worker_csharp.sh](./run_worker_csharp.sh) for C#.

##### Running benchmark workers for gRPC-Java:
   * You'll need the [grpc-java](https://github.com/grpc/grpc-java) repo.

```
$ cd <grpc-java-repo>
$ ./gradlew -PskipCodegen=true :grpc-benchmarks:installDist
$ benchmarks/build/install/grpc-benchmarks/bin/benchmark_worker --driver_port <driver_port>
```

##### Running benchmark workers for gRPC-Go:
   * You'll need the [grpc-go repo](https://github.com/grpc/grpc-go).

```
$ cd <grpc-go-repo>/benchmark/worker && go install
$ # if profiling, it might be helpful to turn off inlining by building with "-gcflags=-l"
$ $GOPATH/bin/worker --driver_port <driver_port>
```

#### Build the driver:
* Connect to the driver machine (if using a remote driver) and from the grpc repo root:
```
$ tools/run_tests/performance/build_performance.sh
```

#### Run the driver:
1. Get the 'scenario_json' relevant for the scenario to run. Note that "scenario
  json" configs are generated from [scenario_config.py](./scenario_config.py).
  The [driver](../../../test/cpp/qps/qps_json_driver.cc) takes a list of these configs as a json string of the form: `{"scenarios": <json_list_of_scenarios>}`
  in its `--scenarios_json` command argument.
  One quick way to get a valid json string to pass to the driver is to run
  [run_performance_tests.py](../run_performance_tests.py) locally and copy the logged scenario json command arg.
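
   As an illustrative sketch, the string passed to the driver has the
   following shape (the single-field scenario object below is a hypothetical
   placeholder; real scenario objects come from `scenario_config.py`, and the
   envelope key is assumed to be `"scenarios"`):

```shell
# Build a driver argument of the expected shape; the scenario body here is a
# placeholder, not a real generated config.
SCENARIOS_JSON='{"scenarios": [{"name": "example_scenario"}]}'
# Sanity-check that the string is valid JSON before passing it to the driver:
echo "$SCENARIOS_JSON" | python3 -m json.tool
```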

2. From the grpc repo root:

* Set `QPS_WORKERS` environment variable to a comma separated list of worker
machines. Note that the driver will start the "benchmark server" on the first
entry in the list, and the rest will be told to run as clients against the
benchmark server.

Example of running the driver, benchmarking a go server:
```
$ export QPS_WORKERS=<host1>:10000,<host2>:10000,<host3>:10000
$ bins/opt/qps_json_driver --scenarios_json='<scenario_json_scenario_config_string>'
```

### Example profiling commands

While running the benchmark, a profiler can be attached to the server.

Example to count syscalls in grpc-go server during a benchmark:
* Connect to server machine and run:
```
$ netstat -tulpn | grep <driver_port> # to get pid of worker
$ perf stat -p <worker_pid> -e syscalls:sys_enter_write # stop after test complete
```

Example memory profile of grpc-go server, with `go tool pprof`:
* After a run is done on the server, see its alloc profile with:
```
$ go tool pprof --text --alloc_space http://localhost:<pprof_port>/debug/pprof/heap
```

### Configuration environment variables:

* QPS_WORKER_CHANNEL_CONNECT_TIMEOUT

  Consuming process: qps_worker

  Type: integer (number of seconds)

  This can be used to configure the amount of time that benchmark
  clients wait for channels to the benchmark server to become ready.
  This is useful in certain benchmark environments in which the
  server can take a long time to become ready. Note: if setting
  this to a high value, then the scenario config under test should
  probably also have a large "warmup_seconds".
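
  For example (a sketch; the timeout value here is arbitrary):

```shell
# Allow benchmark clients up to 120 seconds (the value is a number of
# seconds) for channels to the benchmark server to become ready.
export QPS_WORKER_CHANNEL_CONNECT_TIMEOUT=120
echo "$QPS_WORKER_CHANNEL_CONNECT_TIMEOUT"
```

  The worker is then started as usual, e.g. with one of the
  `run_worker_<language>.sh` scripts.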

* QPS_WORKERS

  Consuming process: qps_json_driver

  Type: comma separated list of host:port

  Set this to a comma separated list of QPS worker processes/machines.
  Each scenario in a scenario config specifies a certain number
  of servers, `num_servers`, and the driver will start
  "benchmark servers" on the first `num_servers` `host:port` pairs in
  the comma separated list. The rest will be told to run as clients
  against the benchmark servers.
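
  For example, with three hypothetical workers (the addresses below are made
  up for illustration) and a scenario whose `num_servers` is 1, the first
  entry runs the benchmark server and the other two run as clients:

```shell
# Hypothetical worker addresses; the driver reads this variable and assigns
# server/client roles in list order.
export QPS_WORKERS=10.240.0.10:10000,10.240.0.11:10000,10.240.0.12:10000
echo "$QPS_WORKERS"
```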