path: root/benchmarks/benchmarks.proto
author Josh Haberman <jhaberman@gmail.com> 2016-04-27 18:22:22 -0700
committer Josh Haberman <jhaberman@gmail.com> 2016-04-27 18:22:22 -0700
commit2e83110230b7e91b07835e9c718a1d6fbcb8b617 (patch)
tree85737c7424dab1c232d95665c584d1a69fd2f992 /benchmarks/benchmarks.proto
parentf53f911793c3024976f80211e0c976f5cc51f88d (diff)
Added framework for generating/consuming benchmarking data sets.
This takes the code that was sitting in benchmarks/ already and makes it easier for language-specific benchmarks to consume. Future PRs will enhance this so that the language-specific benchmarks can report metrics back that will be tracked over time in PerfKit.
Diffstat (limited to 'benchmarks/benchmarks.proto')
-rw-r--r-- benchmarks/benchmarks.proto | 102
1 file changed, 102 insertions(+), 0 deletions(-)
diff --git a/benchmarks/benchmarks.proto b/benchmarks/benchmarks.proto
new file mode 100644
index 00000000..a891eb9e
--- /dev/null
+++ b/benchmarks/benchmarks.proto
@@ -0,0 +1,102 @@
+// Protocol Buffers - Google's data interchange format
+// Copyright 2008 Google Inc. All rights reserved.
+// https://developers.google.com/protocol-buffers/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+syntax = "proto3";
+package benchmarks;
+option java_package = "com.google.protobuf.benchmarks";
+
+message BenchmarkDataset {
+ // Name of the benchmark dataset. This should be unique across all datasets.
+ // Should only contain word characters: [a-zA-Z0-9_]
+ string name = 1;
+
+ // Fully-qualified name of the protobuf message for this dataset.
+ // It will be one of the messages defined in benchmark_messages.proto.
+ // Implementations that do not support reflection can implement this with
+ // an explicit "if/else" chain that lists every possible message defined
+ // in this file.
+ string message_name = 2;
+
+ // The payload(s) for this dataset. They should be parsed or serialized
+ // in sequence, in a loop, i.e.:
+ //
+ // while (!benchmarkDone) { // Benchmark runner decides when to exit.
+ // for (i = 0; i < benchmark.payload.length; i++) {
+ // parse(benchmark.payload[i])
+ // }
+ // }
+ //
+ // This is intended to let datasets include a variety of data to provide
+ // potentially more realistic results than just parsing the same message
+ // over and over. A single message parsed repeatedly could yield unusually
+ // good branch prediction performance.
+ repeated bytes payload = 3;
+}
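The parse loop described in the comment above can be sketched in plain Python. This is an illustrative stand-in, not code from the repo: `payloads` models the repeated `payload` field as a list of bytes, and `parse` is a placeholder for a real protobuf parse call.

```python
# Hedged sketch of the benchmark loop from the BenchmarkDataset comment.
# "parse" is any callable accepting a bytes payload (hypothetical here).
def run_parse_benchmark(payloads, parse, iterations=3):
    """Parse every payload in sequence, looping over the whole set.

    A real benchmark runner would decide when to exit based on elapsed
    time; here a fixed iteration budget stands in for that decision.
    """
    parse_calls = 0
    for _ in range(iterations):      # outer "while (!benchmarkDone)" loop
        for payload in payloads:     # inner pass over every payload
            parse(payload)
            parse_calls += 1
    return parse_calls

# Cycling through a varied set of payloads, rather than re-parsing one
# message, avoids unusually good branch prediction skewing the results.
total = run_parse_benchmark([b"\x08\x01", b"\x08\x02"], parse=len)
# total == 6 here: 3 iterations over 2 payloads
```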
+
+// A benchmark can write out metrics that we will then upload to our metrics
+// database for tracking over time.
+message Metric {
+ // A unique ID for these results. Used for de-duping.
+ string guid = 1;
+
+ // The tags specify exactly what benchmark was run against the dataset.
+ // The specific benchmark suite can decide what these mean, but here are
+ // some common tags that have a predefined meaning:
+ //
+ // - "dataset": for tests that pertain to a specific dataset.
+ //
+ // For example:
+ //
+ // # Tests parsing from binary proto string using arenas.
+ // tags={
+ // dataset: "testalltypes",
+ // op: "parse",
+ // format: "binaryproto",
+ //   input: "string",
+ // arena: "true"
+ // }
+ //
+ // # Tests serializing to JSON string.
+ // tags={
+ // dataset: "testalltypes",
+ // op: "serialize",
+ // format: "json",
+ // input: "string"
+ // }
+ map<string, string> labels = 2;
+
+ // Unit of measurement for the metric:
+ // - a speed test might be "mb_per_second" or "ops_per_second"
+ // - a size test might be "kb".
+ string unit = 3;
+
+ // Metric value.
+ double value = 4;
+}
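A benchmark implementation would fill in one `Metric` per result before uploading. The sketch below uses a plain dict in place of the generated protobuf class (the field names mirror the message; `uuid4` is one reasonable choice for the de-duping guid, not something the proto mandates).

```python
# Hedged sketch: build a Metric-shaped record as described above.
import uuid

def make_metric(labels, unit, value):
    """Assemble one benchmark result with the four Metric fields."""
    return {
        "guid": str(uuid.uuid4()),   # unique ID used for de-duping
        "labels": dict(labels),      # tags identifying what was run
        "unit": unit,                # e.g. "mb_per_second" or "kb"
        "value": float(value),
    }

# A parse-speed result against the "testalltypes" dataset.
m = make_metric(
    {"dataset": "testalltypes", "op": "parse", "format": "binaryproto"},
    "ops_per_second",
    125000,
)
```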