path: root/tools/Stats.h
author    Mike Klein <mtklein@google.com>  2014-07-16 19:59:32 -0400
committer Mike Klein <mtklein@google.com>  2014-07-16 19:59:32 -0400
commit    912947737a973421f4c58682b6171cb5ee00ad3a (patch)
tree      87a3caef4916a894403f8d02edc0d64a9a945728 /tools/Stats.h
parent    7ef21622b2ed6b9c5fc4c149cb62944fc191f054 (diff)
Use __rdtsc on Windows.
This seems to be ~100x higher resolution than QueryPerformanceCounter. AFAIK, all our Windows perf bots have constant_tsc, so we can be a bit more direct about using rdtsc directly: it'll always tick at the max CPU frequency.

Now, the question remains, what is the max CPU frequency to divide through by? It looks like QueryPerformanceFrequency actually gives the CPU frequency in kHz, suspiciously exactly what we need to divide through to get elapsed milliseconds. That was a freebie.

I did some before/after comparison on slow benchmarks. Timings look the same.

Going to land this without review tonight to see what happens on the bots; happy to review carefully tomorrow.

R=mtklein@google.com
TBR=bungeman
BUG=skia:

Review URL: https://codereview.chromium.org/394363003
Diffstat (limited to 'tools/Stats.h')
-rw-r--r--  tools/Stats.h | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/tools/Stats.h b/tools/Stats.h
index 4fddc9bc18..8487a9497d 100644
--- a/tools/Stats.h
+++ b/tools/Stats.h
@@ -1,8 +1,6 @@
#ifndef Stats_DEFINED
#define Stats_DEFINED
-#include <math.h>
-
#include "SkString.h"
#include "SkTSort.h"
@@ -50,7 +48,7 @@ struct Stats {
s -= min;
s /= (max - min);
s *= (SK_ARRAY_COUNT(kBars) - 1);
- const size_t bar = (size_t)round(s);
+ const size_t bar = (size_t)(s + 0.5);
SK_ALWAYSBREAK(bar < SK_ARRAY_COUNT(kBars));
plot.append(kBars[bar]);
}