author    Hal Canary <halcanary@google.com>  2016-11-11 11:40:04 -0500
committer Skia Commit-Bot <skia-commit-bot@chromium.org>  2016-11-11 16:59:14 +0000
commit    64dded3d95be625b65120a91ed29dd58112489d3 (patch)
tree      10cb014a0f6790ddb548504585fd06fa8437cccd /site/dev/testing
parent    58b130681db4432c4937c2cb1b2529de628c6b19 (diff)
Documentation: more gn, less gyp
GOLD_TRYBOT_URL= https://gold.skia.org/search?issue=4698
NOTRY=true
DOCS_PREVIEW= https://skia.org/?cl=4698
Change-Id: I03100542752a769060a7f0c9671cc44acbea2e48
Reviewed-on: https://skia-review.googlesource.com/4698
Reviewed-by: Mike Klein <mtklein@chromium.org>
Commit-Queue: Hal Canary <halcanary@google.com>
Diffstat (limited to 'site/dev/testing')
-rw-r--r--  site/dev/testing/testing.md  72
-rw-r--r--  site/dev/testing/tests.md    37
2 files changed, 62 insertions, 47 deletions
diff --git a/site/dev/testing/testing.md b/site/dev/testing/testing.md
index e577a51231..29f7a4dae0 100644
--- a/site/dev/testing/testing.md
+++ b/site/dev/testing/testing.md
@@ -4,11 +4,12 @@ Correctness Testing
Skia correctness testing is primarily served by a tool named DM.
This is a quickstart to building and running DM.
-~~~
-$ python bin/sync-and-gyp
-$ ninja -C out/Debug dm
-$ out/Debug/dm -v -w dm_output
-~~~
+<!--?prettify lang=sh?-->
+
+ python bin/sync
+ gn gen out/Debug
+ ninja -C out/Debug dm
+ out/Debug/dm -v -w dm_output
When you run this, you may notice your CPU peg to 100% for a while, then taper
off to 1 or 2 active cores as the run finishes. This is intentional. DM is
@@ -145,46 +146,47 @@ they happen and then again all together after everything is done running.
These failures are also included in the dm.json file.
DM has a simple facility to compare against the results of a previous run:
-~~~
-$ python bin/sync-and-gyp
-$ ninja -C out/Debug dm
-$ out/Debug/dm -w good
- # do some work
+<!--?prettify lang=sh?-->
+
+ ninja -C out/Debug dm
+ out/Debug/dm -w good
+
+ # do some work
+
+ ninja -C out/Debug dm
+ out/Debug/dm -r good -w bad
-$ python bin/sync-and-gyp
-$ ninja -C out/Debug dm
-$ out/Debug/dm -r good -w bad
-~~~
When using `-r`, DM will display a failure for any test that didn't produce the
same image as the `good` run.
For anything fancier, I suggest using skdiff:
-~~~
-$ python bin/sync-and-gyp
-$ ninja -C out/Debug dm
-$ out/Debug/dm -w good
- # do some work
+<!--?prettify lang=sh?-->
-$ python bin/sync-and-gyp
-$ ninja -C out/Debug dm
-$ out/Debug/dm -w bad
+ ninja -C out/Debug dm
+ out/Debug/dm -w good
-$ ninja -C out/Debug skdiff
-$ mkdir diff
-$ out/Debug/skdiff good bad diff
+ # do some work
- # open diff/index.html in your web browser
-~~~
+ ninja -C out/Debug dm
+ out/Debug/dm -w bad
+
+ ninja -C out/Debug skdiff
+ mkdir diff
+ out/Debug/skdiff good bad diff
+
+ # open diff/index.html in your web browser
That's the basics of DM. DM supports many other modes and flags. Here are a
few examples you might find handy.
-~~~
-$ out/Debug/dm --help # Print all flags, their defaults, and a brief explanation of each.
-$ out/Debug/dm --src tests # Run only unit tests.
-$ out/Debug/dm --nocpu # Test only GPU-backed work.
-$ out/Debug/dm --nogpu # Test only CPU-backed work.
-$ out/Debug/dm --match blur # Run only work with "blur" in its name.
-$ out/Debug/dm --dryRun # Don't really do anything, just print out what we'd do.
-~~~
+
+<!--?prettify lang=sh?-->
+
+ out/Debug/dm --help # Print all flags, their defaults, and a brief explanation of each.
+ out/Debug/dm --src tests # Run only unit tests.
+ out/Debug/dm --nocpu # Test only GPU-backed work.
+ out/Debug/dm --nogpu # Test only CPU-backed work.
+ out/Debug/dm --match blur # Run only work with "blur" in its name.
+ out/Debug/dm --dryRun # Don't really do anything, just print out what we'd do.
+
diff --git a/site/dev/testing/tests.md b/site/dev/testing/tests.md
index 3b216e88af..701c2c4dfd 100644
--- a/site/dev/testing/tests.md
+++ b/site/dev/testing/tests.md
@@ -5,6 +5,14 @@ Writing Skia Tests
+ [Rendering Tests](#gm)
+ [Benchmark Tests](#bench)
+We assume you have already synced Skia's dependencies and set up Skia's build system.
+
+<!--?prettify lang=sh?-->
+
+ python bin/sync
+ gn gen out/Debug
+ gn gen out/Release --args='is_debug=false'
+
<span id="test"></span>
Writing a Unit Test
@@ -29,9 +37,12 @@ Writing a Unit Test
REPORTER_ASSERT(reporter, lifeIsGood);
}
-2. Recompile and run test:
+2. Add `NewUnitTest.cpp` to `gn/tests.gni`.
+
+3. Recompile and run test:
+
+ <!--?prettify lang=sh?-->
- python bin/sync-and-gyp
ninja -C out/Debug dm
out/Debug/dm --match NewUnitTest
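
The hunk above shows only the tail of the unit-test example. For orientation, a minimal sketch of a complete DM unit test is below; it assumes the `DEF_TEST` and `REPORTER_ASSERT` macros from Skia's `tests/Test.h` harness and reuses the hypothetical `NewUnitTest` name from the doc.

    #include "Test.h"  // Skia's unit-test harness (tests/Test.h)

    // DEF_TEST registers this function with DM under the name "NewUnitTest",
    // so `out/Debug/dm --match NewUnitTest` will find and run it.
    DEF_TEST(NewUnitTest, reporter) {
        bool lifeIsGood = (1 + 1 == 2);
        REPORTER_ASSERT(reporter, lifeIsGood);
    }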
@@ -58,22 +69,22 @@ Writing a Rendering Test
canvas->drawLine(16, 16, 112, 112, p);
}
-2. Recompile and run test:
+2. Add `newgmtest.cpp` to `gn/gm.gni`.
+
+3. Recompile and run test:
+
+ <!--?prettify lang=sh?-->
- python bin/sync-and-gyp
ninja -C out/Debug dm
out/Debug/dm --match newgmtest
-3. Run the GM inside SampleApp:
+4. Run the GM inside SampleApp:
+
+ <!--?prettify lang=sh?-->
- python bin/sync-and-gyp
ninja -C out/Debug SampleApp
out/Debug/SampleApp --slide GM:newgmtest
- On MacOS, try this:
-
- out/Debug/SampleApp.app/Contents/MacOS/SampleApp --slide GM:newgmtest
-
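
For context beyond the diff, a minimal rendering test of the kind `newgmtest` refers to is sketched below, assuming Skia's `DEF_SIMPLE_GM` macro from `gm/gm.h`; the `drawLine` call matches the fragment shown in the hunk above, and the size and colors are illustrative.

    #include "gm.h"
    #include "SkCanvas.h"
    #include "SkPaint.h"

    // Registers a 128x128 GM named "newgmtest" that DM and SampleApp can run.
    DEF_SIMPLE_GM(newgmtest, canvas, 128, 128) {
        canvas->clear(SK_ColorWHITE);
        SkPaint p;
        p.setStrokeWidth(2);
        canvas->drawLine(16, 16, 112, 112, p);
    }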
<span id="bench"></span>
Writing a Benchmark Test
@@ -108,9 +119,11 @@ Writing a Benchmark Test
} // namespace
DEF_BENCH(return new FooBench;)
+2. Add `FooBench.cpp` to `gn/bench.gni`.
+
+3. Recompile and run nanobench:
-2. Recompile and run nanobench:
+ <!--?prettify lang=sh?-->
- python bin/sync-and-gyp
ninja -C out/Release nanobench
out/Release/nanobench --match Foo
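
The hunk shows only the closing lines of the benchmark example. As a hedged sketch of the surrounding `FooBench` class it refers to, assuming Skia's `Benchmark` base class from `bench/Benchmark.h`:

    #include "Benchmark.h"
    #include "SkCanvas.h"

    namespace {

    class FooBench : public Benchmark {
    public:
        FooBench() {}

    protected:
        // The name nanobench matches against (`--match Foo`).
        const char* onGetName() override { return "Foo"; }

        // The body of the loop is what nanobench times.
        void onDraw(int loops, SkCanvas* canvas) override {
            for (int i = 0; i < loops; i++) {
                // Code to benchmark goes here.
            }
        }
    };

    }  // namespace
    DEF_BENCH(return new FooBench;)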