author     Andrew Monshizadeh <amonshiz@fb.com>  2018-01-10 09:55:05 -0500
committer  Skia Commit-Bot <skia-commit-bot@chromium.org>  2018-01-11 19:47:58 +0000
commit     9d6681cc70095b5be6523873a9de54b02ec65086 (patch)
tree       bd92d441e53c164e9bfbfb94ee3ade8ba89902fd /site/dev/testing
parent     d75fdc64be5cbbc11660ea4e6e9be4b84e407c79 (diff)
Changes to site documentation
Mostly just formatting fixes with a few grammatical changes. Two notable
changes:

- Removed references to SkGLCanvas from Tips & FAQ and replaced them with
  references to `SkDevice` and `SkSurface`.
- Deleted the deprecated "Quick Start Guides" folder.

Docs-Preview: https://skia.org/?cl=92361
Bug: skia:
Change-Id: Ief790b1c2bae8fe0e39aa8d66c79f80560d18c9e
Reviewed-on: https://skia-review.googlesource.com/92361
Reviewed-by: Heather Miller <hcm@google.com>
Reviewed-by: Joe Gregorio <jcgregorio@google.com>
Commit-Queue: Joe Gregorio <jcgregorio@google.com>
Diffstat (limited to 'site/dev/testing')
-rw-r--r--  site/dev/testing/automated_testing.md  14
-rw-r--r--  site/dev/testing/fonts.md                4
-rw-r--r--  site/dev/testing/ios.md                  3
-rw-r--r--  site/dev/testing/skiaperf.md             2
-rw-r--r--  site/dev/testing/testing.md             39
5 files changed, 33 insertions, 29 deletions
diff --git a/site/dev/testing/automated_testing.md b/site/dev/testing/automated_testing.md
index 8fb64cc8ad..ecdd484e73 100644
--- a/site/dev/testing/automated_testing.md
+++ b/site/dev/testing/automated_testing.md
@@ -20,18 +20,18 @@ may automatically retry tasks within its set limits. Jobs are not retried.
Multiple jobs may share the same task, for example, tests on two different
Android devices which use the same compiled code.
-Each Skia repository has an infra/bots/tasks.json file which defines the jobs
+Each Skia repository has an `infra/bots/tasks.json` file which defines the jobs
and tasks for the repo. Most jobs will run at every commit, but it is possible
to specify nightly and weekly jobs as well. For convenience, most repos also
-have a gen_tasks.go which will generate tasks.json. You will need to
+have a `gen_tasks.go` which will generate `tasks.json`. You will need to
[install Go](https://golang.org/doc/install). From the repository root:
$ go get -u go.skia.org/infra/...
$ go run infra/bots/gen_tasks.go
-It is necessary to run gen_tasks.go every time it is changed or every time an
+It is necessary to run `gen_tasks.go` every time it is changed or every time an
[asset](https://skia.googlesource.com/skia/+/master/infra/bots/assets/README.md)
-has changed. There is also a test mode which simply verifies that the tasks.json
+has changed. There is also a test mode which simply verifies that the `tasks.json`
file is up to date:
$ go run infra/bots/gen_tasks.go --test
@@ -44,12 +44,12 @@ Try Jobs
Skia's trybots allow testing and verification of changes before they land in the
repo. You need to have permission to trigger try jobs; if you need permission,
ask a committer. After uploading your CL to [Gerrit](https://skia-review.googlesource.com/),
-you may trigger a try job for any job listed in tasks.json, either via the
-Gerrit UI, using "git cl try", eg.
+you may trigger a try job for any job listed in `tasks.json`, either via the
+Gerrit UI, using `git cl try`, eg.
git cl try -B skia.primary -b Some-Tryjob-Name
-or using bin/try, a small wrapper for "git cl try" which helps to choose try jobs.
+or using `bin/try`, a small wrapper for `git cl try` which helps to choose try jobs.
From a Skia checkout:
bin/try --list
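
For orientation, a hedged sketch of the whole flow described in this hunk, run from a Skia checkout. The job name is a placeholder, and `git cl upload` is the usual depot_tools command for pushing a CL to Gerrit; it is not shown on this page.

$ git cl upload                                   # push the change to Gerrit for review
$ bin/try --list                                  # browse the jobs defined in tasks.json
$ git cl try -B skia.primary -b Some-Tryjob-Name  # trigger one of them on the uploaded CL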
diff --git a/site/dev/testing/fonts.md b/site/dev/testing/fonts.md
index 15e7727041..cb2a9e57ee 100644
--- a/site/dev/testing/fonts.md
+++ b/site/dev/testing/fonts.md
@@ -29,6 +29,6 @@ SkTypeface* typeface = sk_tool_utils::create_portable_typeface(const char* name,
SkFontStyle style);
~~~~
-Eventually, both 'set_portable_typeface()' and 'create_portable_typeface()' will be
-removed. Instead, a test-wide 'SkFontMgr' will be selected to choose portable
+Eventually, both `set_portable_typeface()` and `create_portable_typeface()` will be
+removed. Instead, a test-wide `SkFontMgr` will be selected to choose portable
fonts or resource fonts.
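
As a hedged illustration of the helpers referenced in this hunk: the `sk_tool_utils` function name comes from the snippet above, but the surrounding function, the font name, the text, and the coordinates are placeholders invented for the example. Since the page says these helpers will eventually give way to a test-wide `SkFontMgr`, treat this only as a sketch of the current pattern.

~~~~
#include "SkCanvas.h"
#include "SkPaint.h"
#include "sk_tool_utils.h"

// Sketch of how a GM or unit test might use the portable-typeface helper so
// that text output does not depend on the fonts installed on the test machine.
static void draw_portable_label(SkCanvas* canvas) {
    SkPaint paint;
    // Replace the paint's typeface with a portable, canned-glyph typeface.
    sk_tool_utils::set_portable_typeface(&paint, "sans-serif", SkFontStyle());
    paint.setTextSize(SkIntToScalar(16));
    canvas->drawString("Hamburgefons", 10, 20, paint);
}
~~~~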
diff --git a/site/dev/testing/ios.md b/site/dev/testing/ios.md
index e6535e17ea..5ae5368fc6 100644
--- a/site/dev/testing/ios.md
+++ b/site/dev/testing/ios.md
@@ -24,6 +24,7 @@ Follow these steps to install them:
(Note: All these are part of the *libimobiledevice* project but packaged/developed
under different names. The *cask* extension to *brew* is necessary to install
*osxfuse* and *ifuse*, which allows to mount the application directory on an iOS device).
+
```
brew install libimobiledevice
brew install ideviceinstaller
@@ -31,7 +32,9 @@ brew install caskroom/cask/brew-cask
brew install Caskroom/cask/osxfuse
brew install ifuse
```
+
* Install node.js and ios-deploy
+
```
$ brew update
$ brew install node
diff --git a/site/dev/testing/skiaperf.md b/site/dev/testing/skiaperf.md
index 921df2e987..005b30b044 100644
--- a/site/dev/testing/skiaperf.md
+++ b/site/dev/testing/skiaperf.md
@@ -8,7 +8,7 @@ infrastructure.
<img src=Perf.png style="margin-left:30px" align="left" width="800"/> <br clear="left">
Skia tests across a large number of platforms and configurations, and each
-commit to Skia generates 240,000 individual values are sent to Perf,
+commit to Skia generates 240,000 individual values that are sent to Perf,
consisting mostly of performance benchmark results, but also including memory
and coverage data.
diff --git a/site/dev/testing/testing.md b/site/dev/testing/testing.md
index d58abfb400..6129550061 100644
--- a/site/dev/testing/testing.md
+++ b/site/dev/testing/testing.md
@@ -82,30 +82,31 @@ stand alone. A couple thousand tasks is pretty normal. Let's look at the
status line for one of those tasks.
~~~
( 25MB 1857) 1.36ms 8888 image mandrill_132x132_12x12.astc-5-subsets
+ [1] [2] [3] [4]
~~~
This status line tells us several things.
-First, it tells us that at the time we wrote the status line, the maximum
-amount of memory DM had ever used was 25MB. Note this is a high water mark,
-not the current memory usage. This is mostly useful for us to track on our
-buildbots, some of which run perilously close to the system memory limit.
+ 1. The maximum amount of memory DM had ever used was 25MB. Note this is a
+ high water mark, not the current memory usage. This is mostly useful for us
+ to track on our buildbots, some of which run perilously close to the system
+ memory limit.
-Next, the status line tells us that there are 1857 unfinished tasks, either
-currently running or waiting to run. We generally run one task per hardware
-thread available, so on a typical laptop there are probably 4 or 8 running at
-once. Sometimes the counts appear to show up out of order, particularly at DM
-startup; it's harmless, and doesn't affect the correctness of the run.
+ 2. The number of unfinished tasks, in this example there are 1857, either
+ currently running or waiting to run. We generally run one task per hardware
+ thread available, so on a typical laptop there are probably 4 or 8 running at
+ once. Sometimes the counts appear to show up out of order, particularly at DM
+ startup; it's harmless, and doesn't affect the correctness of the run.
-Next, we see this task took 1.36 milliseconds to run. Generally, the precision
-of this timer is around 1 microsecond. The time is purely there for
-informational purposes, to make it easier for us to find slow tests.
+ 3. Next, we see this task took 1.36 milliseconds to run. Generally, the
+ precision of this timer is around 1 microsecond. The time is purely there for
+ informational purposes, to make it easier for us to find slow tests.
-Finally we see the configuration and name of the test we ran. We drew the test
-"mandrill_132x132_12x12.astc-5-subsets", which is an "image" source, into an
-"8888" sink.
+ 4. The configuration and name of the test we ran. We drew the test
+ "mandrill_132x132_12x12.astc-5-subsets", which is an "image" source, into an
+ "8888" sink.
-When DM finishes running, you should find a directory with file named dm.json,
+When DM finishes running, you should find a directory with file named `dm.json`,
and some nested directories filled with lots of images.
~~~
$ ls dm_output
@@ -127,9 +128,9 @@ dm_output/8888/gm/bezier_quad_effects.png
The directories are nested first by sink type (`--config`), then by source type (`--src`).
The image from the task we just looked at, "8888 image mandrill_132x132_12x12.astc-5-subsets",
-can be found at dm_output/8888/image/mandrill_132x132_12x12.astc-5-subsets.png.
+can be found at `dm_output/8888/image/mandrill_132x132_12x12.astc-5-subsets.png`.
-dm.json is used by our automated testing system, so you can ignore it if you
+`dm.json` is used by our automated testing system, so you can ignore it if you
like. It contains a listing of each test run and a checksum of the image
generated for that run.
@@ -142,7 +143,7 @@ the same exact .png, but have their checksums differ.
Unit tests don't generally output anything but a status update when they pass.
If a test fails, DM will print out its assertion failures, both at the time
they happen and then again all together after everything is done running.
-These failures are also included in the dm.json file.
+These failures are also included in the `dm.json` file.
DM has a simple facility to compare against the results of a previous run: