mirror of
https://github.com/flutter/flutter.git
synced 2025-06-03 00:51:18 +00:00
Formatted and removed lints from devicelab README.md (#117239)
This commit is contained in:
parent
ebeb491895
commit
93c581a72e
@@ -7,29 +7,36 @@ the tests are referred to as "tasks" in the API, but since we primarily use it
for testing, this document refers to them as "tests".

Current statuses for the devicelab are available at
<https://flutter-dashboard.appspot.com/#/build>. See [dashboard user
guide](https://github.com/flutter/cocoon/blob/master/app_flutter/USER_GUIDE.md)
for information on using the dashboards.

## Table of Contents

* [How the DeviceLab runs tests](#how-the-devicelab-runs-tests)
* [Running tests locally](#running-tests-locally)
* [Writing tests](#writing-tests)
* [Adding tests to continuous
  integration](#adding-tests-to-continuous-integration)
* [Adding tests to presubmit](#adding-tests-to-presubmit)

## How the DeviceLab runs tests

DeviceLab tests are run against physical devices in Flutter's lab (the
"DeviceLab").

Tasks specify the type of device they are to run on (`linux_android`, `mac_ios`,
`mac_android`, `windows_android`, etc). When a device in the lab is free, it
will pick up tasks that need to be completed.

1. If the task succeeds, the test runner reports the success and uploads its
   performance metrics to Flutter's infrastructure. Not all tasks record
   performance metrics.
2. If a task fails, an automatic rerun happens. If the last run succeeds, the
   task is reported as a success, and a flake is flagged and recorded in the
   test result.
3. If the task fails in all reruns, the test runner reports the failure to
   Flutter's infrastructure and no performance metrics are collected.
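The rerun behavior above can be pictured as a small shell loop. This is an
illustrative sketch only: `run_with_reruns`, the attempt count, and the messages
are all invented here, and the real scheduling and flake flagging are done by
Flutter's infrastructure, not by the task itself.

```sh
# Hypothetical sketch of the rerun policy described above.
run_with_reruns() {
  attempts="$1"; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      if [ "$i" -gt 1 ]; then
        # A late pass still counts as a success, but is flagged as a flake.
        echo "flaky (passed on attempt $i)"
      else
        echo "passed"
      fi
      return 0
    fi
    i=$((i + 1))
  done
  # All reruns failed: report failure; no performance metrics are collected.
  echo "failed after $attempts attempts"
  return 1
}
```

In this sketch, a flake is distinguishable from a clean pass purely by whether
the first attempt succeeded.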

## Running tests locally

@@ -63,10 +70,11 @@ To run a test, use option `-t` (`--task`):

Where `NAME_OR_PATH_OF_TEST` can be either of:

* the _name_ of a task, which is a file's basename in `bin/tasks`. Example:
  `complex_layout__start_up`.
* the path to a Dart _file_ corresponding to a task, which resides in
  `bin/tasks`. Tip: most shells support path auto-completion using the Tab key.
  Example: `bin/tasks/complex_layout__start_up.dart`.
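For illustration, the two forms map onto each other via the file's basename. The
`dart bin/run.dart` invocation in the comments is an assumption about the
runner's entry point; the basename relationship is the point being shown.

```sh
# The task name is the basename (minus .dart) of a file under bin/tasks;
# complex_layout__start_up is the example task used above.
task_file="bin/tasks/complex_layout__start_up.dart"
task_name="$(basename "$task_file" .dart)"
echo "$task_name"   # complex_layout__start_up
# Either form would then name the same task to the runner (assumed entry point):
#   dart bin/run.dart -t "$task_name"
#   dart bin/run.dart -t "$task_file"
```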

To run multiple tests, repeat option `-t` (`--task`) multiple times:

@@ -107,19 +115,19 @@ Example:

The `--ab=10` tells the runner to run an A/B test 10 times.

`--local-engine=host_debug_unopt` tells the A/B test to use the
`host_debug_unopt` engine build. `--local-engine` is required for A/B tests.

`--ab-result-file=filename` can be used to provide an alternate location to
output the JSON results file (defaults to `ABresults#.json`). A single `#`
character can be used to indicate where to insert a serial number if a file with
that name already exists; otherwise, the file will be overwritten.
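The `#` substitution can be pictured with a small shell sketch.
`next_result_file` is a hypothetical helper, not part of the tool; it just
mimics the `ABresults.json ABresults1.json ...` naming seen in the summarize
example.

```sh
# Hypothetical helper mimicking the '#' serial-number substitution: the '#'
# is dropped for the first file, then replaced with 1, 2, ... while a file
# with that name already exists.
next_result_file() {
  template="$1"                  # e.g. ABresults#.json
  case "$template" in
    *'#'*) ;;
    *) echo "$template"; return ;;   # no '#': that name is (over)written
  esac
  prefix="${template%%#*}"       # part before the '#'
  suffix="${template#*#}"        # part after the '#'
  candidate="$prefix$suffix"     # first try: drop the '#'
  n=1
  while [ -e "$candidate" ]; do  # name taken: insert a serial number
    candidate="$prefix$n$suffix"
    n=$((n + 1))
  done
  echo "$candidate"
}
```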

A/B can run exactly one task. Multiple tasks are not supported.

Example output:

```text
Score Average A (noise) Average B (noise) Speed-up
bench_card_infinite_scroll.canvaskit.drawFrameDuration.average 2900.20 (8.44%) 2426.70 (8.94%) 1.20x
bench_card_infinite_scroll.canvaskit.totalUiFrame.average 4964.00 (6.29%) 4098.00 (8.03%) 1.21x
@@ -142,13 +150,14 @@ Summarize tool example:
ABresults.json ABresults1.json ABresults2.json ...
```

`--[no-]tsv-table` tells the tool to print the summary in a table with tabs for
easy spreadsheet entry. (defaults to on)

`--[no-]raw-summary` tells the tool to print all per-run data collected by the
A/B test formatted with tabs for easy spreadsheet entry. (defaults to on)

Multiple trailing filenames can be specified and each such results file will be
processed in turn.

## Reproducing broken builds locally

@@ -208,7 +217,7 @@ _TASK_- the name of your test that also matches the name of the

1. Add target to
   [.ci.yaml](https://github.com/flutter/flutter/blob/master/.ci.yaml)
   * Mirror an existing one that has the recipe `devicelab_drone`

If your test needs to run on multiple operating systems, create a separate
target for each operating system.
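For orientation, such a target might look roughly like the sketch below. Every
field here is an assumption mirrored from existing `devicelab_drone` targets
rather than a documented schema; when in doubt, copy a real neighboring entry in
`.ci.yaml` instead of this sketch.

```yaml
# Hypothetical target entry; names and properties are placeholders.
targets:
  - name: Linux_android my_new_test   # platform prefix + task name (placeholder)
    recipe: devicelab/devicelab_drone # the recipe mentioned above
    timeout: 60                       # assumed, in minutes
    properties:
      task_name: my_new_test          # matches the file in bin/tasks (placeholder)
```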