JSNumber.toDart will now be two functions: toDartDouble and toDartInt.
Code that previously did an `Object.toJS` has been changed to use `Function.toJS` instead, to keep it consistent with the corresponding code in flutter/packages:
0ef393811d/packages/web_benchmarks/lib/src/recorder.dart (L1223)
This is to help land this CL: https://dart-review.googlesource.com/c/sdk/+/309082

https://dart-review.googlesource.com/c/sdk/+/309081 is the CL that added the new methods.
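For illustration only (this is not code from the change), a minimal sketch of the two new conversions and of `Function.toJS` using `dart:js_interop`; the values and callback here are made up for the example:

```dart
import 'dart:js_interop';

void example() {
  // JSNumbers constructed from Dart doubles, standing in for values that
  // would normally come from JS.
  final JSNumber ratio = 3.25.toJS;
  final JSNumber count = 100.0.toJS;

  // JSNumber.toDart is split into two explicit conversions.
  final double asDouble = ratio.toDartDouble;
  final int asInt = count.toDartInt; // throws if the value isn't integral
  print('$asDouble $asInt');

  // Dart closures are converted with Function.toJS rather than Object.toJS.
  void onEvent() => print('called back from JS');
  final jsCallback = onEvent.toJS;
  print(jsCallback);
}
```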
This enables benchmarks for the Skwasm renderer, compiled with
dart2wasm.
Platform views aren't supported in Skwasm yet, so we are skipping those
benchmarks for now.
## FlutterTimeline
Add a new class `FlutterTimeline` that's a drop-in replacement for `Timeline` from `dart:developer`. In addition to forwarding invocations of `startSync`, `finishSync`, `timeSync`, and `instantSync` to `dart:developer`, it provides the following extra members that make it easy to collect timings for code blocks on a frame-by-frame basis (a usage sketch follows the list):
* `debugCollect()` - aggregates timings since the last reset, or since the app launched.
* `debugReset()` - forgets all data collected since the previous reset, or since the app launched. This allows clearing data from previous frames so timings can be attributed to the current frame.
* `now` - enhanced so that it works on the web by calling `window.performance.now` (in `Timeline` this is a no-op when compiled with the Dart web compilers).
* `collectionEnabled` - a field that controls whether `FlutterTimeline` stores timings in memory. By default this is disabled to avoid unexpected overhead (although the class is designed for minimal and predictable overhead). Specific benchmarks can enable collection to report to Skia Perf.
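A minimal usage sketch, assuming the static API described above; the `AggregatedTimings` return type of `debugCollect()` and the exact signatures are taken from this description rather than verified against the final code:

```dart
import 'package:flutter/foundation.dart';

void measureFrameWork() {
  // Forwarded to dart:developer's Timeline; also recorded in memory
  // when collection is enabled.
  FlutterTimeline.startSync('build');
  // ... work to be measured ...
  FlutterTimeline.finishSync();
}

void reportFrame() {
  // Opt in explicitly; collection is disabled by default to avoid
  // unexpected overhead.
  FlutterTimeline.collectionEnabled = true;

  measureFrameWork();

  // Aggregate everything recorded since the last reset (or app launch).
  final AggregatedTimings timings = FlutterTimeline.debugCollect();
  debugPrint('$timings');

  // Forget collected data so the next frame starts from a clean slate.
  FlutterTimeline.debugReset();
}
```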
## Semantics benchmarks
Add `BenchMaterial3Semantics` that benchmarks the cost of semantics when constructing a screen full of Material 3 widgets from nothing. It is expected that semantics will have a non-trivial cost in this case, but we should strive to keep it much lower than the rendering cost. That is already the case: this benchmark shows that the cost of semantics is <10%.
Add `BenchMaterial3ScrollSemantics` that benchmarks the cost of scrolling a previously constructed screen full of Material 3 widgets. The expectation should be that semantics will have trivial cost, since we're just shifting some widgets around. As of today, the numbers are not great, with semantics taking >50% of frame time, which is what prompted this PR in the first place. As we optimize this, we want to see this number improve.
- Bumps `vm_service` from `11.6.0` to `11.7.1`
- Bumps `web` from `0.1.3-beta` to `0.1.4-beta` and adds it everywhere.
- Moves `js` from `dependencies` to `dev_dependencies`
By default, the browser fuzzes the timer APIs such that they have a granularity of approximately 100 microseconds (due to Spectre mitigation techniques). However, many of the things we are trying to measure have a much finer granularity than 100 microseconds. As a result, many of our benchmarks are extremely noisy and don't provide accurate data.
By serving the initial script files with the `Cross-Origin-Opener-Policy: same-origin` and `Cross-Origin-Embedder-Policy: require-corp` HTTP headers, the browser runs the benchmarks in a `crossOriginIsolated` context, which restores the fine granularity of APIs such as `performance.now()` to microsecond precision.
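For illustration only (not the benchmark harness's actual server code), a sketch of serving static files with those two headers using `package:shelf` and `package:shelf_static`; the directory path and port are placeholders:

```dart
import 'package:shelf/shelf.dart';
import 'package:shelf/shelf_io.dart' as io;
import 'package:shelf_static/shelf_static.dart';

Future<void> main() async {
  // Add the two headers that opt the page into a crossOriginIsolated
  // context, restoring fine-grained timer precision.
  final Middleware crossOriginIsolation = createMiddleware(
    responseHandler: (Response response) => response.change(headers: {
      'Cross-Origin-Opener-Policy': 'same-origin',
      'Cross-Origin-Embedder-Policy': 'require-corp',
    }),
  );

  final Handler handler = const Pipeline()
      .addMiddleware(crossOriginIsolation)
      .addHandler(
        createStaticHandler('build/web', defaultDocument: 'index.html'),
      );

  await io.serve(handler, 'localhost', 8080);
}
```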
Also, we were treating anything more than one standard deviation away from the mean as an outlier. In a normal distribution, that captures only 68% of the data and treats the rest as outliers, which is not ideal. Using two standard deviations captures 95% of the data, leaving outliers in the remaining 5%, which is much more reasonable.
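A minimal sketch of the new outlier rule; the helper name and the plain list-of-samples input are illustrative:

```dart
import 'dart:math' as math;

/// Keeps samples within two standard deviations of the mean; for normally
/// distributed data this retains ~95% and discards the rest as outliers.
List<double> removeOutliers(List<double> samples) {
  final double mean = samples.reduce((a, b) => a + b) / samples.length;
  final double variance = samples
          .map((s) => (s - mean) * (s - mean))
          .reduce((a, b) => a + b) /
      samples.length;
  final double stdDev = math.sqrt(variance);
  return samples.where((s) => (s - mean).abs() <= 2 * stdDev).toList();
}
```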
This updates the framework to provide higher level wrappers around ui.instantiateImageCodecWithSize(). Functionally, this doesn't change anything (other than deprecating the older loadBuffer() method in favor of loadImage()), but it sets the stage for a simpler change that will allow us to provide a more flexible way to load sized images.
#118543
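A minimal sketch of the new `loadImage()` override point, using a hypothetical in-memory provider (`MyBytesImageProvider` and its fields are invented for this example); the framework supplies the `decode` callback, which per the description above wraps `ui.instantiateImageCodecWithSize()`:

```dart
import 'dart:typed_data';
import 'dart:ui' as ui;

import 'package:flutter/foundation.dart';
import 'package:flutter/painting.dart';

// Hypothetical provider that decodes bytes it already holds in memory.
class MyBytesImageProvider extends ImageProvider<MyBytesImageProvider> {
  const MyBytesImageProvider(this.bytes);

  final Uint8List bytes;

  @override
  Future<MyBytesImageProvider> obtainKey(ImageConfiguration configuration) {
    return SynchronousFuture<MyBytesImageProvider>(this);
  }

  // The new entry point, replacing the deprecated loadBuffer().
  @override
  ImageStreamCompleter loadImage(
      MyBytesImageProvider key, ImageDecoderCallback decode) {
    return OneFrameImageStreamCompleter(_decode(decode));
  }

  Future<ImageInfo> _decode(ImageDecoderCallback decode) async {
    final ui.ImmutableBuffer buffer =
        await ui.ImmutableBuffer.fromUint8List(bytes);
    final ui.Codec codec = await decode(buffer);
    final ui.FrameInfo frame = await codec.getNextFrame();
    return ImageInfo(image: frame.image);
  }
}
```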
* Add clipper raster cache benchmarks
* Fix CI test error
* Add PhysicalModel widget to test
* Set top margin adaptive to screen height
* Remove PhysicalModel