Xcode uses the CONFIGURATION_BUILD_DIR build setting to determine the location of the bundle to build and install. When launching an app via the Xcode debug workflow (used for iOS 17 physical devices), temporarily set CONFIGURATION_BUILD_DIR to the location of the prebuilt bundle so Xcode can find it.
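As a rough illustration, here is a minimal sketch of that idea, assuming a generated xcconfig file is the mechanism for overriding the setting; the file path and helper name are hypothetical, not flutter_tools' actual implementation:

```dart
import 'dart:io';

/// Hypothetical sketch: point CONFIGURATION_BUILD_DIR at the directory that
/// contains the prebuilt .app bundle so Xcode can locate it. The xcconfig
/// path and this helper are assumptions for illustration only.
Future<void> overrideConfigurationBuildDir(Directory bundleDir) async {
  final File xcconfig = File('ios/Flutter/Generated.xcconfig');
  final List<String> lines = await xcconfig.readAsLines();
  lines.removeWhere((String line) => line.startsWith('CONFIGURATION_BUILD_DIR='));
  lines.add('CONFIGURATION_BUILD_DIR=${bundleDir.path}');
  await xcconfig.writeAsString('${lines.join('\n')}\n');
}
```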
Also, added an Xcode debug workflow version of the `microbenchmarks_ios` integration test, since it uses `flutter run --profile` without `--use-application-binary`.
Fixes https://github.com/flutter/flutter/issues/134186.
Support for FFI calls with `@Native external` functions through native assets on macOS and iOS. This enables bundling native code without any build-system boilerplate.
For more info see:
* https://github.com/flutter/flutter/issues/129757
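For illustration, this is the shape of a `@Native external` declaration that native assets can now back on these platforms; the function name and signature are made up, and the dylib providing the symbol is assumed to come from the package's native build:

```dart
import 'dart:ffi';

// Illustrative declaration: `sum` and its signature are invented. With native
// assets, the dylib that provides this symbol is bundled automatically, with
// no Xcode or CocoaPods boilerplate in the app.
@Native<Int32 Function(Int32, Int32)>(symbol: 'sum')
external int sum(int a, int b);

void main() {
  print(sum(3, 4)); // resolves against the bundled native asset at runtime
}
```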
### Implementation details for macOS and iOS
Dylibs are bundled by (1) making them fat binaries if multiple architectures are targeted, (2) code signing them, and (3) copying them to the frameworks folder. These steps are done manually rather than via CocoaPods. CocoaPods would have performed the same steps, but (a) it needs the dylibs to exist before the `xcodebuild` invocation (we could trick it by placing a minimal dylib there and replacing it during the build process, which works), and (b) it can't handle having no dylibs to bundle (we'd have to bundle a dummy dylib or include some dummy C code in the build file).
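A minimal sketch of those three steps, with invented paths, dylib name, and a placeholder code-signing identity (this is not the actual flutter_tools code):

```dart
import 'dart:io';

/// Sketch of the manual bundling steps; all names below are placeholders.
Future<void> bundleDylib(
  List<String> perArchDylibs, // one dylib per target architecture
  String codesignIdentity,
  Directory frameworksDir,
) async {
  const String fat = 'build/native_assets/libmy_package.dylib';
  // (1) Combine the per-architecture dylibs into a single fat binary.
  await Process.run('lipo', <String>['-create', ...perArchDylibs, '-output', fat]);
  // (2) Code sign the fat binary.
  await Process.run('codesign', <String>['--force', '--sign', codesignIdentity, fat]);
  // (3) Copy it into the app's frameworks folder.
  await File(fat).copy('${frameworksDir.path}/libmy_package.dylib');
}
```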
The dylibs are built as a new target inside flutter assemble, as that is the moment we know which build mode and architectures to target.
The mapping from asset id to dylib path is passed to every kernel compilation path. The interesting case is hot restart: the initial kernel file is compiled by the "inner" flutter assemble, while after a hot restart the kernel file compiled by the "outer" flutter run is pushed to the device. Both kernel files need to contain the mapping. The "inner" flutter assemble gets its mapping from the NativeAssets target, which builds the native assets. The "outer" flutter run gets its mapping from a dry-run invocation. Since hot restart can be used with multiple target devices (`flutter run -d all`), it contains the mapping for all known targets.
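A simplified sketch of what such a mapping could look like; the real format used by flutter_tools differs, and the asset ids and install paths here are invented:

```dart
// Invented example of an asset-id -> dylib-path mapping, keyed per target so
// a single `flutter run -d all` session can cover every known device.
const Map<String, Map<String, String>> nativeAssetMappingPerTarget =
    <String, Map<String, String>>{
  'ios_arm64': <String, String>{
    'package:my_package/my_package.dart':
        '@executable_path/Frameworks/libmy_package.dylib',
  },
  'macos_arm64': <String, String>{
    'package:my_package/my_package.dart':
        '@executable_path/../Frameworks/libmy_package.dylib',
  },
};
```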
### Example vs template
The PR includes a new template with a package that uses the new native assets and an app that imports it. See the separate discussion in https://github.com/flutter/flutter/issues/131209.
### Tests
This PR adds new tests to cover the various use cases.
* dev/devicelab/bin/tasks/native_assets_ios.dart
* Runs an example app with native assets in all build modes, doing hot reload and hot restart in debug mode.
* dev/devicelab/bin/tasks/native_assets_ios_simulator.dart
* Runs an example app with native assets, doing hot reload and hot restart.
* packages/flutter_tools/test/integration.shard/native_assets_test.dart
* Runs (including hot reload/hot restart), builds, and builds frameworks for iOS, macOS, and flutter-tester.
* packages/flutter_tools/test/general.shard/build_system/targets/native_assets_test.dart
* Unit tests the new Target in the backend.
* packages/flutter_tools/test/general.shard/ios/native_assets_test.dart
* packages/flutter_tools/test/general.shard/macos/native_assets_test.dart
* Unit tests the native assets being packaged in an iOS/macOS build.
It also extends various existing tests:
* dev/devicelab/bin/tasks/module_test_ios.dart
* Exercises the add2app scenario.
* packages/flutter_tools/test/general.shard/features_test.dart
* Unit tests the new feature flag.
Enable Impeller benchmarks for drawAtlas/drawVertices on iOS/Metal, Android/GLES, and Android/Vulkan.
Enable Impeller tessellation benchmarks on iOS/Metal and Android/Vulkan, but not GLES, as these measure backend-agnostic performance.
This PR includes the following changes, which only apply to iOS 17 physical devices.
| Command | Change Description | Changes to User Experience |
| ------------- | ------------- | ------------- |
| `flutter run --release` | Uses `devicectl` to install and launch the application in release mode. | No change. |
| `flutter run` | Uses Xcode via automation scripting to run the application in debug and profile modes. | Xcode will be opened in the background. Errors/crashes may be caught in Xcode and therefore may not show in the terminal. |
| `flutter run --use-application-binary=xxxx` | Creates a temporary empty Xcode project and uses Xcode automation scripting to run in debug and profile modes. | Xcode will be opened in the background. Errors/crashes may be caught in Xcode and therefore may not show in the terminal. |
| `flutter install` | Uses `devicectl` to check installed apps, install the app, and uninstall the app. | No change. |
| `flutter screenshot` | Will return an error. | Will return an error. |
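As an illustration of the `devicectl`-based flow described in the table above, here is a sketch of installing and launching from Dart; the subcommands follow Xcode 15's `devicectl` CLI, while the device identifier, bundle path, and bundle id are placeholders:

```dart
import 'dart:io';

/// Sketch only: install and launch an app on an iOS 17 physical device via
/// devicectl. Identifiers are placeholders and error handling is omitted.
Future<void> installAndLaunch() async {
  const String deviceId = '00008120-XXXXXXXXXXXXXXXX'; // placeholder UDID
  await Process.run('xcrun', <String>[
    'devicectl', 'device', 'install', 'app',
    '--device', deviceId, 'build/ios/iphoneos/Runner.app',
  ]);
  await Process.run('xcrun', <String>[
    'devicectl', 'device', 'process', 'launch',
    '--device', deviceId, 'com.example.myApp', // placeholder bundle id
  ]);
}
```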
Other changes include:
* Using `devicectl` to get information about the device
* Using `idevicesyslog` and Dart VM logging for device logs
Note:
Xcode automation scripting (used in `flutter run` for debug and profile) does not work in a headless environment (one without a UI). There is no known workaround.
Fixes https://github.com/flutter/flutter/issues/128827, https://github.com/flutter/flutter/issues/128531.
Because the cost of type checks dominates our dart2wasm benchmarks, we've
decided to pass `--omit-type-checks` for now.
This was previously reverted because the skwasm benchmarks were broken
for a separate reason, and my removing `bringup: true` broke the tree. I
ended up fixing the benchmarks and removing `bringup: true` in a separate
commit, so this change only adds the flag.
We've decided to use the `--omit-type-checks` flag for our dart2wasm benchmarks. Right now, many of the benchmark results are dominated by type checks, and most of what we are actually trying to measure gets drowned out in the noise.
Adding debugging for https://github.com/flutter/flutter/issues/129836.
Takes a screenshot when the startup test takes too long (more than 10 minutes).
Also removes some old debugging output and adds a new debugging message.
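A minimal sketch of the timeout-then-screenshot idea, assuming hypothetical `runStartupTest()` and `takeScreenshot()` helpers standing in for the devicelab harness:

```dart
import 'dart:async';

// Hypothetical stand-ins for the devicelab harness; the real helpers differ.
Future<void> runStartupTest() async {/* runs the startup benchmark */}
Future<void> takeScreenshot() async {/* saves a screenshot from the device */}

/// Fail the startup test after 10 minutes, but capture a screenshot first so
/// the hang can be debugged.
Future<void> runWithScreenshotOnTimeout() async {
  try {
    await runStartupTest().timeout(const Duration(minutes: 10));
  } on TimeoutException {
    await takeScreenshot();
    rethrow;
  }
}
```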
This enables benchmarks for the Skwasm renderer, compiled with
dart2wasm.
Platform views aren't supported in Skwasm yet, so we are skipping those
benchmarks for now.
By default, the browser fuzzes the timer APIs such that they have a granularity of approximately 100 microseconds (due to Spectre mitigation techniques). However, many of the things we are trying to measure actually have a much finer granularity than 100 microseconds. As a result, many of our benchmarks are extremely noisy and don't provide accurate data.
Serving the initial script files with the `Cross-Origin-Opener-Policy: same-origin` and `Cross-Origin-Embedder-Policy: require-corp` HTTP headers makes the browser run the benchmarks in a `crossOriginIsolated` context, which restores the fine granularity of APIs such as `performance.now()` to microsecond precision.
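A minimal sketch of serving with those two headers using `dart:io` (the port and response body are illustrative):

```dart
import 'dart:io';

// Serve every response with the COOP/COEP headers that opt the page into a
// crossOriginIsolated context, restoring fine-grained timers.
Future<void> main() async {
  final HttpServer server =
      await HttpServer.bind(InternetAddress.loopbackIPv4, 8080);
  await for (final HttpRequest request in server) {
    request.response.headers
      ..set('Cross-Origin-Opener-Policy', 'same-origin')
      ..set('Cross-Origin-Embedder-Policy', 'require-corp');
    request.response.write('<script src="benchmark.js"></script>');
    await request.response.close();
  }
}
```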
Also, we were treating anything more than one standard deviation away from the mean as an outlier. In a normal distribution, that means we were only capturing 68% of the data and considering the rest outliers, which is not ideal. Using two standard deviations captures 95% of the data, leaving the outliers in the remaining 5%, which seems much more reasonable.
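For concreteness, a small sketch of that outlier rule (a plain two-standard-deviation filter, not the benchmark harness's actual code):

```dart
import 'dart:math' as math;

/// Keep samples within two standard deviations of the mean, which covers
/// about 95% of normally distributed data.
List<double> removeOutliers(List<double> samples) {
  final double mean =
      samples.reduce((double a, double b) => a + b) / samples.length;
  final double variance = samples
          .map((double s) => (s - mean) * (s - mean))
          .reduce((double a, double b) => a + b) /
      samples.length;
  final double stdDev = math.sqrt(variance);
  return samples
      .where((double s) => (s - mean).abs() <= 2 * stdDev)
      .toList();
}
```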
I think the flake is due to a set-clipboard or semantics-update race condition. I migrated the test to use the integration_test package, which relies less on timing.
Fixes https://github.com/flutter/flutter/issues/124636.
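For reference, the basic shape of a test under the integration_test package; the test name and body here are placeholders, not the migrated test itself:

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:integration_test/integration_test.dart';

void main() {
  // Drives a real app instance on the device instead of simulating one,
  // which avoids the host-side timing assumptions of the old test.
  IntegrationTestWidgetsFlutterBinding.ensureInitialized();

  testWidgets('clipboard and semantics stay in sync',
      (WidgetTester tester) async {
    // ... pump the app and exercise the clipboard/semantics interaction ...
  });
}
```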