docs: rework development guide

Currently, each individual plugin README documents roughly the same
day-to-day development steps for cloning the source, building, and
deploying. Re-purpose the plugin READMEs more towards cluster admin
type of documentation and start moving all development related
documentation to DEVEL.md.

The same is true for the e2e testing documentation, which is scattered
in places where it doesn't belong. It is good to have all day-to-day
development how-tos in one centralized place.

Finally, the cleanup includes some harmonization of the plugins'
tables of contents, which now follow the pattern:

* [Introduction](#introduction)
(* [Modes and Configuration Options](#modes-and-configuration-options))
* [Installation](#installation)
    (* [Prerequisites](#prerequisites))
    * [Pre-built Images](#pre-built-images)
    * [Verify Plugin Registration](#verify-plugin-registration)
* [Testing and Demos](#testing-and-demos)
    * ...

Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
Mikko Ylinen committed 2022-08-19 10:31:55 +03:00
parent ba998964ce
commit 1b3accacc2
11 changed files with 425 additions and 1001 deletions

DEVEL.md
View File

@ -1,7 +1,253 @@
# Development
# Instructions for Device Plugin Development and Maintenance
Table of Contents
## How to develop simple device plugins
* [Day-to-day Development How to's](#day-to-day-development-how-tos)
* [Get the Source Code](#get-the-source-code)
* [Build and Run Plugin Binaries](#build-and-run-plugin-binaries)
* [Build Container Images](#build-container-images)
* [Build Against a Newer Version of Kubernetes](#build-against-a-newer-version-of-kubernetes)
* [Work with Intel Device Plugins operator modifications](#work-with-intel-device-plugins-operator-modifications)
* [Publish a New Version of the Intel Device Plugins Operator to operatorhub.io](#publish-a-new-version-of-the-intel-device-plugins-operator-to-operatorhubio)
* [Run E2E Tests](#run-e2e-tests)
* [Run Controller Tests with a Local Control Plane](#run-controller-tests-with-a-local-control-plane)
* [How to Develop Simple Device Plugins](#how-to-develop-simple-device-plugins)
* [Logging](#logging)
* [Error conventions](#error-conventions)
* [Checklist for new device plugins](#checklist-for-new-device-plugins)
## Day-to-day Development How to's
### Get the Source Code
With `git` installed on the system, just clone the repository:
```bash
$ export INTEL_DEVICE_PLUGINS_SRC=/path/to/intel-device-plugins-for-kubernetes
$ git clone https://github.com/intel/intel-device-plugins-for-kubernetes ${INTEL_DEVICE_PLUGINS_SRC}
```
### Build and Run Plugin Binaries
With a `go` development environment installed on the system, build the plugin:
```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make <plugin-build-target>
```
**Note:** The list of available plugin build targets is roughly the output of `ls ${INTEL_DEVICE_PLUGINS_SRC}/cmd`.
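For example (the plugin names below are taken from elsewhere in this repository; the exact list varies per release):
```bash
$ ls ${INTEL_DEVICE_PLUGINS_SRC}/cmd
dlb_plugin  dsa_plugin  fpga_plugin  gpu_plugin  iaa_plugin  operator  qat_plugin  ...
$ make gpu_plugin
```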
To test the plugin binary on the development system, run as administrator:
```bash
$ sudo -E ${INTEL_DEVICE_PLUGINS_SRC}/cmd/<plugin-build-target>/<plugin-build-target>
```
### Build Container Images
The Dockerfiles are generated on the fly from `.in` suffixed files and `.docker` include-snippets which are stitched together with
the `cpp` preprocessor. You need to install `cpp` for that, e.g. on Ubuntu it is included in `build-essential` (`sudo apt install build-essential`).
Don't edit the generated Dockerfiles. Edit the inputs.
The simplest way to build all the docker images is:
```
$ make images
```
But it is very slow. You can drastically speed it up by first running once:
```
$ make vendor
```
This brings the libraries into the builder container without downloading them again and again for each plugin.
But it is still slow. You can further speed it up by first running once:
```
$ make licenses
```
This pre-creates the go-licenses for all plugins, instead of re-creating them for each built plugin every time.
But it is still rather slow to build all the images, and unnecessary if you iterate on just one. Instead, build just the one you are iterating on, for example:
```
$ make <image-build-target>
```
**Note:** The list of available image build targets is roughly the output of `ls ${INTEL_DEVICE_PLUGINS_SRC}/build/docker/*.Dockerfile`.
If you iterate on only one plugin and know what its target cmd is (see folder `cmd/`), you can opt to pre-create just its licenses, for example:
```
$ make licenses/<plugin-build-target>
```
The container image target names in the Makefile are derived from the `.Dockerfile.in` suffixed filenames under folder `build/docker/templates/`.
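For example, assuming the GPU plugin template `intel-gpu-plugin.Dockerfile.in` is present under that folder, the corresponding image build target is:
```bash
$ make intel-gpu-plugin
```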
Recap:
```
$ make vendor
$ make licenses (or just make licenses/<plugin-build-target>)
$ make <image-build-target>
```
Repeat the last step only, unless you change library dependencies. If you pull in new sources, start again from `make vendor`.
**Note:** The image build tool can be changed from the default `docker` by setting the `BUILDER` argument
to the [`Makefile`](Makefile): `make <image-build-target> BUILDER=buildah`.
### Build Against a Newer Version of Kubernetes
First you need to update module dependencies. The easiest way is to use the
script copied from https://github.com/kubernetes/kubernetes/issues/79384#issuecomment-521493597:
```bash
#!/usr/bin/env bash
# Note: this script needs bash (it uses arrays and `set -o pipefail`).
set -euo pipefail
VERSION=${1#"v"}
if [ -z "$VERSION" ]; then
    echo "Must specify version!"
    exit 1
fi
MODS=($(
    curl -sS https://raw.githubusercontent.com/kubernetes/kubernetes/v${VERSION}/go.mod |
    sed -n 's|.*k8s.io/\(.*\) => ./staging/src/k8s.io/.*|k8s.io/\1|p'
))
for MOD in "${MODS[@]}"; do
    V=$(
        go mod download -json "${MOD}@kubernetes-${VERSION}" |
        sed -n 's|.*"Version": "\(.*\)".*|\1|p'
    )
    go mod edit "-replace=${MOD}=${MOD}@${V}"
done
go get "k8s.io/kubernetes@v${VERSION}"
```
Just run it inside the repo's root, e.g.
```
$ ./k8s_gomod_update.sh 1.18.1
```
Finally run
```
$ make generate
$ make test
```
and fix all new compilation issues.
### Work with Intel Device Plugins Operator Modifications
There are a few useful steps when working with changes to the Device Plugins' CRDs and controllers:
1. Install controller-gen: `GO111MODULE=on go get -u sigs.k8s.io/controller-tools/cmd/controller-gen@<release ver>`, e.g. `v0.4.1`
2. Generate CRD and Webhook artifacts: `make generate`
3. Test local changes using [envtest](https://book.kubebuilder.io/reference/envtest.html): `make envtest`
4. Build a custom operator image: `make intel-deviceplugin-operator`
5. (Un)deploy operator: `kubectl [apply|delete] -k deployments/operator/default`
### Publish a New Version of the Intel Device Plugins Operator to operatorhub.io
Update the `metadata.annotations.containerImage` and `metadata.annotations.createdAt` fields in the base CSV manifest file
`deployments/operator/manifests/bases/intel-device-plugins-operator.clusterserviceversion.yaml`
to match the current operator version and the current date.
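A sketch of the two fields in that file (the image reference and date below are illustrative placeholders only):
```yaml
metadata:
  annotations:
    containerImage: docker.io/intel/intel-deviceplugin-operator:0.X.Y
    createdAt: "2022-08-19"
```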
Fork the [Community Operators](https://github.com/k8s-operatorhub/community-operators) repo and clone it:
```
$ git clone https://github.com/<GitHub Username>/community-operators
```
Generate bundle and build bundle image:
```
$ make bundle OPERATOR_VERSION=0.X.Y CHANNELS=alpha DEFAULT_CHANNEL=alpha
$ make bundle-build
```
> **Note**: You need to push the image to a registry if you want to follow the verification process below.
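For example, assuming `docker` is the image build tool and the bundle image was tagged with the registry/tag used in the verification steps below:
```bash
$ docker push <Registry>/<Tag>
```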
Verify the operator deployment works OK via OLM in your development cluster:
```
$ operator-sdk olm install
$ kubectl create namespace testoperator
$ operator-sdk run bundle <Registry>/<Tag> -n testoperator --use-http
# do verification checks
...
# do clean up
$ operator-sdk cleanup intel-device-plugins-operator --namespace testoperator
$ kubectl delete namespace testoperator
$ operator-sdk olm uninstall
```
Review the package manifests by uploading the generated `packagemanifests` folder to
https://operatorhub.io -> Contribute -> Package Your Operator.
Commit the files:
```
$ cd community-operators
$ git add operators/intel-device-plugins-operator/0.X.Y
$ git commit -am 'operators intel-device-plugins-operator (0.X.Y)' -S
```
Submit a PR.
Check the operator page at
https://operatorhub.io/operator/intel-device-plugins-operator
after the PR is merged.
### Run E2E Tests
Currently, the E2E tests require an already configured Kubernetes cluster
whose nodes have the hardware required by the device plugins. Also, all the
container images with the executables under test must be available in the
cluster. If these two conditions are satisfied, run the tests with:
```bash
$ go test -v ./test/e2e/...
```
In case you want to run only certain tests, e.g., QAT ones, run:
```bash
$ go test -v ./test/e2e/... -args -ginkgo.focus "QAT"
```
If you need to specify paths to your custom `kubeconfig` containing
embedded authentication info then add the `-kubeconfig` argument:
```bash
$ go test -v ./test/e2e/... -args -kubeconfig /path/to/kubeconfig
```
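The arguments can also be combined, for example:
```bash
$ go test -v ./test/e2e/... -args -kubeconfig /path/to/kubeconfig -ginkgo.focus "QAT"
```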
The full list of available options can be obtained with:
```bash
$ go test ./test/e2e/... -args -help
```
It is also possible to run the tests which don't depend on hardware
without a pre-configured Kubernetes cluster. Just make sure you have
[Kind](https://kind.sigs.k8s.io/) installed on your host and run:
```
$ make test-with-kind
```
### Run Controller Tests with a Local Control Plane
The controller-runtime library provides a package for integration testing by
starting a local control plane. The package is called
[envtest](https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/envtest). The
operator uses this package for its integration testing.
For setting up the environment for testing, `setup-envtest` can be used:
```bash
$ go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest
$ setup-envtest use <K8S_VERSION>
$ KUBEBUILDER_ASSETS=$(setup-envtest use -i -p path <K8S_VERSION>) make envtest
```
## How to Develop Simple Device Plugins
To create a simple device plugin without the hassle of developing your own gRPC
server, you can use a package included in this repository called
@ -129,7 +375,7 @@ Otherwise, they can be logged as simple values:
klog.Warningf("Example of a warning due to an external error: %v", err)
```
### Checklist for new device plugins
## Checklist for new device plugins
For new device plugins contributed to this repository, below is a
checklist to get the plugin on par feature and quality wise with
@ -143,150 +389,3 @@ others:
6. Plugin CRD validation tests implemented in [`test/envtest/`](test/envtest) and passing: `make envtest`.
7. Plugin CRD controller implemented in [`pkg/controllers/`](pkg/controllers) and added to the manager in `cmd/operator/main.go`.
8. Plugin documentation written in `cmd/<plugin>/README.md` and, optionally, end-to-end demos created in [`demo`](demo).
## How to build against a newer version of Kubernetes
First you need to update module dependencies. The easiest way is to use the
script copied from https://github.com/kubernetes/kubernetes/issues/79384#issuecomment-521493597:
```bash
#!/usr/bin/env bash
set -euo pipefail
VERSION=${1#"v"}
if [ -z "$VERSION" ]; then
echo "Must specify version!"
exit 1
fi
MODS=($(
curl -sS https://raw.githubusercontent.com/kubernetes/kubernetes/v${VERSION}/go.mod |
sed -n 's|.*k8s.io/\(.*\) => ./staging/src/k8s.io/.*|k8s.io/\1|p'
))
for MOD in "${MODS[@]}"; do
V=$(
go mod download -json "${MOD}@kubernetes-${VERSION}" |
sed -n 's|.*"Version": "\(.*\)".*|\1|p'
)
go mod edit "-replace=${MOD}=${MOD}@${V}"
done
go get "k8s.io/kubernetes@v${VERSION}"
```
Just run it inside the repo's root, e.g.
```
$ ./k8s_gomod_update.sh 1.18.1
```
Finally run
```
$ make generate
$ make test
```
and fix all new compilation issues.
## How to build docker images
The dockerfiles are generated on the fly from `.in` suffixed files and `.docker` include-snippets which are stitched together with
cpp preprocessor. You need to install cpp for that, e.g. in ubuntu it is found from build-essential (sudo apt install build-essential).
Don't edit the generated dockerfiles. Edit the inputs.
The simplest way to build all the docker images, is:
```
$ make images
```
But it is very slow. You can drastically speed it up by first running once:
```
$ make vendor
```
Which brings the libraries into the builder container without downloading them again and again for each plugin.
But it is still slow. You can further speed it up by first running once:
```
$ make licenses
```
Which pre-creates the go-licenses for all plugins, instead of re-creating them for each built plugin, every time.
But it is still rather slow to build all the images, and unnecessary, if you iterate on just one. Instead, build just the one you are iterating on, example:
```
$ make intel-gpu-plugin
```
If you iterate on only one plugin and if you know what its target cmd is (see folder `cmd/`), you can opt to pre-create just its licenses, example:
```
$ make licenses/gpu_plugin
```
The docker image target names in the Makefile are derived from the `.Dockerfile.in` suffixed filenames under folder `build/docker/templates/`.
Recap:
```
$ make vendor
$ make licenses (or just make licenses/gpu_plugin)
$ make intel-gpu-plugin
```
Repeat the last step only, unless you change library dependencies. If you pull in new sources, start again from `make vendor`.
## How to work with Intel Device Plugins operator modifications
There are few useful steps when working with changes to Device Plugins CRDs and controllers:
1. Install controller-gen: `GO111MODULE=on go get -u sigs.k8s.io/controller-tools/cmd/controller-gen@<release ver>, e.g, v0.4.1`
2. Generate CRD and Webhook artifacts: `make generate`
3. Test local changes using [envtest](https://book.kubebuilder.io/reference/envtest.html): `make envtest`
4. Build a custom operator image: `make intel-deviceplugin-operator`
5. (Un)deploy operator: `kubectl [apply|delete] -k deployments/operator/default`
## How to publish a new version of the Intel Device Plugins operator to operatorhub.io
Update metadata.annotations.containerImage and metadata.annotations.createdAT fields in the base CSV manifest file
deployments/operator/manifests/bases/intel-device-plugins-operator.clusterserviceversion.yaml
to match current operator version and current date
Fork the [Community Operators](https://github.com/k8s-operatorhub/community-operators) repo and clone it:
```
$ git clone https://github.com/<GitHub Username>/community-operators
```
Generate bundle and build bundle image:
```
$ make bundle OPERATOR_VERSION=0.X.Y CHANNELS=alpha DEFAULT_CHANNEL=alpha
$ make bundle-build
```
> **Note**: You need to push the image to a registry if you want to follow the verification process below.
Verify the operator deployment works OK via OLM in your development cluster:
```
$ operator-sdk olm install
$ kubectl create namespace testoperator
$ operator-sdk run bundle <Registry>/<Tag> -n testoperator --use-http
# do verification checks
...
# do clean up
$ operator-sdk cleanup intel-device-plugins-operator --namespace testoperator
$ kubectl delete namespace testoperator
$ operator-sdk olm uninstall
```
Review the package manifests by uploading the generated `packagemanifests` folder to
https://operatorhub.io -> Contribute -> Package Your Operator.
Commit files
```
$ cd community-operators
$ git add operators/intel-device-plugins-operator/0.X.Y
$ git commit -am 'operators intel-device-plugins-operator (0.X.Y)' -S
```
Submit a PR
Check operator page
https://operatorhub.io/operator/intel-device-plugins-operator
after PR is merged

View File

@ -26,7 +26,6 @@ Table of Contents
* [Demos](#demos)
* [Workload Authors](#workload-authors)
* [Developers](#developers)
* [Running e2e Tests](#running-e2e-tests)
* [Supported Kubernetes versions](#supported-kubernetes-versions)
* [Pre-built plugin images](#pre-built-plugin-images)
* [License](#license)
@ -38,7 +37,7 @@ Table of Contents
Prerequisites for building and running these device plugins include:
- Appropriate hardware
- Appropriate hardware and drivers
- A fully configured [Kubernetes cluster]
- A working [Go environment], of at least version v1.16.
@ -249,64 +248,8 @@ The summary of resources available via plugins in this repository is given in th
## Developers
For information on how to develop a new plugin using the framework, see the
[Developers Guide](DEVEL.md) and the code in the
[device plugins pkg directory](pkg/deviceplugin).
## Running E2E Tests
Currently the E2E tests require having a Kubernetes cluster already configured
on the nodes with the hardware required by the device plugins. Also all the
container images with the executables under test must be available in the
cluster. If these two conditions are satisfied, run the tests with:
```bash
$ go test -v ./test/e2e/...
```
In case you want to run only certain tests, e.g., QAT ones, run:
```bash
$ go test -v ./test/e2e/... -args -ginkgo.focus "QAT"
```
If you need to specify paths to your custom `kubeconfig` containing
embedded authentication info then add the `-kubeconfig` argument:
```bash
$ go test -v ./test/e2e/... -args -kubeconfig /path/to/kubeconfig
```
The full list of available options can be obtained with:
```bash
$ go test ./test/e2e/... -args -help
```
It is possible to run the tests which don't depend on hardware
without a pre-configured Kubernetes cluster. Just make sure you have
[Kind](https://kind.sigs.k8s.io/) installed on your host and run:
```
$ make test-with-kind
```
## Running Controller Tests with a Local Control Plane
The controller-runtime library provides a package for integration testing by
starting a local control plane. The package is called
[envtest](https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/envtest). The
operator uses this package for its integration testing.
Please have a look at `envtest`'s documentation to set it up properly. But basically
you just need to have `etcd` and `kube-apiserver` binaries available on your
host. By default they are expected to be located at `/usr/local/kubebuilder/bin`.
But you can have it stored anywhere by setting the `KUBEBUILDER_ASSETS`
environment variable. If you have the binaries copied to
`${HOME}/work/kubebuilder-assets`, run the tests:
```bash
$ KUBEBUILDER_ASSETS=${HOME}/work/kubebuilder-assets make envtest
```
For information on how to develop a new plugin using the framework or to work on development tasks in
this repository, see the [Developers Guide](DEVEL.md).
## Supported Kubernetes Versions

View File

@ -4,16 +4,9 @@ Table of Contents
* [Introduction](#introduction)
* [Installation](#installation)
* [Deploy with pre-built container image](#deploy-with-pre-built-container-image)
* [Getting the source code](#getting-the-source-code)
* [Deploying as a DaemonSet](#deploying-as-a-daemonset)
* [Build the plugin image](#build-the-plugin-image)
* [Deploy plugin DaemonSet](#deploy-plugin-daemonset)
* [Deploy by hand](#deploy-by-hand)
* [Build the plugin](#build-the-plugin)
* [Run the plugin as administrator](#run-the-plugin-as-administrator)
* [Verify plugin registration](#verify-plugin-registration)
* [Testing the plugin](#testing-the-plugin)
* [Pre-built Images](#pre-built-images)
* [Verify Plugin Registration](#verify-plugin-registration)
* [Testing and Demos](#testing-and-demos)
## Introduction
@ -138,7 +131,7 @@ The following sections detail how to obtain, build, deploy and test the DLB devi
Examples are provided showing how to deploy the plugin either using a DaemonSet or by hand on a per-node basis.
### Deploy with pre-built container image
### Pre-built Images
[Pre-built images](https://hub.docker.com/r/intel/intel-dlb-plugin)
of this component are available on the Docker hub. These images are automatically built and uploaded
@ -149,74 +142,15 @@ release version numbers in the format `x.y.z`, corresponding to the branches and
repository. Thus the easiest way to deploy the plugin in your cluster is to run this command
```bash
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/dlb_plugin?ref=<REF>
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/dlb_plugin?ref=<RELEASE_VERSION>
daemonset.apps/intel-dlb-plugin created
```
Where `<REF>` needs to be substituted with the desired git ref, e.g. `main`.
Where `<RELEASE_VERSION>` needs to be substituted with the desired [release tag](https://github.com/intel/intel-device-plugins-for-kubernetes/tags) or `main` to get `devel` images.
Nothing else is needed. But if you want to deploy a customized version of the plugin read further.
Nothing else is needed. See [the development guide](../../DEVEL.md) for details if you want to deploy a customized version of the plugin.
### Getting the source code
```bash
$ export INTEL_DEVICE_PLUGINS_SRC=/path/to/intel-device-plugins-for-kubernetes
$ git clone https://github.com/intel/intel-device-plugins-for-kubernetes ${INTEL_DEVICE_PLUGINS_SRC}
```
### Deploying as a DaemonSet
To deploy the dlb plugin as a daemonset, you first need to build a container image for the
plugin and ensure that is visible to your nodes.
#### Build the plugin image
The following will use `docker` to build a local container image called
`intel/intel-dlb-plugin` with the tag `devel`.
The image build tool can be changed from the default `docker` by setting the `BUILDER` argument
to the [`Makefile`](Makefile).
```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make intel-dlb-plugin
...
Successfully tagged intel/intel-dlb-plugin:devel
```
#### Deploy plugin DaemonSet
You can then use the [example DaemonSet YAML](/deployments/dlb_plugin/base/intel-dlb-plugin.yaml)
file provided to deploy the plugin. The default kustomization that deploys the YAML as is:
```bash
$ kubectl apply -k deployments/dlb_plugin
daemonset.apps/intel-dlb-plugin created
```
### Deploy by hand
For development purposes, it is sometimes convenient to deploy the plugin 'by hand' on a node.
In this case, you do not need to build the complete container image, and can build just the plugin.
#### Build the plugin
First we build the plugin:
```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make dlb_plugin
```
#### Run the plugin as administrator
Now we can run the plugin directly on the node:
```bash
$ sudo -E ${INTEL_DEVICE_PLUGINS_SRC}/cmd/dlb_plugin/dlb_plugin
```
### Verify plugin registration
### Verify Plugin Registration
You can verify the plugin has been registered with the expected nodes by searching for the relevant
resource allocation status on the nodes:
@ -228,7 +162,7 @@ master
dlb.intel.com/vf: 4
```
### Testing the plugin
## Testing and Demos
We can test the plugin is working by deploying the provided example test images (dlb-libdlb-demo and dlb-dpdk-demo).

View File

@ -4,16 +4,9 @@ Table of Contents
* [Introduction](#introduction)
* [Installation](#installation)
* [Deploy with pre-built container image](#deploy-with-pre-built-container-image)
* [Getting the source code](#getting-the-source-code)
* [Deploying as a DaemonSet](#deploying-as-a-daemonset)
* [Build the plugin image](#build-the-plugin-image)
* [Deploy plugin DaemonSet](#deploy-plugin-daemonset)
* [Deploy by hand](#deploy-by-hand)
* [Build the plugin](#build-the-plugin)
* [Run the plugin as administrator](#run-the-plugin-as-administrator)
* [Verify plugin registration](#verify-plugin-registration)
* [Testing the plugin](#testing-the-plugin)
* [Pre-built Images](#pre-built-images)
* [Verify Plugin Registration](#verify-plugin-registration)
* [Testing and Demos](#testing-and-demos)
## Introduction
@ -25,11 +18,9 @@ The DSA plugin and operator optionally support provisioning of DSA devices and w
## Installation
The following sections detail how to obtain, build, deploy and test the DSA device plugin.
The following sections detail how to use the DSA device plugin.
Examples are provided showing how to deploy the plugin either using a DaemonSet or by hand on a per-node basis.
### Deploy with pre-built container image
### Pre-built Images
[Pre-built images](https://hub.docker.com/r/intel/intel-dsa-plugin)
of this component are available on the Docker hub. These images are automatically built and uploaded
@ -40,17 +31,17 @@ release version numbers in the format `x.y.z`, corresponding to the branches and
repository. Thus the easiest way to deploy the plugin in your cluster is to run this command
```bash
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/dsa_plugin?ref=<REF>
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/dsa_plugin?ref=<RELEASE_VERSION>
daemonset.apps/intel-dsa-plugin created
```
Where `<REF>` needs to be substituted with the desired git ref, e.g. `main`.
Where `<RELEASE_VERSION>` needs to be substituted with the desired [release tag](https://github.com/intel/intel-device-plugins-for-kubernetes/tags) or `main` to get `devel` images.
Nothing else is needed. But if you want to deploy a customized version of the plugin read further.
Nothing else is needed. See [the development guide](../../DEVEL.md) for details if you want to deploy a customized version of the plugin.
### Deploy with initcontainer
#### Automatic Provisioning
There's a sample [DSA initcontainer](https://github.com/intel/intel-device-plugins-for-kubernetes/blob/main/build/docker/intel-idxd-config-initcontainer.Dockerfile) included that provisions DSA devices and workqueues (1 engine / 1 group / 1 wq (user/dedicated)), to deploy:
There's a sample [idxd initcontainer](https://github.com/intel/intel-device-plugins-for-kubernetes/blob/main/build/docker/intel-idxd-config-initcontainer.Dockerfile) included that provisions DSA devices and workqueues (1 engine / 1 group / 1 wq (user/dedicated)). To deploy it:
```bash
$ kubectl apply -k deployments/dsa_plugin/overlays/dsa_initcontainer/
@ -58,8 +49,6 @@ $ kubectl apply -k deployments/dsa_plugin/overlays/dsa_initcontainer/
The provisioning [script](https://github.com/intel/intel-device-plugins-for-kubernetes/blob/main/demo/idxd-init.sh) and [template](https://github.com/intel/intel-device-plugins-for-kubernetes/blob/master/demo/dsa.conf) are available for customization.
### Deploy with initcontainer and provisioning config in the ConfigMap
The provisioning config can optionally be stored in the ProvisioningConfig configMap, which is then passed to the initcontainer through a volume mount.
A node-specific configuration is also possible: pass the node name via NODE_NAME into the initcontainer's environment and a node-specific profile via the configMap volume mount.
@ -70,68 +59,7 @@ To create a custom provisioning config:
$ kubectl create configmap --namespace=inteldeviceplugins-system intel-dsa-config --from-file=demo/dsa.conf
```
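For the node-specific configuration mentioned above, the initcontainer needs the node name in its environment; a minimal sketch using the standard Kubernetes downward API (only the relevant `env` fragment of the initcontainer spec is shown):
```yaml
env:
- name: NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
```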
### Getting the source code
```bash
$ export INTEL_DEVICE_PLUGINS_SRC=/path/to/intel-device-plugins-for-kubernetes
$ git clone https://github.com/intel/intel-device-plugins-for-kubernetes ${INTEL_DEVICE_PLUGINS_SRC}
```
### Deploying as a DaemonSet
To deploy the dsa plugin as a daemonset, you first need to build a container image for the
plugin and ensure that is visible to your nodes.
#### Build the plugin image
The following will use `docker` to build a local container image called
`intel/intel-dsa-plugin` with the tag `devel`.
The image build tool can be changed from the default `docker` by setting the `BUILDER` argument
to the [`Makefile`](Makefile).
```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make intel-dsa-plugin
...
Successfully tagged intel/intel-dsa-plugin:devel
```
#### Deploy plugin DaemonSet
You can then use the [example DaemonSet YAML](/deployments/dsa_plugin/base/intel-dsa-plugin.yaml)
file provided to deploy the plugin. The default kustomization that deploys the YAML as is:
```bash
$ kubectl apply -k deployments/dsa_plugin
daemonset.apps/intel-dsa-plugin created
```
### Deploy by hand
For development purposes, it is sometimes convenient to deploy the plugin 'by hand' on a node.
In this case, you do not need to build the complete container image, and can build just the plugin.
#### Build the plugin
First we build the plugin:
```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make dsa_plugin
```
#### Run the plugin as administrator
Now we can run the plugin directly on the node:
```bash
$ sudo -E ${INTEL_DEVICE_PLUGINS_SRC}/cmd/dsa_plugin/dsa_plugin
device-plugin registered
```
### Verify plugin registration
### Verify Plugin Registration
You can verify the plugin has been registered with the expected nodes by searching for the relevant
resource allocation status on the nodes:
@ -145,7 +73,7 @@ node1
dsa.intel.com/wq-user-shared: 20
```
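Once the resources are advertised, a workload requests them in its pod spec; a minimal sketch of the container `resources` fragment (the amount is illustrative):
```yaml
resources:
  limits:
    dsa.intel.com/wq-user-shared: 1
```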
### Testing the plugin
## Testing and Demos
We can test the plugin is working by deploying the provided example accel-config test image.

View File

@ -4,9 +4,6 @@ Table of Contents
* [Introduction](#introduction)
* [Dependencies](#dependencies)
* [Building](#building)
* [Getting the source code](#getting-the-source-code)
* [Building the image](#building-the-image)
* [Configuring CRI-O](#configuring-cri-o)
## Introduction
@ -40,26 +37,7 @@ install the following:
All components have the same basic dependencies as the
[generic plugin framework dependencies](../../README.md#about)
## Building
The following sections detail how to obtain, build and deploy the CRI-O
prestart hook.
### Getting the source code
```bash
$ export INTEL_DEVICE_PLUGINS_SRC=/path/to/intel-device-plugins-for-kubernetes
$ git clone https://github.com/intel/intel-device-plugins-for-kubernetes ${INTEL_DEVICE_PLUGINS_SRC}
```
### Building the image
```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make intel-fpga-initcontainer
...
Successfully tagged intel/intel-fpga-initcontainer:devel
```
See [the development guide](../../DEVEL.md) for details if you want to deploy a customized version of the CRI hook.
## Configuring CRI-O
@ -68,4 +46,4 @@ file that prevents CRI-O to discover and configure hooks automatically.
For FPGA orchestration programmed mode, the OCI hooks are the key component.
Please ensure that your `/etc/crio/crio.conf` parameter `hooks_dir` is either unset
(to enable default search paths for OCI hooks configuration) or contains the directory
`/etc/containers/oci/hooks.d`.
`/etc/containers/oci/hooks.d`.

View File

@ -3,19 +3,12 @@
Table of Contents
* [Introduction](#introduction)
* [Component overview](#component-overview)
* [FPGA modes](#fpga-modes)
* [Component overview](#component-overview)
* [Modes and Configuration Options](#modes-and-configuration-options)
* [Installation](#installation)
* [Pre-built images](#pre-built-images)
* [Dependencies](#dependencies)
* [Getting the source code](#getting-the-source-code)
* [Deploying as a DaemonSet](#deploying-as-a-daemonset)
* [Verify plugin registration](#verify-plugin-registration)
* [Building the plugin image](#building-the-plugin-image)
* [Deploy by hand](#deploy-by-hand)
* [Build FPGA device plugin](#build-fpga-device-plugin)
* [Run FPGA device plugin in af mode](#run-fpga-device-plugin-in-af-mode)
* [Run FPGA device plugin in region mode](#run-fpga-device-plugin-in-region-mode)
* [Prerequisites](#prerequisites)
* [Pre-built Images](#pre-built-images)
* [Verify Plugin Registration](#verify-plugin-registration)
## Introduction
@ -37,7 +30,7 @@ The components together implement the following features:
- orchestration of FPGA programming
- access control for FPGA hardware
## Component overview
### Component overview
The following components are part of this repository, and work together to support Intel FPGAs under
Kubernetes:
@ -70,7 +63,7 @@ Kubernetes:
The repository also contains an [FPGA helper tool](../fpga_tool/README.md) that may be useful during
development, initial deployment and debugging.
## FPGA modes
### Modes and Configuration options
The FPGA plugin set can run in one of two modes:
@ -97,33 +90,9 @@ af mode:
## Installation
The below sections cover how to obtain, build and install this component.
The below sections cover how to use this component.
Components can generally be installed either using DaemonSets or running them
'by hand' on each node.
### Pre-built images
Pre-built images of the components are available on the [Docker hub](https://hub.docker.com/u/intel).
These images are automatically built and uploaded to the hub from the latest `main` branch of
this repository.
Release tagged images of the components are also available on the Docker hub, tagged with their
release version numbers (of the form `x.y.z`, matching the branch/tag release number in this repo).
The deployment YAML files supplied with these components in this repository use the images with the
`devel` tag by default. If you do not build your own local images, then your Kubernetes cluster may
pull down the `devel` images from the Docker hub by default.
To use the release tagged versions of the images, edit the YAML deployment files appropriately.
The following images are available on the Docker hub:
- [The FPGA plugin](https://hub.docker.com/r/intel/intel-fpga-plugin)
- [The FPGA admission webhook](https://hub.docker.com/r/intel/intel-fpga-admissionwebhook)
- [The FPGA CRI-O prestart hook (in the `initcontainer` image)](https://hub.docker.com/r/intel/intel-fpga-initcontainer)
### Dependencies
### Prerequisites
All components have the same basic dependencies as the
[generic plugin framework dependencies](../../README.md#about)
@ -153,18 +122,6 @@ which is present and thus to use:
Install this component (FPGA device plugin) first, and then follow the links
and instructions to install the other components.
### Getting the source code
To obtain the YAML files used for deployment, or to obtain the source tree if you intend to
do a hand-deployment or build your own image, you will require access to the source code:
```bash
$ export INTEL_DEVICE_PLUGINS_SRC=/path/to/intel-device-plugins-for-kubernetes
$ git clone https://github.com/intel/intel-device-plugins-for-kubernetes ${INTEL_DEVICE_PLUGINS_SRC}
```
### Deploying as a DaemonSet
The FPGA webhook deployment depends on having [cert-manager](https://cert-manager.io/)
installed. See its installation instructions [here](https://cert-manager.io/docs/installation/kubectl/).
@ -177,9 +134,24 @@ cert-manager-webhook-64dc9fff44-29cfc 1/1 Running 0 1m
```
### Pre-built Images
Pre-built images of the components are available on the [Docker hub](https://hub.docker.com/u/intel).
These images are automatically built and uploaded to the hub from the latest `main` branch of
this repository.
Release tagged images of the components are also available on the Docker hub, tagged with their
release version numbers (of the form `x.y.z`, matching the branch/tag release number in this repo).
The following images are available on the Docker hub:
- [The FPGA plugin](https://hub.docker.com/r/intel/intel-fpga-plugin)
- [The FPGA admission webhook](https://hub.docker.com/r/intel/intel-fpga-admissionwebhook)
- [The FPGA CRI-O prestart hook (in the `initcontainer` image)](https://hub.docker.com/r/intel/intel-fpga-initcontainer)
Depending on the FPGA mode, run either
```bash
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/fpga_plugin/overlays/af
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/fpga_plugin/overlays/af?ref=<RELEASE_VERSION>
namespace/intelfpgaplugin-system created
customresourcedefinition.apiextensions.k8s.io/acceleratorfunctions.fpga.intel.com created
customresourcedefinition.apiextensions.k8s.io/fpgaregions.fpga.intel.com created
@ -196,7 +168,7 @@ issuer.cert-manager.io/intelfpgaplugin-selfsigned-issuer created
```
or
```bash
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/fpga_plugin/overlays/region
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/fpga_plugin/overlays/region?ref=<RELEASE_VERSION>
namespace/intelfpgaplugin-system created
customresourcedefinition.apiextensions.k8s.io/acceleratorfunctions.fpga.intel.com created
customresourcedefinition.apiextensions.k8s.io/fpgaregions.fpga.intel.com created
@ -211,6 +183,9 @@ daemonset.apps/intelfpgaplugin-fpgadeviceplugin created
certificate.cert-manager.io/intelfpgaplugin-serving-cert created
issuer.cert-manager.io/intelfpgaplugin-selfsigned-issuer created
```
Where `<RELEASE_VERSION>` needs to be substituted with the desired [release tag](https://github.com/intel/intel-device-plugins-for-kubernetes/tags) or `main` to get `devel` images.
The command should result in two pods running:
```bash
$ kubectl get pods -n intelfpgaplugin-system
@ -219,12 +194,6 @@ intelfpgaplugin-fpgadeviceplugin-skcw5 1/1 Running 0 57s
intelfpgaplugin-webhook-7d6bcb8b57-k52b9 1/1 Running 0 57s
```
If you intend to deploy your own image, you will need to reference the
[image build section](#build-the-plugin-image) first.
If you do not want to deploy the `devel` or release tagged image, you will need to create your
own kustomization overlay referencing your required image.
If you need the FPGA plugin on some nodes to operate in a different mode then add this
annotation to the nodes:
@ -241,7 +210,7 @@ And restart the pods on the nodes.
> also deploys the [FPGA CRI-O hook](../fpga_crihook/README.md) `initcontainer` image, but it will be
> benign (un-used) when running the FPGA plugin in `af` mode.
#### Verify plugin registration
#### Verify Plugin Registration
Verify the FPGA plugin has been deployed on the nodes. The below shows the output
you can expect in `region` mode, but similar output should be expected for `af`
@ -253,76 +222,7 @@ fpga.intel.com/region-ce48969398f05f33946d560708be108a: 1
fpga.intel.com/region-ce48969398f05f33946d560708be108a: 1
```
#### Building the plugin image
If you need to build your own image from sources, and are not using the images
available on the Docker Hub, follow the below details.
> **Note:** The FPGA plugin [DaemonSet YAML](/deployments/fpga_plugin/fpga_plugin.yaml)
> also deploys the [FPGA CRI-O hook](../fpga_crihook/README.md) `initcontainer` image as well. You may
> also wish to build that image locally before deploying the FPGA plugin to avoid deploying
> the Docker hub default image.
The following will use `docker` to build a local container image called
`intel/intel-fpga-plugin` with the tag `devel`.
The image build tool can be changed from the default docker by setting the `BUILDER` argument
to the [Makefile](/Makefile).
```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make intel-fpga-plugin
...
Successfully tagged intel/intel-fpga-plugin:devel
```
This image launches `fpga_plugin` in `af` mode by default.
To use your own container image, create your own kustomization overlay patching
[`deployments/fpga_plugin/base/intel-fpga-plugin-daemonset.yaml`](/deployments/fpga_plugin/base/intel-fpga-plugin-daemonset.yaml)
file.
### Deploy by hand
For development purposes, it is sometimes convenient to deploy the plugin 'by hand'
on a node. In this case, you do not need to build the complete container image,
and can build just the plugin.
> **Note:** The FPGA plugin has a number of other associated items that may also need
> to be configured or installed. It is recommended you reference the actions of the
> DaemonSet YAML deployment for more details.
#### Build FPGA device plugin
When deploying by hand, you only need to build the plugin itself, and not the whole
container image:
```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make fpga_plugin
```
#### Run FPGA device plugin in af mode
```bash
$ export KUBE_CONF=/var/run/kubernetes/admin.kubeconfig # path to kubeconfig with admin's credentials
$ export NODE_NAME="<node name>" # if the node's name was overridden and differs from hostname
$ sudo -E ${INTEL_DEVICE_PLUGINS_SRC}/cmd/fpga_plugin/fpga_plugin -mode af -kubeconfig $KUBE_CONF
FPGA device plugin started in af mode
device-plugin start server at: /var/lib/kubelet/device-plugins/fpga.intel.com-af-f7df405cbd7acf7222f144b0b93acd18.sock
device-plugin registered
```
> **Note**: It is also possible to run the FPGA device plugin using a non-root user. To do this,
the nodes' DAC rules must be configured to allow device plugin socket creation and kubelet registration.
Furthermore, the deployment's `securityContext` must be configured with appropriate `runAsUser/runAsGroup`.
#### Run FPGA device plugin in region mode
```bash
$ export KUBE_CONF=/var/run/kubernetes/admin.kubeconfig # path to kubeconfig with admin's credentials
$ export NODE_NAME="<node name>" # if the node's name was overridden and differs from hostname
$ sudo -E ${INTEL_DEVICE_PLUGINS_SRC}/cmd/fpga_plugin/fpga_plugin -mode region -kubeconfig $KUBE_CONF
FPGA device plugin started in region mode
device-plugin start server at: /var/lib/kubelet/device-plugins/fpga.intel.com-region-ce48969398f05f33946d560708be108a.sock
device-plugin registered
```

View File

@ -3,19 +3,12 @@
Table of Contents
* [Introduction](#introduction)
* [Configuration options](#configuration-options)
* [Modes and Configuration Options](#modes-and-configuration-options)
* [Installation](#installation)
* [Deploy with pre-built container image](#deploy-with-pre-built-container-image)
* [Getting the source code](#getting-the-source-code)
* [Deploying as a DaemonSet](#deploying-as-a-daemonset)
* [Build the plugin image](#build-the-plugin-image)
* [Deploy plugin DaemonSet](#deploy-plugin-daemonset)
* [Fractional resources](#fractional-resources)
* [Deploy by hand](#deploy-by-hand)
* [Build the plugin](#build-the-plugin)
* [Run the plugin as administrator](#run-the-plugin-as-administrator)
* [Verify plugin registration](#verify-plugin-registration)
* [Testing the plugin](#testing-the-plugin)
* [Pre-built Images](#pre-built-images)
* [Fractional Resources](#fractional-resources)
* [Verify Plugin Registration](#verify-plugin-registration)
* [Testing and Demos](#testing-and-demos)
* [Issues with media workloads on multi-GPU setups](#issues-with-media-workloads-on-multi-gpu-setups)
* [Workaround for QSV and VA-API](#workaround-for-qsv-and-va-api)
@ -36,7 +29,7 @@ For example containers with Intel media driver (and components using that), can
video transcoding operations, and containers with the Intel OpenCL / oneAPI Level Zero
backend libraries can offload compute operations to GPU.
### Configuration options
## Modes and Configuration Options
| Flag | Argument | Default | Meaning |
|:---- |:-------- |:------- |:------- |
@ -54,7 +47,7 @@ The following sections detail how to obtain, build, deploy and test the GPU devi
Examples are provided showing how to deploy the plugin either using a DaemonSet or by hand on a per-node basis.
### Deploy with pre-built container image
### Pre-built Images
[Pre-built images](https://hub.docker.com/r/intel/intel-gpu-plugin)
of this component are available on the Docker hub. These images are automatically built and uploaded
@ -69,7 +62,7 @@ $ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/
daemonset.apps/intel-gpu-plugin created
```
Where `<RELEASE_VERSION>` needs to be substituted with the desired release version, e.g. `v0.18.0`.
Where `<RELEASE_VERSION>` needs to be substituted with the desired [release tag](https://github.com/intel/intel-device-plugins-for-kubernetes/tags) or `main` to get `devel` images.
Alternatively, if your cluster runs
[Node Feature Discovery](https://github.com/kubernetes-sigs/node-feature-discovery),
@ -82,55 +75,7 @@ $ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/
daemonset.apps/intel-gpu-plugin created
```
Nothing else is needed. But if you want to deploy a customized version of the plugin read further.
### Getting the source code
```bash
$ export INTEL_DEVICE_PLUGINS_SRC=/path/to/intel-device-plugins-for-kubernetes
$ git clone https://github.com/intel/intel-device-plugins-for-kubernetes ${INTEL_DEVICE_PLUGINS_SRC}
```
### Deploying as a DaemonSet
To deploy the gpu plugin as a daemonset, you first need to build a container image for the
plugin and ensure that is visible to your nodes.
#### Build the plugin image
The following will use `docker` to build a local container image called
`intel/intel-gpu-plugin` with the tag `devel`.
The image build tool can be changed from the default `docker` by setting the `BUILDER` argument
to the [`Makefile`](Makefile).
```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make intel-gpu-plugin
...
Successfully tagged intel/intel-gpu-plugin:devel
```
#### Deploy plugin DaemonSet
You can then use the [example DaemonSet YAML](/deployments/gpu_plugin/base/intel-gpu-plugin.yaml)
file provided to deploy the plugin. The default kustomization that deploys the YAML as is:
```bash
$ kubectl apply -k deployments/gpu_plugin
daemonset.apps/intel-gpu-plugin created
```
Alternatively, if your cluster runs
[Node Feature Discovery](https://github.com/kubernetes-sigs/node-feature-discovery),
you can deploy the device plugin only on nodes with Intel GPU.
The [nfd_labeled_nodes](/deployments/gpu_plugin/overlays/nfd_labeled_nodes/)
kustomization adds the nodeSelector to the DaemonSet:
```bash
$ kubectl apply -k deployments/gpu_plugin/overlays/nfd_labeled_nodes
daemonset.apps/intel-gpu-plugin created
```
Nothing else is needed. See [the development guide](../../DEVEL.md) for details if you want to deploy a customized version of the plugin.
#### Fractional resources
@ -181,31 +126,7 @@ and the second container gets tile 1 from card 1 and tile 0 from card 2.
the nodes' DAC rules must be configured to allow device plugin socket creation and kubelet registration.
Furthermore, the deployment's `securityContext` must be configured with appropriate `runAsUser/runAsGroup`.
### Deploy by hand
For development purposes, it is sometimes convenient to deploy the plugin 'by hand' on a node.
In this case, you do not need to build the complete container image, and can build just the plugin.
#### Build the plugin
First we build the plugin:
```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make gpu_plugin
```
#### Run the plugin as administrator
Now we can run the plugin directly on the node:
```bash
$ sudo -E ${INTEL_DEVICE_PLUGINS_SRC}/cmd/gpu_plugin/gpu_plugin
device-plugin start server at: /var/lib/kubelet/device-plugins/gpu.intel.com-i915.sock
device-plugin registered
```
### Verify plugin registration
### Verify Plugin Registration
You can verify the plugin has been registered with the expected nodes by searching for the relevant
resource allocation status on the nodes:
@ -216,7 +137,7 @@ master
i915: 1
```
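A workload can then request the GPU resource in its pod spec; a minimal sketch of the container `resources` fragment:
```yaml
resources:
  limits:
    gpu.intel.com/i915: 1
```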
### Testing the plugin
## Testing and Demos
We can test the plugin is working by deploying an OpenCL image and running `clinfo`.
The sample OpenCL image can be built using `make intel-opencl-icd` and must be made

View File

@ -4,17 +4,9 @@ Table of Contents
* [Introduction](#introduction)
* [Installation](#installation)
* [Deploy with pre-built container image](#deploy-with-pre-built-container-image)
* [Getting the source code](#getting-the-source-code)
* [Verify node kubelet config](#verify-node-kubelet-config)
* [Deploying as a DaemonSet](#deploying-as-a-daemonset)
* [Build the plugin image](#build-the-plugin-image)
* [Deploy plugin DaemonSet](#deploy-plugin-daemonset)
* [Deploy by hand](#deploy-by-hand)
* [Build the plugin](#build-the-plugin)
* [Run the plugin as administrator](#run-the-plugin-as-administrator)
* [Pre-built images](#pre-built-images)
* [Verify plugin registration](#verify-plugin-registration)
* [Testing the plugin](#testing-the-plugin)
* [Testing and Demos](#testing-and-demos)
## Introduction
@ -26,42 +18,28 @@ The IAA plugin and operator optionally support provisioning of IAA devices and w
## Installation
The following sections detail how to obtain, build, deploy and test the IAA device plugin.
The following sections detail how to use the IAA device plugin.
### Getting the source code
### Pre-built Images
[Pre-built images](https://hub.docker.com/r/intel/intel-iaa-plugin)
of this component are available on the Docker hub. These images are automatically built and uploaded
to the hub from the latest main branch of this repository.
Release tagged images of the components are also available on the Docker hub, tagged with their
release version numbers in the format `x.y.z`, corresponding to the branches and releases in this
repository. Thus the easiest way to deploy the plugin in your cluster is to run this command
```bash
$ git clone https://github.com/intel/intel-device-plugins-for-kubernetes
```
### Deploying as a DaemonSet
To deploy the IAA plugin as a daemonset, you first need to build a container image for the
plugin and ensure that is visible to your nodes.
#### Build the plugin image
The following will use `docker` to build a local container image called
`intel/intel-iaa-plugin` with the tag `devel`.
```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make intel-iaa-plugin
...
Successfully tagged intel/intel-iaa-plugin:devel
```
#### Deploy plugin DaemonSet
You can then use the [example DaemonSet YAML](/deployments/iaa_plugin/base/intel-iaa-plugin.yaml)
file provided to deploy the plugin. The default kustomization that deploys the YAML as is:
```bash
$ kubectl apply -k deployments/iaa_plugin
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/iaa_plugin?ref=<RELEASE_VERSION>
daemonset.apps/intel-iaa-plugin created
```
### Deploy with initcontainer
Where `<RELEASE_VERSION>` needs to be substituted with the desired [release tag](https://github.com/intel/intel-device-plugins-for-kubernetes/tags) or `main` to get `devel` images.
Nothing else is needed. See [the development guide](../../DEVEL.md) for details if you want to deploy a customized version of the plugin.
#### Automatic Provisioning
There's a sample [idxd initcontainer](https://github.com/intel/intel-device-plugins-for-kubernetes/blob/main/build/docker/intel-idxd-initcontainer.Dockerfile) included that provisions IAA devices and workqueues (1 engine / 1 group / 1 wq (user/dedicated)). To deploy it:
@ -71,8 +49,6 @@ $ kubectl apply -k deployments/iaa_plugin/overlays/iaa_initcontainer/
The provisioning [script](https://github.com/intel/intel-device-plugins-for-kubernetes/blob/main/demo/idxd-init.sh) and [template](https://github.com/intel/intel-device-plugins-for-kubernetes/blob/main/demo/iaa.conf) are available for customization.
### Deploy with initcontainer and provisioning config in the ConfigMap
The provisioning config can optionally be stored in the ProvisioningConfig configMap, which is then passed to the initcontainer through a volume mount.
A node-specific configuration is also possible: pass the node name via NODE_NAME into the initcontainer's environment and a node-specific profile via the configMap volume mount.
@ -83,29 +59,7 @@ To create a custom provisioning config:
$ kubectl create configmap --namespace=inteldeviceplugins-system intel-iaa-config --from-file=demo/iaa.conf
```
### Deploy by hand
For development purposes, it is sometimes convenient to deploy the plugin 'by hand' on a node.
In this case, you do not need to build the complete container image, and can build just the plugin.
#### Build the plugin
First we build the plugin:
```bash
$ make iaa_plugin
```
#### Run the plugin as administrator
Now we can run the plugin directly on the node:
```bash
$ sudo -E ./cmd/iaa_plugin/iaa_plugin
device-plugin registered
```
### Verify plugin registration
### Verify Plugin Registration
You can verify the plugin has been registered with the expected nodes by searching for the relevant
resource allocation status on the nodes:
@ -120,7 +74,7 @@ node1
iaa.intel.com/wq-user-shared: 30
```
### Testing the plugin
## Testing and Demos
We can test the plugin is working by deploying the provided example iaa-qpl-demo test image.

View File

@ -3,26 +3,18 @@
Table of Contents
* [Introduction](#introduction)
* [Modes and Configuration options](#modes-and-configuration-options)
* [Modes and Configuration options](#modes-and-configuration-options)
* [Installation](#installation)
* [Prerequisites](#prerequisites)
* [Pre-built image](#pre-built-image)
* [Getting the source code:](#getting-the-source-code)
* [Deploying as a DaemonSet](#deploying-as-a-daemonset)
* [Build the plugin image](#build-the-plugin-image)
* [Deploy the DaemonSet](#deploy-the-daemonset)
* [Verify QAT device plugin is registered:](#verify-qat-device-plugin-is-registered)
* [Deploying by hand](#deploying-by-hand)
* [Build QAT device plugin](#build-qat-device-plugin)
* [Deploy QAT plugin](#deploy-qat-plugin)
* [QAT device plugin Demos](#qat-device-plugin-demos)
* [DPDK QAT demos](#dpdk-qat-demos)
* [DPDK Prerequisites](#dpdk-prerequisites)
* [Build the image](#build-the-image)
* [Deploy the pod](#deploy-the-pod)
* [Manual test run](#manual-test-run)
* [Automated test run](#automated-test-run)
* [OpenSSL QAT demo](#openssl-qat-demo)
* [Pre-built images](#pre-built-images)
* [Verify Plugin Registration](#verify-plugin-registration)
* [Demos and Testing](#demos-and-testing)
* [DPDK QAT demos](#dpdk-qat-demos)
* [DPDK Prerequisites](#dpdk-prerequisites)
* [Deploy the pod](#deploy-the-pod)
* [Manual test run](#manual-test-run)
* [Automated test run](#automated-test-run)
* [OpenSSL QAT demo](#openssl-qat-demo)
* [Checking for hardware](#checking-for-hardware)
## Introduction
@ -44,7 +36,7 @@ Demonstrations are provided utilising [DPDK](https://doc.dpdk.org/) and [OpenSSL
[Kata Containers](https://katacontainers.io/) QAT integration is documented in the
[Kata Containers documentation repository][6].
### Modes and Configuration options
## Modes and Configuration options
The QAT plugin can take a number of command line arguments, summarised in the following table:
@ -100,7 +92,7 @@ are available via two methods. One of them must be installed and enabled:
The demonstrations have their own requirements, listed in their own specific sections.
### Pre-built image
### Pre-built Images
[Pre-built images](https://hub.docker.com/r/intel/intel-qat-plugin)
of this component are available on the Docker hub. These images are automatically built and uploaded
@ -114,7 +106,7 @@ repository. Thus the easiest way to deploy the plugin in your cluster is to run
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/qat_plugin?ref=<RELEASE_VERSION>
```
Where `<RELEASE_VERSION>` needs to be substituted with the desired release version, e.g. `v0.18.0`.
Where `<RELEASE_VERSION>` needs to be substituted with the desired [release tag](https://github.com/intel/intel-device-plugins-for-kubernetes/tags) or `main` to get `devel` images.
An alternative kustomization for deploying the plugin is with the debug mode switched on:
@ -122,72 +114,12 @@ An alternative kustomization for deploying the plugin is with the debug mode swi
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/qat_plugin/overlays/debug?ref=<RELEASE_VERSION>
```
The deployment YAML files supplied with the component in this repository use the images with the `devel`
tag by default. If you do not build your own local images, your Kubernetes cluster may pull down
the devel images from the Docker hub by default.
To use the release tagged versions of the images, edit the
[YAML deployment files](/deployments/qat_plugin/base/)
appropriately.
### Getting the source code
```bash
$ export INTEL_DEVICE_PLUGINS_SRC=/path/to/intel-device-plugins-for-kubernetes
$ git clone https://github.com/intel/intel-device-plugins-for-kubernetes ${INTEL_DEVICE_PLUGINS_SRC}
```
### Deploying as a DaemonSet
To deploy the plugin as a DaemonSet, you first need to build a container image for the plugin and
ensure that is visible to your nodes. If you do not build your own plugin, your cluster may pull
the image from the pre-built Docker Hub images, depending on your configuration.
#### Build the plugin image
The following will use `docker` to build a local container image called `intel/intel-qat-plugin`
with the tag `devel`. The image build tool can be changed from the default docker by setting the
`BUILDER` argument to the [Makefile](/Makefile).
```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make intel-qat-plugin
...
Successfully tagged intel/intel-qat-plugin:devel
```
> **Note**: `kernel` mode is excluded from the build by default. Run `make intel-qat-plugin-kerneldrv`
> to get a `kernel` mode enabled image.
#### Deploy the DaemonSet
Deploying the plugin involves first the deployment of a
[ConfigMap](/deployments/qat_plugin/base/intel-qat-plugin-config.yaml) and the
[DaemonSet YAML](/deployments/qat_plugin/base/intel-qat-plugin.yaml).
There is a kustomization for deploying both:
```bash
$ kubectl apply -k ${INTEL_DEVICE_PLUGINS_SRC}/deployments/qat_plugin
```
and an alternative kustomization for deploying the plugin in the debug mode:
```bash
$ kubectl apply -k ${INTEL_DEVICE_PLUGINS_SRC}/deployments/qat_plugin/overlays/debug
```
The third option is to deploy the `yaml`s separately:
```bash
$ kubectl create -f ${INTEL_DEVICE_PLUGINS_SRC}/deployments/qat_plugin/base/intel-qat-plugin-config.yaml
$ kubectl create -f ${INTEL_DEVICE_PLUGINS_SRC}/deployments/qat_plugin/base/intel-qat-plugin.yaml
```
> **Note**: It is also possible to run the QAT device plugin using a non-root user. To do this,
> the nodes' DAC rules must be configured to allow PCI driver unbinding/binding, device plugin
> socket creation and kubelet registration. Furthermore, the deployment's `securityContext` must
> be configured with appropriate `runAsUser/runAsGroup`.
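A sketch of adding the `securityContext` after deployment (the DaemonSet name and the IDs below are assumptions, not values shipped with this repository; the node-level DAC configuration still has to be done separately):
```bash
$ kubectl patch daemonset intel-qat-plugin --type='json' -p='[
    {"op": "add",
     "path": "/spec/template/spec/containers/0/securityContext",
     "value": {"runAsUser": 1500, "runAsGroup": 1500}}
  ]'
```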
#### Verify QAT device plugin is registered
#### Verify Plugin Registration
Verification of the plugin deployment and detection of QAT hardware can be confirmed by
examining the resource allocations on the nodes:
@ -198,50 +130,12 @@ $ kubectl describe node <node name> | grep qat.intel.com/generic
qat.intel.com/generic: 10
```
### Deploying by hand
For development purposes, it is sometimes convenient to deploy the plugin 'by hand' on a node.
In this case, you do not need to build the complete container image, and can build just the plugin.
#### Build QAT device plugin
```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make qat_plugin
```
#### Deploy QAT plugin
Deploy the plugin on a node by running it as `root`. The below is just an example - modify the
parameters as necessary for your setup:
```bash
$ sudo -E ${INTEL_DEVICE_PLUGINS_SRC}/cmd/qat_plugin/qat_plugin \
-dpdk-driver igb_uio -kernel-vf-drivers dh895xccvf -max-num-devices 10 -debug
QAT device plugin started
Discovered Devices below:
03:01.0 device: corresponding DPDK device detected is uio0
03:01.1 device: corresponding DPDK device detected is uio1
03:01.2 device: corresponding DPDK device detected is uio2
03:01.3 device: corresponding DPDK device detected is uio3
03:01.4 device: corresponding DPDK device detected is uio4
03:01.5 device: corresponding DPDK device detected is uio5
03:01.6 device: corresponding DPDK device detected is uio6
03:01.7 device: corresponding DPDK device detected is uio7
03:02.0 device: corresponding DPDK device detected is uio8
03:02.1 device: corresponding DPDK device detected is uio9
The number of devices discovered are:10
device-plugin start server at: /var/lib/kubelet/device-plugins/intelQAT.sock
device-plugin registered
ListAndWatch: Sending device response
```
### QAT device plugin Demos
## Demos and Testing
The below sections cover `DPDK` and `OpenSSL` demos, both of which utilise the
QAT device plugin under Kubernetes.
#### DPDK QAT demos
### DPDK QAT demos
The Data Plane Development Kit (DPDK) QAT demos use DPDK
[crypto-perf](https://doc.dpdk.org/guides/tools/cryptoperf.html) and
@ -249,28 +143,14 @@ The Data Plane Development Kit (DPDK) QAT demos use DPDK
DPDK QAT Poll-Mode Drivers (PMD). For more information on the tools' parameters, refer to the
website links.
##### DPDK Prerequisites
#### DPDK Prerequisites
For the DPDK QAT demos to work, the DPDK drivers must be loaded and configured.
For more information, refer to:
[DPDK Getting Started Guide for Linux](https://doc.dpdk.org/guides/linux_gsg/index.html) and
[DPDK Getting Started Guide, Linux Drivers section](http://dpdk.org/doc/guides/linux_gsg/linux_drivers.html)
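As a rough sketch, assuming the `vfio-pci` driver and the PCI addresses used elsewhere in this document (adapt both to your platform):
```bash
# Sketch: load a DPDK-compatible VF driver and bind one QAT VF to it.
$ sudo modprobe vfio-pci
$ sudo dpdk-devbind.py --bind=vfio-pci 0000:03:01.0
$ dpdk-devbind.py --status
```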
##### Build the image
The demo uses a container image. You can either use the
[pre-built image from the Docker Hub](https://hub.docker.com/r/intel/crypto-perf), or build your own local copy.
To build the DPDK demo image:
```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make crypto-perf
...
Successfully tagged intel/crypto-perf:devel
```
##### Deploy the pod
#### Deploy the pod
In the pod specification file, add container resource request and limit.
For example, `qat.intel.com/generic: <number of devices>` for a container requesting QAT devices.
@ -278,7 +158,7 @@ For example, `qat.intel.com/generic: <number of devices>` for a container reques
For a DPDK-based workload, you may need to add hugepage request and limit.
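For illustration, a minimal, hypothetical pod spec combining a QAT resource request with hugepages could look like the following; the pod name, image, resource counts and hugepage size are placeholders:
```bash
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: qat-request-example       # placeholder name
spec:
  containers:
  - name: app
    image: busybox                # placeholder image
    command: ["sleep", "3600"]
    resources:
      requests:
        qat.intel.com/generic: 1
        hugepages-2Mi: 256Mi
        memory: 256Mi
      limits:
        qat.intel.com/generic: 1
        hugepages-2Mi: 256Mi
        memory: 256Mi
    # securityContext:            # uncomment when the igb_uio VF driver is
    #   capabilities:             # used (see the note below)
    #     add: ["SYS_ADMIN"]
EOF
```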
```bash
$ kubectl apply -k ${INTEL_DEVICE_PLUGINS_SRC}/deployments/qat_dpdk_app/base/
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/qat_dpdk_app/base/
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
qat-dpdk 1/1 Running 0 27m
@ -289,7 +169,7 @@ $ kubectl get pods
> **Note**: If the `igb_uio` VF driver is used with the QAT device plugin,
> the workload must be deployed with `SYS_ADMIN` capabilities added.
##### Manual test run
#### Manual test run
Manually execute the `dpdk-test-crypto-perf` application to review the logs:
@ -306,14 +186,14 @@ $ dpdk-test-crypto-perf -l 6-7 -w $QAT1 \
> **Note**: Adapt the `.so` versions to what the DPDK version in the container provides.
##### Automated test run
#### Automated test run
It is also possible to deploy and run `crypto-perf` using the following
`kustomize` overlays:
```bash
$ kubectl apply -k ${INTEL_DEVICE_PLUGINS_SRC}/deployments/qat_dpdk_app/test-crypto1
$ kubectl apply -k ${INTEL_DEVICE_PLUGINS_SRC}/deployments/qat_dpdk_app/test-compress1
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/qat_dpdk_app/test-crypto1
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/qat_dpdk_app/test-compress1
$ kubectl logs qat-dpdk-test-crypto-perf-tc1
$ kubectl logs qat-dpdk-test-compress-perf-tc1
```
@ -321,7 +201,7 @@ $ kubectl logs qat-dpdk-test-compress-perf-tc1
> **Note**: for `test-crypto1` and `test-compress1` to work, the cluster must enable
[Kubernetes CPU manager's](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/) `static` policy.
#### OpenSSL QAT demo
### OpenSSL QAT demo
Please refer to the [Kata Containers documentation][8] for details on the OpenSSL
QAT acceleration demo.


@ -1,25 +1,18 @@
# Intel Software Guard Extensions (SGX) device plugin for Kubernetes
Contents
Table of Contents
* [Introduction](#introduction)
* [Modes and Configuration Options](#modes-and-configuration-options)
* [Installation](#installation)
* [Prerequisites](#prerequisites)
* [Backwards compatiblity note](#backwards-compatibility-note)
* [Deploying with Pre-built images](#deploying-with-pre-built-images)
* [Getting the source code](#getting-the-source-code)
* [Deploying as a DaemonSet](#deploying-as-a-daemonset)
* [Build the plugin image](#build-the-plugin-image)
* [Deploy the DaemonSet](#deploy-the-daemonset)
* [Verify SGX device plugin is registered](#verify-sgx-device-plugin-is-registered)
* [Deploying by hand](#deploying-by-hand)
* [Build SGX device plugin](#build-sgx-device-plugin)
* [Deploy SGX plugin](#deploy-sgx-plugin)
* [SGX device plugin demos](#sgx-device-plugin-demos)
* [SGX ECDSA Remote Attestation](#sgx-ecdsa-remote-attestation)
* [Remote Attestation Prerequisites](#remote-attestation-prerequisites)
* [Build the images](#build-the-image)
* [Deploy the pod](#deploy-the-pod)
* [Pre-built Images](#pre-built-images)
* [Verify Plugin Registration](#verify-plugin-registration)
* [Testing and Demos](#testing-and-demos)
* [SGX ECDSA Remote Attestation](#sgx-ecdsa-remote-attestation)
* [Remote Attestation Prerequisites](#remote-attestation-prerequisites)
* [Build the images](#build-the-image)
* [Deploy the pod](#deploy-the-pod)
## Introduction
@ -52,14 +45,12 @@ the complete list of logging related options.
## Installation
The following sections cover how to obtain, build and install the necessary Kubernetes SGX specific
The following sections cover how to use the necessary Kubernetes SGX specific
components.
They can be installed either using a DaemonSet or running 'by hand' on each node.
### Prerequisites
The component has the same basic dependancies as the
The component has the same basic dependencies as the
[generic plugin framework dependencies](../../README.md#about).
The SGX device plugin requires Linux Kernel SGX drivers to be available. These drivers
@ -68,7 +59,10 @@ is also known to work.
The hardware platform must support SGX Flexible Launch Control.
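As a quick sanity check on a node, assuming the in-tree driver and its default device node names, something like the following can be used (`sgx_lc` is the CPU flag for Flexible Launch Control):
```bash
# Sketch: check for Flexible Launch Control support and the in-tree driver's
# device nodes.
$ grep -o -m1 sgx_lc /proc/cpuinfo
$ ls /dev/sgx_enclave /dev/sgx_provision
```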
### Deploying with Pre-built images
The SGX deployment depends on having [cert-manager](https://cert-manager.io/)
installed. See its installation instructions [here](https://cert-manager.io/docs/installation/kubectl/).
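For example, cert-manager's static manifest can be applied directly; the version below is only an example, so check the cert-manager releases for a current one:
```bash
$ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.9.1/cert-manager.yaml
```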
### Pre-built Images
[Pre-built images](https://hub.docker.com/u/intel/)
are available on Docker Hub. These images are automatically built and uploaded
@ -83,9 +77,11 @@ The deployment YAML files supplied with the components in this repository use th
tag by default. If you do not build your own local images, your Kubernetes cluster may pull down
the devel images from Docker Hub by default.
`<RELEASE_VERSION>` needs to be substituted with the desired release version, e.g. `v0.19.0` or main.
Where `<RELEASE_VERSION>` needs to be substituted with the desired [release tag](https://github.com/intel/intel-device-plugins-for-kubernetes/tags) or `main` to get `devel` images.
#### Deploy node-feature-discovery
### Installation Using the Operator
First, deploy `node-feature-discovery`:
```bash
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd/overlays/sgx?ref=<RELEASE_VERSION>
@ -93,9 +89,9 @@ $ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/
```
**Note:** The [default configuration](/deployments/nfd/overlays/node-feature-rules/node-feature-rules.yaml) assumes that the in-tree driver is used and enabled (`CONFIG_X86_SGX=y`). If
the SGX DCAP out-of-tree driver is used, the `kernel.config` match expression in must be removed.
the SGX DCAP out-of-tree driver is used, the `kernel.config` match expression must be removed.
#### Deploy Intel Device plugin operator
Next, deploy the Intel Device plugin operator:
```bash
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/operator/default?ref=<RELEASE_VERSION>
@ -103,47 +99,15 @@ $ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/
**Note:** See the operator [deployment details](/cmd/operator/README.md) for its dependencies and for setting it up on systems behind proxies.
#### Deploy SGX device plugin with the operator
Finally, deploy the SGX device plugin with the operator:
```bash
$ kubectl apply -f https://raw.githubusercontent.com/intel/intel-device-plugins-for-kubernetes/<RELEASE_VERSION>/deployments/operator/samples/deviceplugin_v1_sgxdeviceplugin.yaml
```
### Getting the source code
### Installation Using kubectl
```bash
$ export INTEL_DEVICE_PLUGINS_SRC=/path/to/intel-device-plugins-for-kubernetes
$ git clone https://github.com/intel/intel-device-plugins-for-kubernetes ${INTEL_DEVICE_PLUGINS_SRC}
```
### Deploying as a DaemonSet
The SGX deployment documented here depends on having [cert-manager](https://cert-manager.io/)
installed. See its installation instructions [here](https://cert-manager.io/docs/installation/kubectl/).
You also need to build a container image for the plugin and ensure that it is
visible to your nodes.
#### Build the plugin and EPC source images
The following will use `docker` to build local container images called `intel/intel-sgx-plugin`
and `intel/intel-sgx-initcontainer` with the tag `devel`. The image build tool can be changed from the
default docker by setting the `BUILDER` argument to the [Makefile](/Makefile).
```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make intel-sgx-plugin
...
Successfully tagged intel/intel-sgx-plugin:devel
$ make intel-sgx-initcontainer
...
Successfully tagged intel/intel-sgx-initcontainer:devel
```
#### Deploy the DaemonSet
There are two alternative ways to deploy SGX device plugin.
There are two alternative ways to deploy SGX device plugin using `kubectl`.
The first approach involves deployment of the [SGX DaemonSet YAML](/deployments/sgx_plugin/base/intel-sgx-plugin.yaml)
and [node-feature-discovery](/deployments/nfd/overlays/sgx/kustomization.yaml)
@ -151,17 +115,22 @@ with the necessary configuration.
There is a kustomization for deploying everything:
```bash
$ kubectl apply -k ${INTEL_DEVICE_PLUGINS_SRC}/deployments/sgx_plugin/overlays/epc-nfd/
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/sgx_plugin/overlays/epc-nfd/
```
The second approach has a lesser deployment footprint. It does not deploy NFD, but a helper daemonset that creates `sgx.intel.com/capable='true'` node label and advertises EPC capacity to the API server.
The second approach has a smaller deployment footprint. It does not require NFD; instead, a helper daemonset creates the `sgx.intel.com/capable='true'` node label and advertises EPC capacity to the API server.
The following kustomization is used for this approach:
```bash
$ kubectl apply -k ${INTEL_DEVICE_PLUGINS_SRC}/deployments/sgx_plugin/overlays/epc-register/
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/sgx_plugin/overlays/epc-register/
```
#### Verify SGX device plugin is registered:
Additionally, the SGX admission webhook must be deployed:
```bash
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/sgx_admissionwebhook/
```
### Verify Plugin Registration
Verification of the plugin deployment and detection of SGX hardware can be confirmed by
examining the resource allocations on the nodes:
@ -180,33 +149,8 @@ $ kubectl describe node <node name> | grep sgx.intel.com
sgx.intel.com/provision 1 1
```
### Deploying by hand
For development purposes, it is sometimes convenient to deploy the plugin 'by hand' on a node.
In this case, you do not need to build the complete container image, and can build just the plugin.
#### Build SGX device plugin
```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make sgx_plugin
```
#### Deploy SGX plugin
Deploy the plugin on a node by running it as `root`. The below is just an example - modify the
parameters as necessary for your setup:
```bash
$ sudo -E ${INTEL_DEVICE_PLUGINS_SRC}/cmd/sgx_plugin/sgx_plugin -enclave-limit 50 -provision-limit 1 -v 2
I0626 20:33:01.414446 964346 server.go:219] Start server for provision at: /var/lib/kubelet/device-plugins/sgx.intel.com-provision.sock
I0626 20:33:01.414640 964346 server.go:219] Start server for enclave at: /var/lib/kubelet/device-plugins/sgx.intel.com-enclave.sock
I0626 20:33:01.417315 964346 server.go:237] Device plugin for provision registered
I0626 20:33:01.417748 964346 server.go:237] Device plugin for enclave registered
```
### SGX device plugin demos
#### SGX ECDSA Remote Attestation
## Testing and Demos
### SGX ECDSA Remote Attestation
The SGX remote attestation allows a relying party to verify that the software is running inside an Intel® SGX enclave on a platform
that has the trusted computing base up to date.
@ -216,7 +160,7 @@ SGX PCK Certificate Cache Service (PCCS) that is configured to service localhost
Read more about [SGX Remote Attestation](https://software.intel.com/content/www/us/en/develop/topics/software-guard-extensions/attestation-services.html).
##### Remote Attestation Prerequisites
#### Remote Attestation Prerequisites
For the SGX ECDSA Remote Attestation demo to work, the platform must be correctly registered and a PCCS running.
@ -226,7 +170,7 @@ For documentation to set up Intel® reference PCCS, refer to:
Furthermore, the Kubernetes cluster must be set up according to the [instructions above](#pre-built-images).
##### Build the image
#### Build the image
The demo uses container images built from Intel® SGX SDK and DCAP releases.
@ -242,7 +186,7 @@ $ make sgx-sdk-demo
Successfully tagged intel/sgx-sdk-demo:devel
```
##### Deploy the pods
#### Deploy the pods
The demo runs Intel aesmd (architectural enclaves service daemon) that is responsible
for generating SGX quotes for workloads. It is deployed with `hostNetwork: true`


@ -4,18 +4,9 @@ Table of Contents
* [Introduction](#introduction)
* [Installation](#installation)
* [Getting the source code](#getting-the-source-code)
* [Deploying as a DaemonSet](#deploying-as-a-daemonset)
* [Build the plugin image](#build-the-plugin-image)
* [Deploy plugin DaemonSet](#deploy-plugin-daemonset)
* [Deploy by hand](#deploy-by-hand)
* [Build the plugin](#build-the-plugin)
* [Run the plugin as administrator](#run-the-plugin-as-administrator)
* [Verify plugin registration](#verify-plugin-registration)
* [Testing the plugin](#testing-the-plugin)
* [Build a Docker image with an classification example](#build-a-docker-image-with-an-classification-example)
* [Create a job running unit tests off the local Docker image](#create-a-job-running-unit-tests-off-the-local-docker-image)
* [Review the job logs](#review-the-job-logs)
* [Pre-built Images](#pre-built-images)
* [Verify Plugin Registration](#verify-plugin-registration)
* [Testing and Demos](#testing-and-demos)
## Introduction
@ -47,86 +38,38 @@ This card has:
## Installation
The following sections detail how to obtain, build, deploy and test the VPU device plugin.
The following sections detail how to use the VPU device plugin.
Examples are provided showing how to deploy the plugin either using a DaemonSet or by hand on a per-node basis.
### Pre-built Images
### Getting the source code
[Pre-built images](https://hub.docker.com/r/intel/intel-vpu-plugin)
of this component are available on the Docker hub. These images are automatically built and uploaded
to the hub from the latest main branch of this repository.
> **Note:** It is presumed you have a valid and configured [golang](https://golang.org/) environment
> that meets the minimum required version.
Release tagged images of the components are also available on the Docker hub, tagged with their
release version numbers in the format `x.y.z`, corresponding to the branches and releases in this
repository. Thus the easiest way to deploy the plugin in your cluster is to run this command
```bash
$ mkdir -p $(go env GOPATH)/src/github.com/intel
$ git clone https://github.com/intel/intel-device-plugins-for-kubernetes $(go env GOPATH)/src/github.com/intel/intel-device-plugins-for-kubernetes
```
### Deploying as a DaemonSet
To deploy the vpu plugin as a daemonset, you first need to build a container image for the
plugin and ensure that it is visible to your nodes.
#### Build the plugin image
The following will use `docker` to build a local container image called
`intel/intel-vpu-plugin` with the tag `devel`.
The image build tool can be changed from the default `docker` by setting the `BUILDER` argument
to the [`Makefile`](/Makefile).
```bash
$ cd $(go env GOPATH)/src/github.com/intel/intel-device-plugins-for-kubernetes
$ make intel-vpu-plugin
...
Successfully tagged intel/intel-vpu-plugin:devel
```
#### Deploy plugin DaemonSet
You can then use the [example DaemonSet YAML](/deployments/vpu_plugin/base/intel-vpu-plugin.yaml)
file provided to deploy the plugin. The default kustomization deploys the YAML as is:
```bash
$ kubectl apply -k deployments/vpu_plugin
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/vpu_plugin?ref=<RELEASE_VERSION>
daemonset.apps/intel-vpu-plugin created
```
Where `<RELEASE_VERSION>` needs to be substituted with the desired [release tag](https://github.com/intel/intel-device-plugins-for-kubernetes/tags) or `main` to get `devel` images.
For xlink device, deploy DaemonSet as
```bash
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/vpu_plugin/overlays/xlink
daemonset.apps/intel-vpu-plugin created
```
Nothing else is needed. See [the development guide](../../DEVEL.md) for details if you want to deploy a customized version of the plugin.
> **Note**: It is also possible to run the VPU device plugin using a non-root user. To do this,
the nodes' DAC rules must be configured to allow device plugin socket creation and kubelet registration.
Furthermore, the deployment's `securityContext` must be configured with appropriate `runAsUser/runAsGroup`.
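A sketch of adding the `securityContext` to the deployed `intel-vpu-plugin` DaemonSet (the IDs are placeholders):
```bash
$ kubectl patch daemonset intel-vpu-plugin --type='json' -p='[
    {"op": "add",
     "path": "/spec/template/spec/containers/0/securityContext",
     "value": {"runAsUser": 1500, "runAsGroup": 1500}}
  ]'
```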
For xlink device, deploy DaemonSet as
```bash
$ kubectl apply -k deployments/vpu_plugin/overlays/xlink
daemonset.apps/intel-vpu-plugin created
```
### Deploy by hand
For development purposes, it is sometimes convenient to deploy the plugin 'by hand' on a node.
In this case, you do not need to build the complete container image, and can build just the plugin.
#### Build the plugin
First we build the plugin:
> **Note:** the VPU plugin depends on libusb-1.0-0-dev; you need to install it before building the plugin.
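For example, on Debian/Ubuntu-based hosts (package names may differ on other distributions):
```bash
$ sudo apt install libusb-1.0-0-dev
```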
```bash
$ cd $(go env GOPATH)/src/github.com/intel/intel-device-plugins-for-kubernetes
$ make vpu_plugin
```
#### Run the plugin as administrator
Now we can run the plugin directly on the node:
```bash
$ sudo $(go env GOPATH)/src/github.com/intel/intel-device-plugins-for-kubernetes/cmd/vpu_plugin/vpu_plugin
VPU device plugin started
```
### Verify plugin registration
### Verify Plugin Registration
You can verify the plugin has been registered with the expected nodes by searching for the relevant
resource allocation status on the nodes:
@ -137,11 +80,11 @@ vcaanode00
hddl: 12
```
### Testing the plugin
## Testing and Demos
We can test that the plugin is working by deploying the provided example OpenVINO image with the HDDL plugin enabled.
#### Build a Docker image with an classification example
### Build a Docker image with a classification example
```bash
$ cd $(go env GOPATH)/src/github.com/intel/intel-device-plugins-for-kubernetes
@ -150,7 +93,7 @@ $ make ubuntu-demo-openvino
Successfully tagged intel/ubuntu-demo-openvino:devel
```
#### Create a job running unit tests off the local Docker image
### Create a job running unit tests off the local Docker image
```bash
$ cd $(go env GOPATH)/src/github.com/intel/intel-device-plugins-for-kubernetes
@ -158,7 +101,7 @@ $ kubectl apply -f demo/intelvpu-job.yaml
job.batch/intelvpu-demo-job created
```
#### Review the job logs
### Review the job logs
```bash
$ kubectl get pods | fgrep intelvpu