Documentation: Fixes broken links and standardizes headers.

Signed-off-by: Kevin Putnam <kevin.putnam@intel.com>
Kevin Putnam 2020-09-14 12:19:13 -07:00
parent 429770c486
commit 1d149ffee6
12 changed files with 163 additions and 167 deletions


@@ -1,5 +1,7 @@
-How to develop simple device plugins
-====================================
+# Development
+
+## How to develop simple device plugins

To create a simple device plugin without the hassle of developing your own gRPC
server, you can use a package included in this repository called

@@ -62,8 +64,7 @@ Optionally, your device plugin may also implement the
before they are sent to `kubelet`. To see an example, refer to the FPGA
plugin which implements this interface to annotate its responses.
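The full framework API, including the optional interfaces mentioned above, can be browsed locally; the import path below is taken from the repository's GoDoc badge, and a local Go toolchain is assumed:

```bash
# Print the package documentation for the device plugin framework
$ go doc github.com/intel/intel-device-plugins-for-kubernetes/pkg/deviceplugin
```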
-Logging
--------
+### Logging

The framework uses [`klog`](https://github.com/kubernetes/klog) as its logging
framework. Plugins are encouraged to also use `klog` to maintain uniformity

@@ -84,8 +85,7 @@ The default is to not log `Info()` calls. This can be changed using the plugin c
line `-v` parameter. The additional annotations prepended to log lines by 'klog' can be disabled
with the `-skip_headers` option.
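For instance, a plugin binary could be started like this to exercise both options (the binary name is only illustrative; the flags are the ones described above):

```bash
# Log Info() calls up to verbosity 4 and omit klog's per-line header
# annotations; "gpu_plugin" is an illustrative binary name
$ sudo ./gpu_plugin -v 4 -skip_headers
```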
-Error Conventions
------------------
+### Error Conventions

The framework has a convention for producing and logging errors. Ideally plugins will also adhere
to the convention.

@@ -122,8 +122,7 @@ Otherwise, they can be logged as simple values:
klog.Warningf("Example of a warning due to an external error: %v", err)
```

-How to build against a newer version of Kubernetes
-==================================================
+## How to build against a newer version of Kubernetes

First you need to update module dependencies. The easiest way is to use the
script copied from https://github.com/kubernetes/kubernetes/issues/79384#issuecomment-521493597:
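In outline, that script resolves the version each `k8s.io` staging module was tagged with for a given release and pins it with a `replace` directive. A condensed sketch, with an abbreviated module list (see the linked comment for the full version):

```bash
#!/bin/sh
# Sketch only: pin k8s.io staging modules to the tags published for a
# Kubernetes release, then update k8s.io/kubernetes itself. The module
# list here is abbreviated.
VERSION=v1.19.0
for MOD in k8s.io/api k8s.io/apimachinery k8s.io/client-go; do
    # Staging repos tag releases as kubernetes-1.x.y; resolve that tag
    # to the module version Go records in go.mod
    V=$(go mod download -json "${MOD}@kubernetes-${VERSION#v}" |
        sed -n 's|.*"Version": "\(.*\)".*|\1|p')
    go mod edit "-replace=${MOD}=${MOD}@${V}"
done
go get "k8s.io/kubernetes@${VERSION}"
```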
@@ -163,4 +162,4 @@ $ make generate
$ make test
```
and fix all new compilation issues.


@@ -1,11 +1,14 @@
-# Intel® Device Plugins for Kubernetes
+# Overview

[![Build Status](https://github.com/intel/intel-device-plugins-for-kubernetes/workflows/CI/badge.svg?branch=master)](https://github.com/intel/intel-device-plugins-for-kubernetes/actions?query=workflow%3ACI)
[![Go Report Card](https://goreportcard.com/badge/github.com/intel/intel-device-plugins-for-kubernetes)](https://goreportcard.com/report/github.com/intel/intel-device-plugins-for-kubernetes)
[![GoDoc](https://godoc.org/github.com/intel/intel-device-plugins-for-kubernetes/pkg/deviceplugin?status.svg)](https://godoc.org/github.com/intel/intel-device-plugins-for-kubernetes/pkg/deviceplugin)

-## Table of Contents
+This repository contains a framework for developing plugins for the Kubernetes
+[device plugins framework](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/),
+along with a number of device plugin implementations utilising that framework.
+
+Table of Contents
-* [About](#about)
* [Prerequisites](#prerequisites)
* [Plugins](#plugins)
* [GPU device plugin](#gpu-device-plugin)
@@ -22,12 +25,6 @@
* [Supported Kubernetes versions](#supported-kubernetes-versions)
* [Related code](#related-code)

-## About
-This repository contains a framework for developing plugins for the Kubernetes
-[device plugins framework](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/),
-along with a number of device plugin implementations utilising that framework.

## Prerequisites

Prerequisites for building and running these device plugins include:
@@ -212,7 +209,7 @@ $ KUBEBUILDER_ASSETS=${HOME}/work/kubebuilder-assets make envtest
## Supported Kubernetes versions

-Releases are made under the github [releases area](../../releases). Supported releases and
+Releases are made under the GitHub [releases area](https://github.com/intel/intel-device-plugins-for-kubernetes/releases). Supported releases and
matching Kubernetes versions are listed below:

| Branch | Kubernetes branch/version |

@@ -227,4 +224,4 @@ matching Kubernetes versions are listed below:
## Related code

A related Intel SRIOV network device plugin can be found in [this repository](https://github.com/intel/sriov-network-device-plugin)


@@ -1,6 +1,6 @@
# Intel FPGA admission controller for Kubernetes

-# Table of Contents
+Table of Contents

* [Introduction](#introduction)
* [Dependencies](#dependencies)

@@ -10,7 +10,7 @@
* [Mappings](#mappings)
* [Next steps](#next-steps)

-# Introduction
+## Introduction

The FPGA admission controller is one of the components used to add support for Intel FPGA
devices to Kubernetes.
@@ -31,7 +31,7 @@ The admission controller also keeps the user from bypassing namespaced mapping r
by denying admission of any pods that are trying to use internal knowledge of InterfaceID or
Bitstream ID environment variables used by the prestart hook.

-# Dependencies
+## Dependencies

This component is one of a set of components that work together. You may also want to
install the following:

@@ -42,12 +42,12 @@ install the following:
All components have the same basic dependencies as the
[generic plugin framework dependencies](../../README.md#about)

-# Installation
+## Installation

The following sections detail how to obtain, build and deploy the admission
controller webhook plugin.
-## Pre-requisites
+### Pre-requisites

The webhook depends on having [cert-manager](https://cert-manager.io/)
installed:

@@ -89,7 +89,7 @@ spec:
...
```

-## Deployment
+### Deployment

To deploy the webhook, run
@@ -108,7 +108,7 @@ issuer.cert-manager.io/intelfpgawebhook-selfsigned-issuer created
```

Now you can deploy your mappings.

-# Mappings
+## Mappings

Mappings are an essential part of the setup that gives a cluster administrator a flexible
instrument to manage FPGA bitstreams and to control access to them. Being a set of
@@ -148,12 +148,12 @@ bitstream to a region before the container is started.
Mappings of resource names are configured with objects of `AcceleratorFunction` and
`FpgaRegion` custom resource definitions found respectively in
-[`./deployment/fpga_admissionwebhook/crd/bases/fpga.intel.com_af.yaml`](../../deployment/fpga_admissionwebhook/crd/bases/fpga.intel.com_af.yaml)
+[`./deployment/fpga_admissionwebhook/crd/bases/fpga.intel.com_af.yaml`](/deployments/fpga_admissionwebhook/crd/bases/fpga.intel.com_acceleratorfunctions.yaml)
-and [`./deployment/fpga_admissionwebhook/crd/bases/fpga.intel.com_region.yaml`](../../deployment/fpga_admissionwebhook/crd/bases/fpga.intel.com_region.yaml).
+and [`./deployment/fpga_admissionwebhook/crd/bases/fpga.intel.com_region.yaml`](/deployments/fpga_admissionwebhook/crd/bases/fpga.intel.com_fpgaregions.yaml).

Mappings between 'names' and 'ID's are controlled by the admission controller
mappings collection file found in
-[`./deployments/fpga_admissionwebhook/mappings-collection.yaml`](../../deployments/fpga_admissionwebhook/mappings-collection.yaml).
+[`./deployments/fpga_admissionwebhook/mappings-collection.yaml`](/deployments/fpga_admissionwebhook/mappings-collection.yaml).

This mappings file can be deployed with

```bash

@@ -163,6 +163,6 @@ $ kubectl apply -f https://raw.githubusercontent.com/intel/intel-device-plugins-
Note that the mappings are scoped to the namespaces they were created in
and they are applicable to pods created in the corresponding namespaces.
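To give a flavour of what one of these objects looks like, a hypothetical `AcceleratorFunction` might be created as follows. The apiVersion, field names and IDs here are illustrative assumptions, not values copied from the collection file:

```bash
# Hypothetical mapping object; apiVersion, field names and IDs are
# assumptions for illustration only
$ kubectl apply -f - <<EOF
apiVersion: fpga.intel.com/v2
kind: AcceleratorFunction
metadata:
  name: arria10.dcp1.2-nlb0
spec:
  afuId: d8424dc4a4a3c413f89e433683f9040b
  interfaceId: 69528db6eb31577a8c3668f9faa081f6
  mode: af
EOF
```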
-# Next steps
+## Next steps

Continue with [FPGA prestart CRI-O hook](../fpga_crihook/README.md).


@@ -1,6 +1,6 @@
# Intel FPGA prestart CRI-O webhook for Kubernetes

-# Table of Contents
+Table of Contents

* [Introduction](#introduction)
* [Dependencies](#dependencies)

@@ -9,27 +9,27 @@
* [Building the image](#building-the-image)
* [Configuring CRI-O](#configuring-cri-o)

-# Introduction
+## Introduction

The FPGA CRI-O webhook is one of the components used to add support for Intel FPGA
devices to Kubernetes.

The FPGA prestart CRI-O hook is triggered by container annotations, such as set by the
-[FPGA device plugin](../fpga_plugin). It performs discovery of the requested FPGA
+[FPGA device plugin](../fpga_plugin/README.md). It performs discovery of the requested FPGA
function bitstream and then programs FPGA devices based on the environment variables
in the workload description.

The CRI-O prestart hook is only *required* when the
-[FPGA admission webhook](../fpga_admissionwebhook) is configured for orchestration
+[FPGA admission webhook](../fpga_admissionwebhook/README.md) is configured for orchestration
programmed mode, and is benign (unused) otherwise.

> **Note:** The fpga CRI-O webhook is usually installed by the same DaemonSet as the
> FPGA device plugin. If building and installing the CRI-O webhook by hand, it is
> recommended you reference the
-> [fpga plugin DaemonSet YAML](../../deployments/fpga_plugin/base/intel-fpga-plugin-daemonset.yaml) for
+> [fpga plugin DaemonSet YAML](/deployments/fpga_plugin/base/intel-fpga-plugin-daemonset.yaml) for
> more details.
-# Dependencies
+## Dependencies

This component is one of a set of components that work together. You may also want to
install the following:

@@ -40,19 +40,19 @@ install the following:
All components have the same basic dependencies as the
[generic plugin framework dependencies](../../README.md#about)

-# Building
+## Building

The following sections detail how to obtain, build and deploy the CRI-O
prestart hook.

-## Getting the source code
+### Getting the source code

```bash
$ export INTEL_DEVICE_PLUGINS_SRC=/path/to/intel-device-plugins-for-kubernetes
$ git clone https://github.com/intel/intel-device-plugins-for-kubernetes ${INTEL_DEVICE_PLUGINS_SRC}
```

-## Building the image
+### Building the image

```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}

@@ -61,11 +61,11 @@ $ make intel-fpga-initcontainer
Successfully tagged intel/intel-fpga-initcontainer:devel
```
-# Configuring CRI-O
+## Configuring CRI-O

Recent versions of [CRI-O](https://github.com/cri-o/cri-o) are shipped with a default configuration
file that prevents CRI-O from discovering and configuring hooks automatically.

For FPGA orchestration programmed mode, the OCI hooks are the key component.
Please ensure that your `/etc/crio/crio.conf` parameter `hooks_dir` is either unset
(to enable default search paths for OCI hooks configuration) or contains the directory
`/etc/containers/oci/hooks.d`.
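A quick sanity check on a node (sketch):

```bash
# Show any hooks_dir setting in the CRI-O configuration; no output
# means the default OCI hook search paths are in effect
$ grep -E '^\s*hooks_dir' /etc/crio/crio.conf
```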


@@ -1,6 +1,6 @@
# Intel FPGA device plugin for Kubernetes

-# Table of Contents
+Table of Contents

* [Introduction](#introduction)
* [Component overview](#component-overview)

@@ -18,7 +18,7 @@
* [Run FPGA device plugin in af mode](#run-fpga-device-plugin-in-af-mode)
* [Run FPGA device plugin in region mode](#run-fpga-device-plugin-in-region-mode)

-# Introduction
+## Introduction

This FPGA device plugin is part of a collection of Kubernetes components found within this
repository that enable integration of Intel FPGA hardware into Kubernetes.

@@ -38,7 +38,7 @@ The components together implement the following features:
- orchestration of FPGA programming
- access control for FPGA hardware

-# Component overview
+## Component overview

The following components are part of this repository, and work together to support Intel FPGAs under
Kubernetes:

@@ -70,7 +70,7 @@ Kubernetes:
The repository also contains an [FPGA helper tool](../fpga_tool/README.md) that may be useful during
development, initial deployment and debugging.

-# FPGA modes
+## FPGA modes

The FPGA plugin set can run in one of two modes:

@@ -95,14 +95,14 @@ af mode:
![Overview of `af` mode](pictures/FPGA-af.png)
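Whichever mode is used, workloads consume the resource names the plugin advertises. As a sketch, a pod requesting a single region-mode resource could look like this (the region ID mirrors the example output later in this README; the image name is a placeholder):

```bash
# Hypothetical pod consuming one FPGA region resource; the image name
# is a placeholder and the region ID is from the example output below
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: fpga-region-demo
spec:
  containers:
  - name: demo
    image: my-fpga-workload:devel
    resources:
      limits:
        fpga.intel.com/region-ce48969398f05f33946d560708be108a: 1
EOF
```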
-# Installation
+## Installation

The below sections cover how to obtain, build and install this component.

Components can generally be installed either using DaemonSets or running them
'by hand' on each node.

-## Pre-built images
+### Pre-built images

Pre-built images of the components are available on the [Docker hub](https://hub.docker.com/u/intel).
These images are automatically built and uploaded to the hub from the latest `master` branch of

@@ -123,7 +123,7 @@ The following images are available on the Docker hub:
- [The FPGA admission webhook](https://hub.docker.com/r/intel/intel-fpga-admissionwebhook)
- [The FPGA CRI-O prestart hook (in the `initcontainer` image)](https://hub.docker.com/r/intel/intel-fpga-initcontainer)

-## Dependencies
+### Dependencies

All components have the same basic dependencies as the
[generic plugin framework dependencies](../../README.md#about)

@@ -136,7 +136,7 @@ major components:
- [FPGA prestart CRI-O hook](../fpga_crihook/README.md)

The CRI-O hook is only *required* if `region` mode is being used, but is installed by default by the
-[FPGA plugin DaemonSet YAML](../../deployments/fpga_plugin/fpga_plugin.yaml), and is benign
+[FPGA plugin DaemonSet YAML](/deployments/fpga_plugin/base/intel-fpga-plugin-daemonset.yaml), and is benign
in `af` mode.
If using the `af` mode, and therefore *not* using the

@@ -153,7 +153,7 @@ which is present and thus to use:
Install this component (FPGA device plugin) first, and then follow the links
and instructions to install the other components.

-## Getting the source code
+### Getting the source code

To obtain the YAML files used for deployment, or to obtain the source tree if you intend to
do a hand-deployment or build your own image, you will require access to the source code:

@@ -163,7 +163,7 @@ $ export INTEL_DEVICE_PLUGINS_SRC=/path/to/intel-device-plugins-for-kubernetes
$ git clone https://github.com/intel/intel-device-plugins-for-kubernetes ${INTEL_DEVICE_PLUGINS_SRC}
```

-## Verify node kubelet config
+### Verify node kubelet config

Every node that will be running the FPGA plugin must have the
[kubelet device-plugins](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)

@@ -174,7 +174,7 @@ $ ls /var/lib/kubelet/device-plugins/kubelet.sock
/var/lib/kubelet/device-plugins/kubelet.sock
```

-## Deploying as a DaemonSet
+### Deploying as a DaemonSet

As a pre-requisite you need to have [cert-manager](https://cert-manager.io)
up and running:

@@ -249,11 +249,11 @@ $ kubectl annotate node <node_name> 'fpga.intel.com/device-plugin-mode=af'
```

And restart the pods on the nodes.

-> **Note:** The FPGA plugin [DaemonSet YAML](../../deployments/fpga_plugin/base/intel-fpga-plugin-daemonset.yaml)
+> **Note:** The FPGA plugin [DaemonSet YAML](/deployments/fpga_plugin/base/intel-fpga-plugin-daemonset.yaml)
-> also deploys the [FPGA CRI-O hook](../fpga_criohook) `initcontainer` image, but it will be
+> also deploys the [FPGA CRI-O hook](../fpga_crihook/README.md) `initcontainer` image, but it will be
> benign (unused) when running the FPGA plugin in `af` mode.
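One way to trigger that restart (the namespace and label selector are assumptions; check the DaemonSet YAML for the values actually applied):

```bash
# Delete the plugin pods so the DaemonSet recreates them with the new
# mode annotation in effect; namespace and label are assumptions
$ kubectl delete pods -n kube-system -l app=intel-fpga-plugin
```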
-### Verify plugin registration
+#### Verify plugin registration

Verify the FPGA plugin has been deployed on the nodes. The below shows the output
you can expect in `region` mode, but similar output should be expected for `af`

@@ -265,20 +265,20 @@ fpga.intel.com/region-ce48969398f05f33946d560708be108a: 1
fpga.intel.com/region-ce48969398f05f33946d560708be108a: 1
```
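Output like the above can be obtained with a node query along these lines (a sketch mirroring the QAT plugin's verification step elsewhere in this repository's documentation):

```bash
# List the FPGA resources a node advertises
$ kubectl describe node <node_name> | grep fpga.intel.com
```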
-### Building the plugin image
+#### Building the plugin image

If you need to build your own image from sources, and are not using the images
available on the Docker Hub, follow the below details.

-> **Note:** The FPGA plugin [DaemonSet YAML](../../deployments/fpga_plugin/fpga_plugin.yaml)
+> **Note:** The FPGA plugin [DaemonSet YAML](/deployments/fpga_plugin/fpga_plugin.yaml)
-> also deploys the [FPGA CRI-O hook](../fpga_criohook) `initcontainer` image as well. You may
+> also deploys the [FPGA CRI-O hook](../fpga_crihook/README.md) `initcontainer` image. You may
> also wish to build that image locally before deploying the FPGA plugin to avoid deploying
> the Docker hub default image.

The following will use `docker` to build a local container image called
`intel/intel-fpga-plugin` with the tag `devel`.

The image build tool can be changed from the default docker by setting the `BUILDER` argument
-to the [Makefile](../../Makefile).
+to the [Makefile](/Makefile).

```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}

@@ -290,10 +290,10 @@ Successfully tagged intel/intel-fpga-plugin:devel
This image launches `fpga_plugin` in `af` mode by default.

To use your own container image, create your own kustomization overlay patching
-[`deployments/fpga_plugin/base/intel-fpga-plugin-daemonset.yaml`](../../deployments/fpga_plugin/base/intel-fpga-plugin-daemonset.yaml)
+[`deployments/fpga_plugin/base/intel-fpga-plugin-daemonset.yaml`](/deployments/fpga_plugin/base/intel-fpga-plugin-daemonset.yaml)
file.
-## Deploy by hand
+### Deploy by hand

For development purposes, it is sometimes convenient to deploy the plugin 'by hand'
on a node. In this case, you do not need to build the complete container image,

@@ -303,7 +303,7 @@ and can build just the plugin.
> to be configured or installed. It is recommended you reference the actions of the
> DaemonSet YAML deployment for more details.

-### Build FPGA device plugin
+#### Build FPGA device plugin

When deploying by hand, you only need to build the plugin itself, and not the whole
container image:

@@ -313,7 +313,7 @@ $ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make fpga_plugin
```

-### Run FPGA device plugin in af mode
+#### Run FPGA device plugin in af mode

```bash
$ export KUBE_CONF=/var/run/kubernetes/admin.kubeconfig # path to kubeconfig with admin's credentials

@@ -328,7 +328,7 @@ device-plugin registered
the nodes' DAC rules must be configured to allow device plugin socket creation and kubelet registration.
Furthermore, the deployment's `securityContext` must be configured with appropriate `runAsUser/runAsGroup`.
-### Run FPGA device plugin in region mode
+#### Run FPGA device plugin in region mode

```bash
$ export KUBE_CONF=/var/run/kubernetes/admin.kubeconfig # path to kubeconfig with admin's credentials

@@ -337,4 +337,4 @@ $ sudo -E ${INTEL_DEVICE_PLUGINS_SRC}/cmd/fpga_plugin/fpga_plugin -mode region -
FPGA device plugin started in region mode
device-plugin start server at: /var/lib/kubelet/device-plugins/fpga.intel.com-region-ce48969398f05f33946d560708be108a.sock
device-plugin registered
```


@@ -1,11 +1,11 @@
# Intel FPGA test tool

-# Introduction
+## Introduction

This directory contains an FPGA test tool that can be used to locate, examine and program Intel
FPGAs.

-## Command line and usage
+### Command line and usage

The tool has the following command line arguments:

@@ -18,12 +18,12 @@ and the following command line options:
```bash
Usage of ./fpga_tool:
  -b string
        Path to bitstream file (GBS or AOCX)
  -d string
        Path to device node (FME or Port)
  -dry-run
        Don't write/program, just validate and log
  -force
        Force overwrite operation for installing bitstreams
  -q    Quiet mode. Only errors will be reported
```


@@ -1,6 +1,6 @@
# Intel GPU device plugin for Kubernetes

-# Table of Contents
+Table of Contents

* [Introduction](#introduction)
* [Installation](#installation)

@@ -16,7 +16,7 @@
* [Verify plugin registration](#verify-plugin-registration)
* [Testing the plugin](#testing-the-plugin)

-# Introduction
+## Introduction

The GPU device plugin for Kubernetes supports acceleration using the following Intel GPU hardware families:

@@ -35,13 +35,13 @@ For example, the Intel Media SDK can offload video transcoding operations, and t
The device plugin can also be used with [GVT-d](https://github.com/intel/gvt-linux/wiki/GVTd_Setup_Guide) device
passthrough and acceleration.

-# Installation
+## Installation

The following sections detail how to obtain, build, deploy and test the GPU device plugin.

Examples are provided showing how to deploy the plugin either using a DaemonSet or by hand on a per-node basis.
-## Deploy with pre-built container image
+### Deploy with pre-built container image

[Pre-built images](https://hub.docker.com/r/intel/intel-gpu-plugin)
of this component are available on the Docker hub. These images are automatically built and uploaded

@@ -71,14 +71,14 @@ daemonset.apps/intel-gpu-plugin created
Nothing else is needed. But if you want to deploy a customized version of the plugin, read further.

-## Getting the source code
+### Getting the source code

```bash
$ export INTEL_DEVICE_PLUGINS_SRC=/path/to/intel-device-plugins-for-kubernetes
$ git clone https://github.com/intel/intel-device-plugins-for-kubernetes ${INTEL_DEVICE_PLUGINS_SRC}
```

-## Verify node kubelet config
+### Verify node kubelet config

Every node that will be running the gpu plugin must have the
[kubelet device-plugins](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)

@@ -89,12 +89,12 @@ $ ls /var/lib/kubelet/device-plugins/kubelet.sock
/var/lib/kubelet/device-plugins/kubelet.sock
```

-## Deploying as a DaemonSet
+### Deploying as a DaemonSet

To deploy the gpu plugin as a daemonset, you first need to build a container image for the
plugin and ensure that it is visible to your nodes.
-### Build the plugin image
+#### Build the plugin image

The following will use `docker` to build a local container image called
`intel/intel-gpu-plugin` with the tag `devel`.

@@ -109,9 +109,9 @@ $ make intel-gpu-plugin
Successfully tagged intel/intel-gpu-plugin:devel
```

-### Deploy plugin DaemonSet
+#### Deploy plugin DaemonSet

-You can then use the [example DaemonSet YAML](../../deployments/gpu_plugin/base/intel-gpu-plugin.yaml)
+You can then use the [example DaemonSet YAML](/deployments/gpu_plugin/base/intel-gpu-plugin.yaml)
file provided to deploy the plugin. The default kustomization deploys the YAML as is:

```bash

@@ -122,7 +122,7 @@ daemonset.apps/intel-gpu-plugin created
Alternatively, if your cluster runs
[Node Feature Discovery](https://github.com/kubernetes-sigs/node-feature-discovery),
you can deploy the device plugin only on nodes with Intel GPU.

-The [nfd_labeled_nodes](../../deployments/gpu_plugin/overlays/nfd_labeled_nodes/)
+The [nfd_labeled_nodes](/deployments/gpu_plugin/overlays/nfd_labeled_nodes/)
kustomization adds the nodeSelector to the DaemonSet:

```bash

@@ -134,12 +134,12 @@ daemonset.apps/intel-gpu-plugin created
the nodes' DAC rules must be configured to allow device plugin socket creation and kubelet registration.
Furthermore, the deployment's `securityContext` must be configured with appropriate `runAsUser/runAsGroup`.
-## Deploy by hand
+### Deploy by hand

For development purposes, it is sometimes convenient to deploy the plugin 'by hand' on a node.
In this case, you do not need to build the complete container image, and can build just the plugin.

-### Build the plugin
+#### Build the plugin

First we build the plugin:

@@ -148,7 +148,7 @@ $ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make gpu_plugin
```

-### Run the plugin as administrator
+#### Run the plugin as administrator

Now we can run the plugin directly on the node:

@@ -158,7 +158,7 @@ device-plugin start server at: /var/lib/kubelet/device-plugins/gpu.intel.com-i91
device-plugin registered
```

-## Verify plugin registration
+### Verify plugin registration

You can verify the plugin has been registered with the expected nodes by searching for the relevant
resource allocation status on the nodes:

@@ -169,7 +169,7 @@ master
i915: 1
```

-## Testing the plugin
+### Testing the plugin

We can test the plugin is working by deploying the provided example OpenCL image with FFT offload enabled.

@@ -225,4 +225,4 @@ We can test the plugin is working by deploying the provided example OpenCL image
Type     Reason            Age        From               Message
----     ------            ----       ----               -------
Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 Insufficient gpu.intel.com/i915.
```


@@ -1,18 +1,18 @@
# Intel One Operator for Device Plugins

-# Table of Contents
+Table of Contents

* [Introduction](#introduction)
* [Installation](#installation)

-# Introduction
+## Introduction

This One Operator is a Kubernetes custom controller whose goal is to serve the
installation and lifecycle management of Intel device plugins for Kubernetes.
It provides a single point of control for GPU, QAT and FPGA devices to cluster
administrators.

-# Installation
+## Installation

The operator depends on [cert-manager](https://cert-manager.io/) running in the cluster.
To install it run:

@@ -71,4 +71,4 @@ $ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/
```

Now you can deploy the device plugins by creating corresponding custom resources.
-The samples for them are available [here](../../deployments/operator/samples/).
+The samples for them are available [here](/deployments/operator/samples/).
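For example (the sample file name below is an assumption about that directory's contents):

```bash
# Apply one of the sample custom resources; the file name is an
# assumption, check the samples directory for the actual ones
$ kubectl apply -f deployments/operator/samples/deviceplugin_v1_gpudeviceplugin.yaml
```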


@@ -1,6 +1,6 @@
# Intel QuickAssist Technology (QAT) device plugin for Kubernetes

-# Table of Contents
+Table of Contents

* [Introduction](#introduction)
* [Modes and Configuration options](#modes-and-configuration-options)

@@ -26,7 +26,7 @@
* [OpenSSL QAT demo](#openssl-qat-demo)
* [Checking for hardware](#checking-for-hardware)

-# Introduction
+## Introduction

This Intel QAT device plugin provides support for Intel QAT devices under Kubernetes.

The supported devices are determined by the VF device drivers available in your Linux

@@ -44,7 +44,7 @@ Demonstrations are provided utilising [DPDK](https://doc.dpdk.org/) and [OpenSSL
[Kata Containers](https://katacontainers.io/) QAT integration is documented in the
[Kata Containers documentation repository][6].

-## Modes and Configuration options
+### Modes and Configuration options

The QAT plugin can take a number of command line arguments, summarised in the following table:

@@ -58,9 +58,9 @@ The QAT plugin can take a number of command line arguments, summarised in the fo
The plugin also accepts a number of other arguments related to logging. Please use the `-h` option to see
the complete list of logging related options.
-The example [DaemonSet YAML](../../deployments/qat_plugin/base/intel-qat-plugin.yaml) passes a number of these
+The example [DaemonSet YAML](/deployments/qat_plugin/base/intel-qat-plugin.yaml) passes a number of these
arguments, and takes its default values from the
-[QAT default ConfigMap](../../deployments/qat_plugin/base/intel-qat-plugin-config.yaml). The following
+[QAT default ConfigMap](/deployments/qat_plugin/base/intel-qat-plugin-config.yaml). The following
table summarises the defaults:

| Argument | Variable | Default setting | Explanation |

@@ -86,13 +86,13 @@ The `kerneldrv` mode does not guarantee full device isolation between containers
and therefore it's not recommended. This mode will be deprecated and removed once `libqat`
implements non-UIO based device access.

-# Installation
+## Installation

The below sections cover how to obtain, build and install this component.

The component can be installed either using a DaemonSet or running 'by hand' on each node.

-## Prerequisites
+### Prerequisites

The component has the same basic dependencies as the
[generic plugin framework dependencies](../../README.md#about).

@@ -107,7 +107,7 @@ are available via two methods. One of them must be installed and enabled:
The demonstrations have their own requirements, listed in their own specific sections.

-## Pre-built image
+### Pre-built image

[Pre-built images](https://hub.docker.com/r/intel/intel-qat-plugin)
of this component are available on the Docker hub. These images are automatically built and uploaded

@@ -134,17 +134,17 @@ tag by default. If you do not build your own local images, your Kubernetes clust
the devel images from the Docker hub by default.

To use the release tagged versions of the images, edit the
-[YAML deployment files](../../deployments/qat_plugin/base/)
+[YAML deployment files](/deployments/qat_plugin/base/)
appropriately.
-## Getting the source code
+### Getting the source code

```bash
$ export INTEL_DEVICE_PLUGINS_SRC=/path/to/intel-device-plugins-for-kubernetes
$ git clone https://github.com/intel/intel-device-plugins-for-kubernetes ${INTEL_DEVICE_PLUGINS_SRC}
```

-## Verify node kubelet config
+### Verify node kubelet config

Every node that will be running the plugin must have the
[kubelet device-plugins](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)

@@ -155,17 +155,17 @@ $ ls /var/lib/kubelet/device-plugins/kubelet.sock
/var/lib/kubelet/device-plugins/kubelet.sock
```

-## Deploying as a DaemonSet
+### Deploying as a DaemonSet

To deploy the plugin as a DaemonSet, you first need to build a container image for the plugin and
ensure that it is visible to your nodes. If you do not build your own plugin, your cluster may pull
the image from the pre-built Docker Hub images, depending on your configuration.

-### Build the plugin image
+#### Build the plugin image

The following will use `docker` to build a local container image called `intel/intel-qat-plugin`
with the tag `devel`. The image build tool can be changed from the default docker by setting the
-`BUILDER` argument to the [Makefile](../../Makefile).
+`BUILDER` argument to the [Makefile](/Makefile).

```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}

@@ -177,11 +177,11 @@ Successfully tagged intel/intel-qat-plugin:devel
> **Note**: `kerneldrv` mode is excluded from the build by default. Add `EXTRA_BUILD_ARGS="--build-arg TAGS_KERNELDRV=kerneldrv"` to `make`
> to get `kerneldrv` functionality added to the build.

-### Deploy the DaemonSet
+#### Deploy the DaemonSet

Deploying the plugin involves first the deployment of a
-[ConfigMap](../../deployments/qat_plugin/base/intel-qat-plugin-config.yaml) and the
+[ConfigMap](/deployments/qat_plugin/base/intel-qat-plugin-config.yaml) and the
-[DaemonSet YAML](../../deployments/qat_plugin/base/intel-qat-plugin.yaml).
+[DaemonSet YAML](/deployments/qat_plugin/base/intel-qat-plugin.yaml).

There is a kustomization for deploying both:

```bash

@@ -205,7 +205,7 @@ $ kubectl create -f ${INTEL_DEVICE_PLUGINS_SRC}/deployments/qat_plugin/base/inte
> socket creation and kubelet registration. Furthermore, the deployment's `securityContext` must
> be configured with appropriate `runAsUser/runAsGroup`.

-### Verify QAT device plugin is registered on master:
+#### Verify QAT device plugin is registered on master:

Plugin deployment and detection of QAT hardware can be confirmed by
examining the resource allocations on the nodes:

@@ -216,19 +216,19 @@ $ kubectl describe node <node name> | grep qat.intel.com/generic
qat.intel.com/generic: 10
```
-## Deploying by hand
+### Deploying by hand

For development purposes, it is sometimes convenient to deploy the plugin 'by hand' on a node.
In this case, you do not need to build the complete container image, and can build just the plugin.

-### Build QAT device plugin
+#### Build QAT device plugin

```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make qat_plugin
```

-### Deploy QAT plugin
+#### Deploy QAT plugin

Deploy the plugin on a node by running it as `root`. The below is just an example - modify the
parameters as necessary for your setup:

@@ -254,12 +254,12 @@ device-plugin registered
ListAndWatch: Sending device response
```

-## QAT device plugin Demos
+### QAT device plugin Demos

The below sections cover `DPDK` and `OpenSSL` demos, both of which utilise the
QAT device plugin under Kubernetes.

-### DPDK QAT demos
+#### DPDK QAT demos

The Data Plane Development Kit (DPDK) QAT demos use DPDK
[crypto-perf](https://doc.dpdk.org/guides/tools/cryptoperf.html) and

@@ -267,14 +267,14 @@ The Data Plane Development Kit (DPDK) QAT demos use DPDK
DPDK QAT Poll-Mode Drivers (PMD). For more information on the tools' parameters, refer to the
website links.

-#### DPDK Prerequisites
+##### DPDK Prerequisites

For the DPDK QAT demos to work, the DPDK drivers must be loaded and configured.
For more information, refer to:
[DPDK Getting Started Guide for Linux](https://doc.dpdk.org/guides/linux_gsg/index.html) and
[DPDK Getting Started Guide, Linux Drivers section](http://dpdk.org/doc/guides/linux_gsg/linux_drivers.html)
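As a rough sketch, preparing a QAT virtual function for the DPDK demos usually amounts to binding it to a DPDK-compatible driver (the PCI address below is a placeholder; see the guides above for the authoritative steps):

```bash
# Load vfio-pci and bind one QAT VF to it so DPDK can drive it; the
# PCI address is a placeholder for one of your QAT VFs
$ sudo modprobe vfio-pci
$ sudo dpdk-devbind.py --bind=vfio-pci 0000:3d:01.0
$ dpdk-devbind.py --status
```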
-#### Build the image
+##### Build the image

The demo uses a container image. You can either use the
[pre-built image from the Docker Hub](https://hub.docker.com/r/intel/crypto-perf), or build your own local copy.

@@ -288,7 +288,7 @@ $ ./build-image.sh crypto-perf
Successfully tagged crypto-perf:devel
```

-#### Deploy the pod
+##### Deploy the pod

In the pod specification file, add container resource request and limit.
For example, `qat.intel.com/generic: <number of devices>` for a container requesting QAT devices.

@@ -307,7 +307,7 @@ $ kubectl get pods
> **Note**: The deployment example above uses [kustomize](https://github.com/kubernetes-sigs/kustomize)
> that is available in kubectl since Kubernetes v1.14 release.

-#### Manual test run
+##### Manual test run

Manually execute the `dpdk-test-crypto-perf` application to review the logs:

@@ -324,7 +324,7 @@ $ dpdk-test-crypto-perf -l 6-7 -w $QAT1 \
> **Note**: Adapt the `.so` versions to what the DPDK version in the container provides.

-#### Automated test run
+##### Automated test run

It is also possible to deploy and run `crypto-perf` using the following
`kustomize` overlays:

@@ -339,12 +339,12 @@ $ kubectl logs qat-dpdk-test-compress-perf-tc1
> **Note**: for `test-crypto1` and `test-compress1` to work, the cluster must enable
[Kubernetes CPU manager's](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/) `static` policy.

-### OpenSSL QAT demo
+#### OpenSSL QAT demo

Please refer to the [Kata Containers documentation][8] for details on the OpenSSL
QAT acceleration demo.

-# Checking for hardware
+## Checking for hardware

In order to utilise the QAT device plugin, QuickAssist SR-IOV virtual functions must be configured.
You can verify this on your nodes by checking for the relevant PCI identifiers:

@@ -359,4 +359,4 @@ for i in 0442 0443 37c9 19e3; do lspci -d 8086:$i; done
[6]:https://github.com/kata-containers/documentation/blob/master/use-cases/using-Intel-QAT-and-kata.md
[7]:https://01.org/sites/default/files/downloads//336210-009qatswprogrammersguide.pdf
[8]:https://github.com/kata-containers/documentation/blob/master/use-cases/using-Intel-QAT-and-kata.md#build-openssl-intel-qat-engine-container
[9]:https://01.org/sites/default/files/downloads/intelr-quickassist-technology/336212qatswgettingstartedguiderev003.pdf


@@ -1,6 +1,6 @@
# Intel Software Guard Extensions (SGX) device plugin for Kubernetes

-# Table of Contents
+Contents

* [Introduction](#introduction)
* [Installation](#installation)

@@ -15,7 +15,7 @@
* [Build SGX device plugin](#build-sgx-device-plugin)
* [Deploy SGX plugin](#deploy-sgx-plugin)

-# Introduction
+## Introduction

**Note:** The work is still WIP. The SGX device plugin can be tested to run simple enclaves
but the full e2e deployment (including the SGX remote attestation) is not yet finished. See

@@ -23,7 +23,7 @@ the open issues for details.
This Intel SGX device plugin provides support for Intel SGX TEE under Kubernetes.

-## Modes and Configuration options
+### Modes and Configuration options

The SGX plugin can take a number of command line arguments, summarised in the following table:

@@ -35,13 +35,13 @@ The SGX plugin can take a number of command line arguments, summarised in the fo
The plugin also accepts a number of other arguments related to logging. Please use the `-h` option to see
the complete list of logging related options.

-# Installation
+## Installation

The below sections cover how to obtain, build and install this component.

The component can be installed either using a DaemonSet or running 'by hand' on each node.

-## Prerequisites
+### Prerequisites

The component has the same basic dependencies as the
[generic plugin framework dependencies](../../README.md#about).
@ -49,14 +49,14 @@ The component has the same basic dependancies as the
The SGX plugin requires Linux Kernel SGX drivers to be available. These drivers
are currently available via RFC patches on the Linux Kernel Mailing List.
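
As a quick sanity check, the driver should expose SGX device nodes on the host. A minimal sketch, assuming the device paths used by recent driver revisions (`/dev/sgx/enclave` and `/dev/sgx/provision`) — adjust for the patch revision you are running:

```bash
# Check that the kernel driver created the SGX device nodes.
# The /dev/sgx/ path is an assumption; older patch revisions differ.
$ ls /dev/sgx/
```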
### Getting the source code
```bash
$ export INTEL_DEVICE_PLUGINS_SRC=/path/to/intel-device-plugins-for-kubernetes
$ git clone https://github.com/intel/intel-device-plugins-for-kubernetes ${INTEL_DEVICE_PLUGINS_SRC}
```
### Verify node kubelet config
Every node that will be running the plugin must have the
[kubelet device-plugins](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
@ -67,16 +67,16 @@ $ ls /var/lib/kubelet/device-plugins/kubelet.sock
/var/lib/kubelet/device-plugins/kubelet.sock
```
### Deploying as a DaemonSet
To deploy the plugin as a DaemonSet, you first need to build a container image for the plugin and
ensure that it is visible to your nodes.
#### Build the plugin and EPC source images
The following will use `docker` to build local container images called `intel/intel-sgx-plugin`
and `intel/intel-sgx-initcontainer` with the tag `devel`. The image build tool can be changed from the
default `docker` by setting the `BUILDER` argument to the [Makefile](/Makefile).
```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
@ -88,11 +88,11 @@ $ make intel-sgx-initcontainer
Successfully tagged intel/intel-sgx-initcontainer:devel
```
#### Deploy the DaemonSet
Deploying the plugin involves the deployment of a
[NFD EPC Source InitContainer Job](/deployments/sgx_plugin/base/intel-sgx-hookinstall.yaml), the
[DaemonSet YAML](/deployments/sgx_plugin/base/intel-sgx-plugin.yaml), and node-feature-discovery
with the necessary configuration.
There is a kustomization for deploying everything:
@ -100,7 +100,7 @@ There is a kustomization for deploying everything:
$ kubectl apply -k ${INTEL_DEVICE_PLUGINS_SRC}/deployments/sgx_plugin/overlays/nfd
```
#### Verify SGX device plugin is registered on master
Verification of the plugin deployment and detection of SGX hardware can be confirmed by
examining the resource allocations on the nodes:
@ -119,19 +119,19 @@ $ kubectl describe node <node name> | grep sgx.intel.com
sgx.intel.com/provision 1 1
```
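
Once the resources are advertised, a workload consumes them through its resource limits. Below is a minimal sketch of such a pod; the pod name and image are hypothetical placeholders, not artifacts from this repository:

```bash
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: sgx-workload              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: sgx-workload
    image: sgx-workload:devel     # placeholder enclave-enabled image
    resources:
      limits:
        sgx.intel.com/enclave: 1  # request one enclave resource
EOF
```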
### Deploying by hand
For development purposes, it is sometimes convenient to deploy the plugin 'by hand' on a node.
In this case, you do not need to build the complete container image, and can build just the plugin.
#### Build SGX device plugin
```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make sgx_plugin
```
#### Deploy SGX plugin
Deploy the plugin on a node by running it as `root`. The below is just an example - modify the
parameters as necessary for your setup:
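
As a minimal sketch, an invocation might look like the following. The binary path assumes the repository's usual `cmd/<plugin>` build output, and `-v` is the standard `klog` verbosity flag; both are assumptions to verify against your tree:

```bash
# Run the freshly built plugin binary as root on the node.
# Path and flags are assumptions -- check ./sgx_plugin -h first.
$ sudo ${INTEL_DEVICE_PLUGINS_SRC}/cmd/sgx_plugin/sgx_plugin -v 2
```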
@ -142,4 +142,4 @@ I0626 20:33:01.414446 964346 server.go:219] Start server for provision at: /var
I0626 20:33:01.414640 964346 server.go:219] Start server for enclave at: /var/lib/kubelet/device-plugins/sgx.intel.com-enclave.sock
I0626 20:33:01.417315 964346 server.go:237] Device plugin for provision registered
I0626 20:33:01.417748 964346 server.go:237] Device plugin for enclave registered
```

View File

@ -1,6 +1,6 @@
# Intel VPU device plugin for Kubernetes
Table of Contents
* [Introduction](#introduction)
* [Installation](#installation)
@ -18,7 +18,7 @@
* [Create a job running unit tests off the local Docker image](#create-a-job-running-unit-tests-off-the-local-docker-image)
* [Review the job logs](#review-the-job-logs)
## Introduction
The VPU device plugin supports the following cards:
@ -38,13 +38,13 @@ This card has:
> To get the VCAC-A or Mustang card running HDDL, please refer to:
> https://github.com/OpenVisualCloud/Dockerfiles/blob/master/VCAC-A/script/setup_hddl.sh
## Installation
The following sections detail how to obtain, build, deploy and test the VPU device plugin.
Examples are provided showing how to deploy the plugin either using a DaemonSet or by hand on a per-node basis.
### Getting the source code
> **Note:** It is presumed you have a valid and configured [golang](https://golang.org/) environment
> that meets the minimum required version.
@ -54,7 +54,7 @@ $ mkdir -p $(go env GOPATH)/src/github.com/intel
$ git clone https://github.com/intel/intel-device-plugins-for-kubernetes $(go env GOPATH)/src/github.com/intel/intel-device-plugins-for-kubernetes
```
### Verify node kubelet config
Every node that will be running the VPU plugin must have the
[kubelet device-plugins](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
@ -65,18 +65,18 @@ $ ls /var/lib/kubelet/device-plugins/kubelet.sock
/var/lib/kubelet/device-plugins/kubelet.sock
```
### Deploying as a DaemonSet
To deploy the VPU plugin as a DaemonSet, you first need to build a container image for the
plugin and ensure that it is visible to your nodes.
#### Build the plugin image
The following will use `docker` to build a local container image called
`intel/intel-vpu-plugin` with the tag `devel`.
The image build tool can be changed from the default `docker` by setting the `BUILDER` argument
to the [`Makefile`](/Makefile).
```bash
$ cd $(go env GOPATH)/src/github.com/intel/intel-device-plugins-for-kubernetes
@ -85,9 +85,9 @@ $ make intel-vpu-plugin
Successfully tagged intel/intel-vpu-plugin:devel
```
#### Deploy plugin DaemonSet
You can then use the [example DaemonSet YAML](/deployments/vpu_plugin/base/intel-vpu-plugin.yaml)
file provided to deploy the plugin. The default kustomization deploys the YAML as is:
```bash
@ -99,12 +99,12 @@ daemonset.apps/intel-vpu-plugin created
the nodes' DAC rules must be configured to allow device plugin socket creation and kubelet registration.
Furthermore, the deployment's `securityContext` must be configured with appropriate `runAsUser`/`runAsGroup`.
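
As an illustration, the pod-level `securityContext` could be set with a merge patch along these lines. The UID/GID values are placeholders, so substitute the IDs your nodes' DAC rules actually permit:

```bash
# Sketch: override runAsUser/runAsGroup on the deployed DaemonSet.
# 1000/1000 are placeholder IDs; add -n <namespace> if needed.
$ kubectl patch daemonset intel-vpu-plugin --type merge -p '
spec:
  template:
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
'
```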
### Deploy by hand
For development purposes, it is sometimes convenient to deploy the plugin 'by hand' on a node.
In this case, you do not need to build the complete container image, and can build just the plugin.
#### Build the plugin
First we build the plugin:
@ -115,7 +115,7 @@ $ cd $(go env GOPATH)/src/github.com/intel/intel-device-plugins-for-kubernetes
$ make vpu_plugin
```
#### Run the plugin as administrator
Now we can run the plugin directly on the node:
@ -124,7 +124,7 @@ $ sudo $(go env GOPATH)/src/github.com/intel/intel-device-plugins-for-kubernetes
VPU device plugin started
```
### Verify plugin registration
You can verify the plugin has been registered with the expected nodes by searching for the relevant
resource allocation status on the nodes:
@ -135,11 +135,11 @@ vcaanode00
hddl: 12
```
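
A workload then consumes the resource by requesting it in its pod spec. The following is an illustrative sketch only — the name is a placeholder, and the image is the demo image built in the next section:

```bash
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hddl-workload             # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: hddl-workload
    image: ubuntu-demo-openvino:devel   # demo image from the next section
    resources:
      limits:
        vpu.intel.com/hddl: 1     # request one HDDL resource
EOF
```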
### Testing the plugin
We can test that the plugin is working by deploying the provided example OpenVINO image with the HDDL plugin enabled.
#### Build a Docker image with a classification example
```bash
$ cd demo
@ -148,7 +148,7 @@ $ ./build-image.sh ubuntu-demo-openvino
Successfully tagged ubuntu-demo-openvino:devel
```
#### Create a job running unit tests off the local Docker image
```bash
$ cd $(go env GOPATH)/src/github.com/intel/intel-device-plugins-for-kubernetes
@ -156,7 +156,7 @@ $ kubectl apply -f demo/intelvpu-job.yaml
job.batch/intelvpu-demo-job created
```
#### Review the job logs
```bash
$ kubectl get pods | fgrep intelvpu
@ -251,4 +251,4 @@ Events:
Type     Reason            Age        From               Message
----     ------            ----       ----               -------
Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 Insufficient vpu.intel.com/hddl.
```
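
When the pod does schedule and run, its logs can be pulled straight from the job, for example (a sketch):

```bash
# Fetch logs from the pod created by the demo job.
$ kubectl logs job/intelvpu-demo-job
```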

View File

@ -1,6 +1,6 @@
# Demo
Table of Contents
- [Demo overview](#demo-overview)
- [Intel® GPU Device Plugin demo video](#intel-gpu-device-plugin-demo-video)
@ -161,4 +161,4 @@ Intel® QAT Device Plugin deployment
### Screencast
Intel® QAT Device Plugin with DPDK:
[<img src="https://asciinema.org/a/PoWOz4q2lX4AF4K9A2AV1RtSA.svg" width=700>](https://asciinema.org/a/PoWOz4q2lX4AF4K9A2AV1RtSA)