mirror of https://github.com/intel/intel-device-plugins-for-kubernetes.git, synced 2025-06-03 03:59:37 +00:00
commit 1d149ffee6 (parent 429770c486)

    Documentation: Fixes broken links and standardizes headers.

    Signed-off-by: Kevin Putnam <kevin.putnam@intel.com>
 DEVEL.md | 15
@@ -1,5 +1,7 @@
-How to develop simple device plugins
-====================================
+# Development
+
+## How to develop simple device plugins
 
 To create a simple device plugin without the hassle of developing your own gRPC
 server, you can use a package included in this repository called
@@ -62,8 +64,7 @@ Optionally, your device plugin may also implement the
 before they are sent to `kubelet`. To see an example, refer to the FPGA
 plugin which implements this interface to annotate its responses.
 
-Logging
--------
+### Logging
 
 The framework uses [`klog`](https://github.com/kubernetes/klog) as its logging
 framework. It is encouraged for plugins to also use `klog` to maintain uniformity
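The optional post-allocate hook this hunk refers to (the FPGA plugin uses it to annotate responses before they reach `kubelet`) can be sketched in Go. Every type and method name below is a simplified stand-in, not the actual `pkg/deviceplugin` or kubelet gRPC API:

```go
package main

import "fmt"

// AllocateResponse is a simplified stand-in for the kubelet device-plugin
// API's allocate response; the real plugin works with the gRPC types from
// the Kubernetes device plugin framework.
type AllocateResponse struct {
	Annotations map[string]string
}

// postAllocator sketches the optional hook interface: a plugin implementing
// it gets a chance to mutate allocate responses before they are sent to
// kubelet. The exact interface name and signature are assumptions.
type postAllocator interface {
	PostAllocate(*AllocateResponse) error
}

// annotatingPlugin mimics what the FPGA plugin does: it annotates the
// response so a downstream component (e.g. a CRI prestart hook) can act
// on the annotation.
type annotatingPlugin struct{}

func (annotatingPlugin) PostAllocate(resp *AllocateResponse) error {
	if resp.Annotations == nil {
		resp.Annotations = make(map[string]string)
	}
	resp.Annotations["com.intel.fpga.mode"] = "fpga.intel.com/region"
	return nil
}

func main() {
	var hook postAllocator = annotatingPlugin{}
	resp := &AllocateResponse{}
	if err := hook.PostAllocate(resp); err != nil {
		fmt.Println("PostAllocate failed:", err)
		return
	}
	fmt.Println(resp.Annotations["com.intel.fpga.mode"])
}
```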
@@ -84,8 +85,7 @@ The default is to not log `Info()` calls. This can be changed using the plugin c
 line `-v` parameter. The additional annotations prepended to log lines by 'klog' can be disabled
 with the `-skip_headers` option.
 
-Error Conventions
------------------
+### Error Conventions
 
 The framework has a convention for producing and logging errors. Ideally plugins will also adhere
 to the convention.
@@ -122,8 +122,7 @@ Otherwise, they can be logged as simple values:
 klog.Warningf("Example of a warning due to an external error: %v", err)
 ```
 
-How to build against a newer version of Kubernetes
-==================================================
+## How to build against a newer version of Kubernetes
 
 First you need to update module dependencies. The easiest way is to use the
 script copied from https://github.com/kubernetes/kubernetes/issues/79384#issuecomment-521493597:
 README.md | 17
@@ -1,11 +1,14 @@
-# Intel® Device Plugins for Kubernetes
+# Overview
 
 [](https://github.com/intel/intel-device-plugins-for-kubernetes/actions?query=workflow%3ACI)
 [](https://goreportcard.com/report/github.com/intel/intel-device-plugins-for-kubernetes)
 [](https://godoc.org/github.com/intel/intel-device-plugins-for-kubernetes/pkg/deviceplugin)
 
+This repository contains a framework for developing plugins for the Kubernetes
+[device plugins framework](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/),
+along with a number of device plugin implementations utilising that framework.
+
-Table of Contents
+## Table of Contents
 
 * [About](#about)
 * [Prerequisites](#prerequisites)
 * [Plugins](#plugins)
   * [GPU device plugin](#gpu-device-plugin)
@@ -22,12 +25,6 @@
 * [Supported Kubernetes versions](#supported-kubernetes-versions)
 * [Related code](#related-code)
 
-## About
-
-This repository contains a framework for developing plugins for the Kubernetes
-[device plugins framework](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/),
-along with a number of device plugin implementations utilising that framework.
-
 ## Prerequisites
 
 Prerequisites for building and running these device plugins include:
@@ -212,7 +209,7 @@ $ KUBEBUILDER_ASSETS=${HOME}/work/kubebuilder-assets make envtest
 
 ## Supported Kubernetes versions
 
-Releases are made under the github [releases area](../../releases). Supported releases and
+Releases are made under the github [releases area](https://github.com/intel/intel-device-plugins-for-kubernetes/releases). Supported releases and
 matching Kubernetes versions are listed below:
 
 | Branch | Kubernetes branch/version |
@@ -1,6 +1,6 @@
 # Intel FPGA admission controller for Kubernetes
 
-# Table of Contents
+Table of Contents
 
 * [Introduction](#introduction)
 * [Dependencies](#dependencies)
@@ -10,7 +10,7 @@
 * [Mappings](#mappings)
 * [Next steps](#next-steps)
 
-# Introduction
+## Introduction
 
 The FPGA admission controller is one of the components used to add support for Intel FPGA
 devices to Kubernetes.
@@ -31,7 +31,7 @@ The admission controller also keeps the user from bypassing namespaced mapping r
 by denying admission of any pods that are trying to use internal knowledge of InterfaceID or
 Bitstream ID environment variables used by the prestart hook.
 
-# Dependencies
+## Dependencies
 
 This component is one of a set of components that work together. You may also want to
 install the following:
@@ -42,12 +42,12 @@ install the following:
 All components have the same basic dependencies as the
 [generic plugin framework dependencies](../../README.md#about)
 
-# Installation
+## Installation
 
 The following sections detail how to obtain, build and deploy the admission
 controller webhook plugin.
 
-## Pre-requisites
+### Pre-requisites
 
 The webhook depends on having [cert-manager](https://cert-manager.io/)
 installed:
@@ -89,7 +89,7 @@ spec:
 ...
 ```
 
-## Deployment
+### Deployment
 
 To deploy the webhook, run
 
@@ -108,7 +108,7 @@ issuer.cert-manager.io/intelfpgawebhook-selfsigned-issuer created
 ```
 Now you can deploy your mappings.
 
-# Mappings
+## Mappings
 
 Mappings are an essential part of the setup that gives a cluster administrator
 a flexible instrument to manage FPGA bitstreams and to control access to them. Being a set of
@@ -148,12 +148,12 @@ bitstream to a region before the container is started.
 
 Mappings of resource names are configured with objects of `AcceleratorFunction` and
 `FpgaRegion` custom resource definitions found respectively in
-[`./deployment/fpga_admissionwebhook/crd/bases/fpga.intel.com_af.yaml`](../../deployment/fpga_admissionwebhook/crd/bases/fpga.intel.com_af.yaml)
-and [`./deployment/fpga_admissionwebhook/crd/bases/fpga.intel.com_region.yaml`](../../deployment/fpga_admissionwebhook/crd/bases/fpga.intel.com_region.yaml).
+[`./deployment/fpga_admissionwebhook/crd/bases/fpga.intel.com_af.yaml`](/deployments/fpga_admissionwebhook/crd/bases/fpga.intel.com_acceleratorfunctions.yaml)
+and [`./deployment/fpga_admissionwebhook/crd/bases/fpga.intel.com_region.yaml`](/deployments/fpga_admissionwebhook/crd/bases/fpga.intel.com_fpgaregions.yaml).
 
 Mappings between 'names' and 'ID's are controlled by the admission controller
 mappings collection file found in
-[`./deployments/fpga_admissionwebhook/mappings-collection.yaml`](../../deployments/fpga_admissionwebhook/mappings-collection.yaml).
+[`./deployments/fpga_admissionwebhook/mappings-collection.yaml`](/deployments/fpga_admissionwebhook/mappings-collection.yaml).
 This mappings file can be deployed with
 
 ```bash
@@ -163,6 +163,6 @@ $ kubectl apply -f https://raw.githubusercontent.com/intel/intel-device-plugins-
 Note that the mappings are scoped to the namespaces they were created in
 and they are applicable to pods created in the corresponding namespaces.
 
-# Next steps
+## Next steps
 
 Continue with [FPGA prestart CRI-O hook](../fpga_crihook/README.md).
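For orientation, an `AcceleratorFunction` mapping object of the kind described in the Mappings section above has roughly the following shape. The field names and ID values here are illustrative assumptions, not copied from the repository's CRDs; consult the deployed CRD files for the authoritative schema:

```yaml
apiVersion: fpga.intel.com/v2
kind: AcceleratorFunction
metadata:
  name: arria10.dcp1.2-nlb0
spec:
  afuId: d8424dc4a4a3c413f89e433683f9040b      # hypothetical AFU ID
  interfaceId: 69528db6eb31577a8c3668f9faa081f6 # hypothetical interface ID
  mode: af
```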
@@ -1,6 +1,6 @@
 # Intel FPGA prestart CRI-O webhook for Kubernetes
 
-# Table of Contents
+Table of Contents
 
 * [Introduction](#introduction)
 * [Dependencies](#dependencies)
@@ -9,27 +9,27 @@
 * [Building the image](#building-the-image)
 * [Configuring CRI-O](#configuring-cri-o)
 
-# Introduction
+## Introduction
 
 The FPGA CRI-O webhook is one of the components used to add support for Intel FPGA
 devices to Kubernetes.
 
 The FPGA prestart CRI-O hook is triggered by container annotations, such as set by the
-[FPGA device plugin](../fpga_plugin). It performs discovery of the requested FPGA
+[FPGA device plugin](../fpga_plugin/README.md). It performs discovery of the requested FPGA
 function bitstream and then programs FPGA devices based on the environment variables
 in the workload description.
 
 The CRI-O prestart hook is only *required* when the
-[FPGA admission webhook](../fpga_admissionwebhook) is configured for orchestration
+[FPGA admission webhook](../fpga_admissionwebhook/README.md) is configured for orchestration
 programmed mode, and is benign (un-used) otherwise.
 
 > **Note:** The FPGA CRI-O webhook is usually installed by the same DaemonSet as the
 > FPGA device plugin. If building and installing the CRI-O webhook by hand, it is
 > recommended you reference the
-> [fpga plugin DaemonSet YAML](../../deployments/fpga_plugin/base/intel-fpga-plugin-daemonset.yaml) for
+> [fpga plugin DaemonSet YAML](/deployments/fpga_plugin/base/intel-fpga-plugin-daemonset.yaml) for
 > more details.
 
-# Dependencies
+## Dependencies
 
 This component is one of a set of components that work together. You may also want to
 install the following:
@@ -40,19 +40,19 @@ install the following:
 All components have the same basic dependencies as the
 [generic plugin framework dependencies](../../README.md#about)
 
-# Building
+## Building
 
 The following sections detail how to obtain, build and deploy the CRI-O
 prestart hook.
 
-## Getting the source code
+### Getting the source code
 
 ```bash
 $ export INTEL_DEVICE_PLUGINS_SRC=/path/to/intel-device-plugins-for-kubernetes
 $ git clone https://github.com/intel/intel-device-plugins-for-kubernetes ${INTEL_DEVICE_PLUGINS_SRC}
 ```
 
-## Building the image
+### Building the image
 
 ```bash
 $ cd ${INTEL_DEVICE_PLUGINS_SRC}
@@ -61,7 +61,7 @@ $ make intel-fpga-initcontainer
 Successfully tagged intel/intel-fpga-initcontainer:devel
 ```
 
-# Configuring CRI-O
+## Configuring CRI-O
 
 Recent versions of [CRI-O](https://github.com/cri-o/cri-o) are shipped with a default configuration
 file that prevents CRI-O from discovering and configuring hooks automatically.
@@ -1,6 +1,6 @@
 # Intel FPGA device plugin for Kubernetes
 
-# Table of Contents
+Table of Contents
 
 * [Introduction](#introduction)
 * [Component overview](#component-overview)
@@ -18,7 +18,7 @@
 * [Run FPGA device plugin in af mode](#run-fpga-device-plugin-in-af-mode)
 * [Run FPGA device plugin in region mode](#run-fpga-device-plugin-in-region-mode)
 
-# Introduction
+## Introduction
 
 This FPGA device plugin is part of a collection of Kubernetes components found within this
 repository that enable integration of Intel FPGA hardware into Kubernetes.
@@ -38,7 +38,7 @@ The components together implement the following features:
 - orchestration of FPGA programming
 - access control for FPGA hardware
 
-# Component overview
+## Component overview
 
 The following components are part of this repository, and work together to support Intel FPGAs under
 Kubernetes:
@@ -70,7 +70,7 @@ Kubernetes:
 The repository also contains an [FPGA helper tool](../fpga_tool/README.md) that may be useful during
 development, initial deployment and debugging.
 
-# FPGA modes
+## FPGA modes
 
 The FPGA plugin set can run in one of two modes:
 
@@ -95,14 +95,14 @@ af mode:
 
 
 
-# Installation
+## Installation
 
 The below sections cover how to obtain, build and install this component.
 
 Components can generally be installed either using DaemonSets or running them
 'by hand' on each node.
 
-## Pre-built images
+### Pre-built images
 
 Pre-built images of the components are available on the [Docker hub](https://hub.docker.com/u/intel).
 These images are automatically built and uploaded to the hub from the latest `master` branch of
@@ -123,7 +123,7 @@ The following images are available on the Docker hub:
 - [The FPGA admission webhook](https://hub.docker.com/r/intel/intel-fpga-admissionwebhook)
 - [The FPGA CRI-O prestart hook (in the `initcontainer` image)](https://hub.docker.com/r/intel/intel-fpga-initcontainer)
 
-## Dependencies
+### Dependencies
 
 All components have the same basic dependencies as the
 [generic plugin framework dependencies](../../README.md#about)
@@ -136,7 +136,7 @@ major components:
 - [FPGA prestart CRI-O hook](../fpga_crihook/README.md)
 
 The CRI-O hook is only *required* if `region` mode is being used, but is installed by default by the
-[FPGA plugin DaemonSet YAML](../../deployments/fpga_plugin/fpga_plugin.yaml), and is benign
+[FPGA plugin DaemonSet YAML](/deployments/fpga_plugin/base/intel-fpga-plugin-daemonset.yaml), and is benign
 in `af` mode.
 
 If using the `af` mode, and therefore *not* using the
@@ -153,7 +153,7 @@ which is present and thus to use:
 Install this component (FPGA device plugin) first, and then follow the links
 and instructions to install the other components.
 
-## Getting the source code
+### Getting the source code
 
 To obtain the YAML files used for deployment, or to obtain the source tree if you intend to
 do a hand-deployment or build your own image, you will require access to the source code:
@@ -163,7 +163,7 @@ $ export INTEL_DEVICE_PLUGINS_SRC=/path/to/intel-device-plugins-for-kubernetes
 $ git clone https://github.com/intel/intel-device-plugins-for-kubernetes ${INTEL_DEVICE_PLUGINS_SRC}
 ```
 
-## Verify node kubelet config
+### Verify node kubelet config
 
 Every node that will be running the FPGA plugin must have the
 [kubelet device-plugins](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
@@ -174,7 +174,7 @@ $ ls /var/lib/kubelet/device-plugins/kubelet.sock
 /var/lib/kubelet/device-plugins/kubelet.sock
 ```
 
-## Deploying as a DaemonSet
+### Deploying as a DaemonSet
 
 As a pre-requisite you need to have [cert-manager](https://cert-manager.io)
 up and running:
@@ -249,11 +249,11 @@ $ kubectl annotate node <node_name> 'fpga.intel.com/device-plugin-mode=af'
 ```
 And restart the pods on the nodes.
 
-> **Note:** The FPGA plugin [DaemonSet YAML](../../deployments/fpga_plugin/base/intel-fpga-plugin-daemonset.yaml)
-> also deploys the [FPGA CRI-O hook](../fpga_criohook) `initcontainer` image, but it will be
+> **Note:** The FPGA plugin [DaemonSet YAML](/deployments/fpga_plugin/base/intel-fpga-plugin-daemonset.yaml)
+> also deploys the [FPGA CRI-O hook](../fpga_crihook/README.md) `initcontainer` image, but it will be
 > benign (un-used) when running the FPGA plugin in `af` mode.
 
-### Verify plugin registration
+#### Verify plugin registration
 
 Verify the FPGA plugin has been deployed on the nodes. The below shows the output
 you can expect in `region` mode, but similar output should be expected for `af`
@@ -265,20 +265,20 @@ fpga.intel.com/region-ce48969398f05f33946d560708be108a: 1
 fpga.intel.com/region-ce48969398f05f33946d560708be108a: 1
 ```
 
-### Building the plugin image
+#### Building the plugin image
 
 If you need to build your own image from sources, and are not using the images
 available on the Docker Hub, follow the below details.
 
-> **Note:** The FPGA plugin [DaemonSet YAML](../../deployments/fpga_plugin/fpga_plugin.yaml)
-> also deploys the [FPGA CRI-O hook](../fpga_criohook) `initcontainer` image as well. You may
+> **Note:** The FPGA plugin [DaemonSet YAML](/deployments/fpga_plugin/fpga_plugin.yaml)
+> also deploys the [FPGA CRI-O hook](../fpga_crihook/README.md) `initcontainer` image as well. You may
 > also wish to build that image locally before deploying the FPGA plugin to avoid deploying
 > the Docker hub default image.
 
 The following will use `docker` to build a local container image called
 `intel/intel-fpga-plugin` with the tag `devel`.
 The image build tool can be changed from the default docker by setting the `BUILDER` argument
-to the [Makefile](../../Makefile).
+to the [Makefile](/Makefile).
 
 ```bash
 $ cd ${INTEL_DEVICE_PLUGINS_SRC}
@@ -290,10 +290,10 @@ Successfully tagged intel/intel-fpga-plugin:devel
 This image launches `fpga_plugin` in `af` mode by default.
 
 To use your own container image, create your own kustomization overlay patching the
-[`deployments/fpga_plugin/base/intel-fpga-plugin-daemonset.yaml`](../../deployments/fpga_plugin/base/intel-fpga-plugin-daemonset.yaml)
+[`deployments/fpga_plugin/base/intel-fpga-plugin-daemonset.yaml`](/deployments/fpga_plugin/base/intel-fpga-plugin-daemonset.yaml)
 file.
 
-## Deploy by hand
+### Deploy by hand
 
 For development purposes, it is sometimes convenient to deploy the plugin 'by hand'
 on a node. In this case, you do not need to build the complete container image,
@@ -303,7 +303,7 @@ and can build just the plugin.
 > to be configured or installed. It is recommended you reference the actions of the
 > DaemonSet YAML deployment for more details.
 
-### Build FPGA device plugin
+#### Build FPGA device plugin
 
 When deploying by hand, you only need to build the plugin itself, and not the whole
 container image:
@@ -313,7 +313,7 @@ $ cd ${INTEL_DEVICE_PLUGINS_SRC}
 $ make fpga_plugin
 ```
 
-### Run FPGA device plugin in af mode
+#### Run FPGA device plugin in af mode
 
 ```bash
 $ export KUBE_CONF=/var/run/kubernetes/admin.kubeconfig # path to kubeconfig with admin's credentials
@@ -328,7 +328,7 @@ device-plugin registered
 the nodes' DAC rules must be configured to allow device plugin socket creation and kubelet registration.
 Furthermore, the deployment's `securityContext` must be configured with appropriate `runAsUser/runAsGroup`.
 
-### Run FPGA device plugin in region mode
+#### Run FPGA device plugin in region mode
 
 ```bash
 $ export KUBE_CONF=/var/run/kubernetes/admin.kubeconfig # path to kubeconfig with admin's credentials
@@ -1,11 +1,11 @@
 # Intel FPGA test tool
 
-# Introduction
+## Introduction
 
 This directory contains an FPGA test tool that can be used to locate, examine and program Intel
 FPGAs.
 
-## Command line and usage
+### Command line and usage
 
 The tool has the following command line arguments:
 
@@ -1,6 +1,6 @@
 # Intel GPU device plugin for Kubernetes
 
-# Table of Contents
+Table of Contents
 
 * [Introduction](#introduction)
 * [Installation](#installation)
@@ -16,7 +16,7 @@
 * [Verify plugin registration](#verify-plugin-registration)
 * [Testing the plugin](#testing-the-plugin)
 
-# Introduction
+## Introduction
 
 The GPU device plugin for Kubernetes supports acceleration using the following Intel GPU hardware families:
 
@@ -35,13 +35,13 @@ For example, the Intel Media SDK can offload video transcoding operations, and t
 The device plugin can also be used with [GVT-d](https://github.com/intel/gvt-linux/wiki/GVTd_Setup_Guide) device
 passthrough and acceleration.
 
-# Installation
+## Installation
 
 The following sections detail how to obtain, build, deploy and test the GPU device plugin.
 
 Examples are provided showing how to deploy the plugin either using a DaemonSet or by hand on a per-node basis.
 
-## Deploy with pre-built container image
+### Deploy with pre-built container image
 
 [Pre-built images](https://hub.docker.com/r/intel/intel-gpu-plugin)
 of this component are available on the Docker hub. These images are automatically built and uploaded
@@ -71,14 +71,14 @@ daemonset.apps/intel-gpu-plugin created
 
 Nothing else is needed. But if you want to deploy a customized version of the plugin read further.
 
-## Getting the source code
+### Getting the source code
 
 ```bash
 $ export INTEL_DEVICE_PLUGINS_SRC=/path/to/intel-device-plugins-for-kubernetes
 $ git clone https://github.com/intel/intel-device-plugins-for-kubernetes ${INTEL_DEVICE_PLUGINS_SRC}
 ```
 
-## Verify node kubelet config
+### Verify node kubelet config
 
 Every node that will be running the gpu plugin must have the
 [kubelet device-plugins](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
@@ -89,12 +89,12 @@ $ ls /var/lib/kubelet/device-plugins/kubelet.sock
 /var/lib/kubelet/device-plugins/kubelet.sock
 ```
 
-## Deploying as a DaemonSet
+### Deploying as a DaemonSet
 
 To deploy the gpu plugin as a daemonset, you first need to build a container image for the
 plugin and ensure that is visible to your nodes.
 
-### Build the plugin image
+#### Build the plugin image
 
 The following will use `docker` to build a local container image called
 `intel/intel-gpu-plugin` with the tag `devel`.
@@ -109,9 +109,9 @@ $ make intel-gpu-plugin
 Successfully tagged intel/intel-gpu-plugin:devel
 ```
 
-### Deploy plugin DaemonSet
+#### Deploy plugin DaemonSet
 
-You can then use the [example DaemonSet YAML](../../deployments/gpu_plugin/base/intel-gpu-plugin.yaml)
+You can then use the [example DaemonSet YAML](/deployments/gpu_plugin/base/intel-gpu-plugin.yaml)
 file provided to deploy the plugin. The default kustomization that deploys the YAML as is:
 
 ```bash
@@ -122,7 +122,7 @@ daemonset.apps/intel-gpu-plugin created
 Alternatively, if your cluster runs
 [Node Feature Discovery](https://github.com/kubernetes-sigs/node-feature-discovery),
 you can deploy the device plugin only on nodes with Intel GPU.
-The [nfd_labeled_nodes](../../deployments/gpu_plugin/overlays/nfd_labeled_nodes/)
+The [nfd_labeled_nodes](/deployments/gpu_plugin/overlays/nfd_labeled_nodes/)
 kustomization adds the nodeSelector to the DaemonSet:
 
 ```bash
@@ -134,12 +134,12 @@ daemonset.apps/intel-gpu-plugin created
 the nodes' DAC rules must be configured to allow device plugin socket creation and kubelet registration.
 Furthermore, the deployment's `securityContext` must be configured with appropriate `runAsUser/runAsGroup`.
 
-## Deploy by hand
+### Deploy by hand
 
 For development purposes, it is sometimes convenient to deploy the plugin 'by hand' on a node.
 In this case, you do not need to build the complete container image, and can build just the plugin.
 
-### Build the plugin
+#### Build the plugin
 
 First we build the plugin:
 
@@ -148,7 +148,7 @@ $ cd ${INTEL_DEVICE_PLUGINS_SRC}
 $ make gpu_plugin
 ```
 
-### Run the plugin as administrator
+#### Run the plugin as administrator
 
 Now we can run the plugin directly on the node:
 
@@ -158,7 +158,7 @@ device-plugin start server at: /var/lib/kubelet/device-plugins/gpu.intel.com-i91
 device-plugin registered
 ```
 
-## Verify plugin registration
+### Verify plugin registration
 
 You can verify the plugin has been registered with the expected nodes by searching for the relevant
 resource allocation status on the nodes:
@@ -169,7 +169,7 @@ master
  i915: 1
 ```
 
-## Testing the plugin
+### Testing the plugin
 
 We can test the plugin is working by deploying the provided example OpenCL image with FFT offload enabled.
 
@@ -1,18 +1,18 @@
 # Intel One Operator for Device Plugins
 
-# Table of Contents
+Table of Contents
 
 * [Introduction](#introduction)
 * [Installation](#installation)
 
-# Introduction
+## Introduction
 
 This One Operator is a Kubernetes custom controller whose goal is to serve the
 installation and lifecycle management of Intel device plugins for Kubernetes.
 It provides a single point of control for GPU, QAT and FPGA devices to cluster
 administrators.
 
-# Installation
+## Installation
 
 The operator depends on [cert-manager](https://cert-manager.io/) running in the cluster.
 To install it run:
@@ -71,4 +71,4 @@ $ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/
 ```
 
 Now you can deploy the device plugins by creating corresponding custom resources.
-The samples for them are available [here](../../deployments/operator/samples/).
+The samples for them are available [here](/deployments/operator/samples/).
@@ -1,6 +1,6 @@
 # Intel QuickAssist Technology (QAT) device plugin for Kubernetes
 
-# Table of Contents
+Table of Contents
 
 * [Introduction](#introduction)
 * [Modes and Configuration options](#modes-and-configuration-options)
@@ -26,7 +26,7 @@
 * [OpenSSL QAT demo](#openssl-qat-demo)
 * [Checking for hardware](#checking-for-hardware)
 
-# Introduction
+## Introduction
 
 This Intel QAT device plugin provides support for Intel QAT devices under Kubernetes.
 The supported devices are determined by the VF device drivers available in your Linux
@@ -44,7 +44,7 @@ Demonstrations are provided utilising [DPDK](https://doc.dpdk.org/) and [OpenSSL
 [Kata Containers](https://katacontainers.io/) QAT integration is documented in the
 [Kata Containers documentation repository][6].
 
-## Modes and Configuration options
+### Modes and Configuration options
 
 The QAT plugin can take a number of command line arguments, summarised in the following table:
 
@@ -58,9 +58,9 @@ The QAT plugin can take a number of command line arguments, summarised in the fo
 The plugin also accepts a number of other arguments related to logging. Please use the `-h` option to see
 the complete list of logging related options.
 
-The example [DaemonSet YAML](../../deployments/qat_plugin/base/intel-qat-plugin.yaml) passes a number of these
+The example [DaemonSet YAML](/deployments/qat_plugin/base/intel-qat-plugin.yaml) passes a number of these
 arguments, and takes its default values from the
-[QAT default ConfigMap](../../deployments/qat_plugin/base/intel-qat-plugin-config.yaml). The following
+[QAT default ConfigMap](/deployments/qat_plugin/base/intel-qat-plugin-config.yaml). The following
 table summarises the defaults:
 
 | Argument | Variable | Default setting | Explanation |
@@ -86,13 +86,13 @@ The `kerneldrv` mode does not guarantee full device isolation between containers
 and therefore it's not recommended. This mode will be deprecated and removed once `libqat`
 implements non-UIO based device access.
 
-# Installation
+## Installation
 
 The below sections cover how to obtain, build and install this component.
 
 The component can be installed either using a DaemonSet or running 'by hand' on each node.
 
-## Prerequisites
+### Prerequisites
 
 The component has the same basic dependencies as the
 [generic plugin framework dependencies](../../README.md#about).
@@ -107,7 +107,7 @@ are available via two methods. One of them must be installed and enabled:
 
 The demonstrations have their own requirements, listed in their own specific sections.
 
-## Pre-built image
+### Pre-built image
 
 [Pre-built images](https://hub.docker.com/r/intel/intel-qat-plugin)
 of this component are available on the Docker hub. These images are automatically built and uploaded
@@ -134,17 +134,17 @@ tag by default. If you do not build your own local images, your Kubernetes clust
 the devel images from the Docker hub by default.
 
 To use the release tagged versions of the images, edit the
-[YAML deployment files](../../deployments/qat_plugin/base/)
+[YAML deployment files](/deployments/qat_plugin/base/)
 appropriately.
 
-## Getting the source code
+### Getting the source code
 
 ```bash
 $ export INTEL_DEVICE_PLUGINS_SRC=/path/to/intel-device-plugins-for-kubernetes
 $ git clone https://github.com/intel/intel-device-plugins-for-kubernetes ${INTEL_DEVICE_PLUGINS_SRC}
 ```
 
-## Verify node kubelet config
+### Verify node kubelet config
 
 Every node that will be running the plugin must have the
 [kubelet device-plugins](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
@@ -155,17 +155,17 @@ $ ls /var/lib/kubelet/device-plugins/kubelet.sock
 /var/lib/kubelet/device-plugins/kubelet.sock
 ```
 
-## Deploying as a DaemonSet
+### Deploying as a DaemonSet
 
 To deploy the plugin as a DaemonSet, you first need to build a container image for the plugin and
 ensure that is visible to your nodes. If you do not build your own plugin, your cluster may pull
 the image from the pre-built Docker Hub images, depending on your configuration.
 
-### Build the plugin image
+#### Build the plugin image
 
 The following will use `docker` to build a local container image called `intel/intel-qat-plugin`
 with the tag `devel`. The image build tool can be changed from the default docker by setting the
-`BUILDER` argument to the [Makefile](../../Makefile).
+`BUILDER` argument to the [Makefile](/Makefile).
 
 ```bash
 $ cd ${INTEL_DEVICE_PLUGINS_SRC}
@ -177,11 +177,11 @@ Successfully tagged intel/intel-qat-plugin:devel
> **Note**: `kerneldrv` mode is excluded from the build by default. Add `EXTRA_BUILD_ARGS="--build-arg TAGS_KERNELDRV=kerneldrv"` to `make`
> to get `kerneldrv` functionality added to the build.

### Deploy the DaemonSet
#### Deploy the DaemonSet

Deploying the plugin first involves the deployment of a
[ConfigMap](../../deployments/qat_plugin/base/intel-qat-plugin-config.yaml) and the
[DaemonSet YAML](../../deployments/qat_plugin/base/intel-qat-plugin.yaml).
[ConfigMap](/deployments/qat_plugin/base/intel-qat-plugin-config.yaml) and the
[DaemonSet YAML](/deployments/qat_plugin/base/intel-qat-plugin.yaml).

There is a kustomization for deploying both:
```bash
@ -205,7 +205,7 @@ $ kubectl create -f ${INTEL_DEVICE_PLUGINS_SRC}/deployments/qat_plugin/base/inte
> socket creation and kubelet registration. Furthermore, the deployment's `securityContext` must
> be configured with appropriate `runAsUser/runAsGroup`.

### Verify QAT device plugin is registered on master:
#### Verify QAT device plugin is registered on master:

Verification of the plugin deployment and detection of QAT hardware can be confirmed by
examining the resource allocations on the nodes:
@ -216,19 +216,19 @@ $ kubectl describe node <node name> | grep qat.intel.com/generic
qat.intel.com/generic: 10
```

## Deploying by hand
### Deploying by hand

For development purposes, it is sometimes convenient to deploy the plugin 'by hand' on a node.
In this case, you do not need to build the complete container image, and can build just the plugin.

### Build QAT device plugin
#### Build QAT device plugin

```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make qat_plugin
```

### Deploy QAT plugin
#### Deploy QAT plugin

Deploy the plugin on a node by running it as `root`. The below is just an example - modify the
parameters as necessary for your setup:
@ -254,12 +254,12 @@ device-plugin registered
ListAndWatch: Sending device response
```

## QAT device plugin Demos
### QAT device plugin Demos

The below sections cover `DPDK` and `OpenSSL` demos, both of which utilise the
QAT device plugin under Kubernetes.

### DPDK QAT demos
#### DPDK QAT demos

The Data Plane Development Kit (DPDK) QAT demos use DPDK
[crypto-perf](https://doc.dpdk.org/guides/tools/cryptoperf.html) and
@ -267,14 +267,14 @@ The Data Plane Development Kit (DPDK) QAT demos use DPDK
DPDK QAT Poll-Mode Drivers (PMD). For more information on the tools' parameters, refer to the
website links.

#### DPDK Prerequisites
##### DPDK Prerequisites

For the DPDK QAT demos to work, the DPDK drivers must be loaded and configured.
For more information, refer to:
[DPDK Getting Started Guide for Linux](https://doc.dpdk.org/guides/linux_gsg/index.html) and
[DPDK Getting Started Guide, Linux Drivers section](http://dpdk.org/doc/guides/linux_gsg/linux_drivers.html)
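
As a rough sketch of what that configuration typically involves (the PCI address below is a placeholder, and the driver choice depends on your platform and DPDK version, so treat this as an illustration rather than the guides' exact steps), a QAT virtual function can be bound to `vfio-pci` with DPDK's `dpdk-devbind.py` tool:

```bash
# Hypothetical example - replace 0000:03:01.0 with a VF address from your
# own `lspci -D` output, and use the driver your setup requires.
$ sudo modprobe vfio-pci
$ sudo dpdk-devbind.py --bind=vfio-pci 0000:03:01.0
$ dpdk-devbind.py --status-dev crypto
```

The last command lists crypto-class devices and the drivers they are bound to, which lets you confirm the binding took effect.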

#### Build the image
##### Build the image

The demo uses a container image. You can either use the
[pre-built image from the Docker Hub](https://hub.docker.com/r/intel/crypto-perf), or build your own local copy.
@ -288,7 +288,7 @@ $ ./build-image.sh crypto-perf
Successfully tagged crypto-perf:devel
```

#### Deploy the pod
##### Deploy the pod

In the pod specification file, add container resource request and limit.
For example, `qat.intel.com/generic: <number of devices>` for a container requesting QAT devices.
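
As a sketch of such a request (the pod name and `busybox` image are placeholders, not taken from this repository; only the `qat.intel.com/generic` resource name comes from the text above), a minimal pod spec might look like:

```shell
# Hypothetical minimal pod spec requesting one QAT device; only the
# resource name qat.intel.com/generic is taken from the documentation.
cat > qat-test-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qat-test
spec:
  containers:
  - name: qat-test
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        qat.intel.com/generic: 1
      limits:
        qat.intel.com/generic: 1
EOF
# The resource appears once under requests and once under limits.
grep -c 'qat.intel.com/generic' qat-test-pod.yaml
```

The file could then be submitted with `kubectl apply -f qat-test-pod.yaml`.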
@ -307,7 +307,7 @@ $ kubectl get pods
> **Note**: The deployment example above uses [kustomize](https://github.com/kubernetes-sigs/kustomize)
> that is available in kubectl since Kubernetes v1.14 release.

#### Manual test run
##### Manual test run

Manually execute the `dpdk-test-crypto-perf` application to review the logs:

@ -324,7 +324,7 @@ $ dpdk-test-crypto-perf -l 6-7 -w $QAT1 \

> **Note**: Adapt the `.so` versions to what the DPDK version in the container provides.

#### Automated test run
##### Automated test run

It is also possible to deploy and run `crypto-perf` using the following
`kustomize` overlays:
@ -339,12 +339,12 @@ $ kubectl logs qat-dpdk-test-compress-perf-tc1
> **Note**: for `test-crypto1` and `test-compress1` to work, the cluster must enable
> [Kubernetes CPU manager's](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/) `static` policy.

### OpenSSL QAT demo
#### OpenSSL QAT demo

Please refer to the [Kata Containers documentation][8] for details on the OpenSSL
QAT acceleration demo.

# Checking for hardware
## Checking for hardware

In order to utilise the QAT device plugin, QuickAssist SR-IOV virtual functions must be configured.
You can verify this on your nodes by checking for the relevant PCI identifiers:

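
As one hedged example of such a check (the exact device descriptions and IDs vary with the QAT generation, so adjust the pattern for your hardware rather than treating this as the definitive command), QAT physical and virtual functions typically appear in `lspci` output as `Co-processor` class devices:

```bash
# Example only - device names differ between QAT generations
# (e.g. DH895xCC, C62x), so refine the grep for your platform.
$ lspci | grep -i 'co-processor'
```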
@ -1,6 +1,6 @@
# Intel Software Guard Extensions (SGX) device plugin for Kubernetes

# Table of Contents
Contents

* [Introduction](#introduction)
* [Installation](#installation)
@ -15,7 +15,7 @@
* [Build SGX device plugin](#build-sgx-device-plugin)
* [Deploy SGX plugin](#deploy-sgx-plugin)

# Introduction
## Introduction

**Note:** The work is still WIP. The SGX device plugin can be tested to run simple enclaves
but the full e2e deployment (including the SGX remote attestation) is not yet finished. See
@ -23,7 +23,7 @@ the open issues for details.

This Intel SGX device plugin provides support for Intel SGX TEE under Kubernetes.

## Modes and Configuration options
### Modes and Configuration options

The SGX plugin can take a number of command line arguments, summarised in the following table:

@ -35,13 +35,13 @@ The SGX plugin can take a number of command line arguments, summarised in the fo
The plugin also accepts a number of other arguments related to logging. Please use the `-h` option to see
the complete list of logging related options.

# Installation
## Installation

The below sections cover how to obtain, build and install this component.

The component can be installed either using a DaemonSet or running 'by hand' on each node.

## Prerequisites
### Prerequisites

The component has the same basic dependencies as the
[generic plugin framework dependencies](../../README.md#about).
@ -49,14 +49,14 @@ The component has the same basic dependancies as the
The SGX plugin requires Linux Kernel SGX drivers to be available. These drivers
are currently available via RFC patches on Linux Kernel Mailing List.

## Getting the source code
### Getting the source code

```bash
$ export INTEL_DEVICE_PLUGINS_SRC=/path/to/intel-device-plugins-for-kubernetes
$ git clone https://github.com/intel/intel-device-plugins-for-kubernetes ${INTEL_DEVICE_PLUGINS_SRC}
```

## Verify node kubelet config
### Verify node kubelet config

Every node that will be running the plugin must have the
[kubelet device-plugins](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
@ -67,16 +67,16 @@ $ ls /var/lib/kubelet/device-plugins/kubelet.sock
/var/lib/kubelet/device-plugins/kubelet.sock
```

## Deploying as a DaemonSet
### Deploying as a DaemonSet

To deploy the plugin as a DaemonSet, you first need to build a container image for the plugin and
ensure that it is visible to your nodes.

### Build the plugin and EPC source images
#### Build the plugin and EPC source images

The following will use `docker` to build local container images called `intel/intel-sgx-plugin`
and `intel/intel-sgx-initcontainer` with the tag `devel`. The image build tool can be changed from the
default docker by setting the `BUILDER` argument to the [Makefile](../../Makefile).
default docker by setting the `BUILDER` argument to the [Makefile](/Makefile).

```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
@ -88,11 +88,11 @@ $ make intel-sgx-initcontainer
Successfully tagged intel/intel-sgx-initcontainer:devel
```

### Deploy the DaemonSet
#### Deploy the DaemonSet

Deploying the plugin involves the deployment of a
[NFD EPC Source InitContainer Job](../../deployments/sgx_plugin/base/intel-sgx-hookinstall.yaml), the
[DaemonSet YAML](../../deployments/sgx_plugin/base/intel-sgx-plugin.yaml), and node-feature-discovery
[NFD EPC Source InitContainer Job](/deployments/sgx_plugin/base/intel-sgx-hookinstall.yaml), the
[DaemonSet YAML](/deployments/sgx_plugin/base/intel-sgx-plugin.yaml), and node-feature-discovery
with the necessary configuration.

There is a kustomization for deploying everything:
@ -100,7 +100,7 @@ There is a kustomization for deploying everything:
$ kubectl apply -k ${INTEL_DEVICE_PLUGINS_SRC}/deployments/sgx_plugin/overlays/nfd
```

### Verify SGX device plugin is registered on master:
#### Verify SGX device plugin is registered on master:

Verification of the plugin deployment and detection of SGX hardware can be confirmed by
examining the resource allocations on the nodes:
@ -119,19 +119,19 @@ $ kubectl describe node <node name> | grep sgx.intel.com
sgx.intel.com/provision 1 1
```

## Deploying by hand
### Deploying by hand

For development purposes, it is sometimes convenient to deploy the plugin 'by hand' on a node.
In this case, you do not need to build the complete container image, and can build just the plugin.

### Build SGX device plugin
#### Build SGX device plugin

```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make sgx_plugin
```

### Deploy SGX plugin
#### Deploy SGX plugin

Deploy the plugin on a node by running it as `root`. The below is just an example - modify the
parameters as necessary for your setup:

@ -1,6 +1,6 @@
# Intel VPU device plugin for Kubernetes

# Table of Contents
Table of Contents

* [Introduction](#introduction)
* [Installation](#installation)
@ -18,7 +18,7 @@
* [Create a job running unit tests off the local Docker image](#create-a-job-running-unit-tests-off-the-local-docker-image)
* [Review the job logs](#review-the-job-logs)

# Introduction
## Introduction

The VPU device plugin supports the following cards:

@ -38,13 +38,13 @@ This card has:
> To get VCAC-A or Mustang card running hddl, please refer to:
> https://github.com/OpenVisualCloud/Dockerfiles/blob/master/VCAC-A/script/setup_hddl.sh

# Installation
## Installation

The following sections detail how to obtain, build, deploy and test the VPU device plugin.

Examples are provided showing how to deploy the plugin either using a DaemonSet or by hand on a per-node basis.

## Getting the source code
### Getting the source code

> **Note:** It is presumed you have a valid and configured [golang](https://golang.org/) environment
> that meets the minimum required version.
@ -54,7 +54,7 @@ $ mkdir -p $(go env GOPATH)/src/github.com/intel
$ git clone https://github.com/intel/intel-device-plugins-for-kubernetes $(go env GOPATH)/src/github.com/intel/intel-device-plugins-for-kubernetes
```

## Verify node kubelet config
### Verify node kubelet config

Every node that will be running the vpu plugin must have the
[kubelet device-plugins](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
@ -65,18 +65,18 @@ $ ls /var/lib/kubelet/device-plugins/kubelet.sock
/var/lib/kubelet/device-plugins/kubelet.sock
```

## Deploying as a DaemonSet
### Deploying as a DaemonSet

To deploy the vpu plugin as a daemonset, you first need to build a container image for the
plugin and ensure that it is visible to your nodes.

### Build the plugin image
#### Build the plugin image

The following will use `docker` to build a local container image called
`intel/intel-vpu-plugin` with the tag `devel`.

The image build tool can be changed from the default `docker` by setting the `BUILDER` argument
to the [`Makefile`](Makefile).
to the [`Makefile`](/Makefile).

```bash
$ cd $(go env GOPATH)/src/github.com/intel/intel-device-plugins-for-kubernetes
@ -85,9 +85,9 @@ $ make intel-vpu-plugin
Successfully tagged intel/intel-vpu-plugin:devel
```

### Deploy plugin DaemonSet
#### Deploy plugin DaemonSet

You can then use the [example DaemonSet YAML](../../deployments/vpu_plugin/base/intel-vpu-plugin.yaml)
You can then use the [example DaemonSet YAML](/deployments/vpu_plugin/base/intel-vpu-plugin.yaml)
file provided to deploy the plugin. The default kustomization that deploys the YAML as is:

```bash
@ -99,12 +99,12 @@ daemonset.apps/intel-vpu-plugin created
the nodes' DAC rules must be configured to allow device plugin socket creation and kubelet registration.
Furthermore, the deployment's `securityContext` must be configured with appropriate `runAsUser/runAsGroup`.

## Deploy by hand
### Deploy by hand

For development purposes, it is sometimes convenient to deploy the plugin 'by hand' on a node.
In this case, you do not need to build the complete container image, and can build just the plugin.

### Build the plugin
#### Build the plugin

First we build the plugin:

@ -115,7 +115,7 @@ $ cd $(go env GOPATH)/src/github.com/intel/intel-device-plugins-for-kubernetes
$ make vpu_plugin
```

### Run the plugin as administrator
#### Run the plugin as administrator

Now we can run the plugin directly on the node:

@ -124,7 +124,7 @@ $ sudo $(go env GOPATH)/src/github.com/intel/intel-device-plugins-for-kubernetes
VPU device plugin started
```

## Verify plugin registration
### Verify plugin registration

You can verify the plugin has been registered with the expected nodes by searching for the relevant
resource allocation status on the nodes:
@ -135,11 +135,11 @@ vcaanode00
hddl: 12
```

## Testing the plugin
### Testing the plugin

We can test that the plugin is working by deploying the provided example OpenVINO image with HDDL plugin enabled.

### Build a Docker image with a classification example
#### Build a Docker image with a classification example

```bash
$ cd demo
@ -148,7 +148,7 @@ $ ./build-image.sh ubuntu-demo-openvino
Successfully tagged ubuntu-demo-openvino:devel
```

### Create a job running unit tests off the local Docker image
#### Create a job running unit tests off the local Docker image

```bash
$ cd $(go env GOPATH)/src/github.com/intel/intel-device-plugins-for-kubernetes
@ -156,7 +156,7 @@ $ kubectl apply -f demo/intelvpu-job.yaml
job.batch/intelvpu-demo-job created
```

### Review the job logs
#### Review the job logs

```bash
$ kubectl get pods | fgrep intelvpu
@ -1,6 +1,6 @@
# Intel Device Plugin Demo for Kubernetes
# Demo

## Table of Contents
Table of Contents

- [Demo overview](#demo-overview)
- [Intel® GPU Device Plugin demo video](#intel-gpu-device-plugin-demo-video)