vpu: remove deprecated plugin

The VPU plugin can only be used with devices that are
no longer supported by upper layers, such as OpenVINO.

The deprecation plan for the plugin was announced earlier
this year, and post-v0.28 marks the point when the plugin is
removed from the repository.

Releases before v0.29 have the plugin available should it
be needed.
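For clusters that still depend on the plugin, it remains deployable
from any pre-removal release tag. A minimal sketch (the v0.28.0 tag
shown is an assumed example; substitute any release tag prior to
v0.29):

```shell
# Deploy the VPU plugin from a release that still ships it.
# v0.28.0 is an assumed example tag; any pre-v0.29 release tag applies.
kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/vpu_plugin?ref=v0.28.0'
```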

Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
Mikko Ylinen 2023-09-26 07:30:14 +03:00
parent 4ae331088b
commit 319843c94e
20 changed files with 3 additions and 1082 deletions

.gitignore (vendored)

@@ -19,7 +19,6 @@ cmd/qat_plugin/qat_plugin
 cmd/sgx_admissionwebhook/sgx_admissionwebhook
 cmd/sgx_plugin/sgx_plugin
 cmd/sgx_epchook/sgx_epchook
-cmd/vpu_plugin/vpu_plugin
 cmd/operator/operator
 deployments/fpga_admissionwebhook/base/intel-fpga-webhook-certs-secret


@@ -31,8 +31,8 @@ TESTDATA_DIR = pkg/topology/testdata
 EXTRA_BUILD_ARGS += --build-arg GOLICENSES_VERSION=$(GOLICENSES_VERSION)
-pkgs = $(shell $(GO) list ./... | grep -v vendor | grep -v e2e | grep -v envtest | grep -v vpu_plugin)
-cmds = $(shell ls --ignore=internal --ignore=vpu_plugin cmd)
+pkgs = $(shell $(GO) list ./... | grep -v vendor | grep -v e2e | grep -v envtest)
+cmds = $(shell ls --ignore=internal cmd)
 all: build
@@ -233,12 +233,10 @@ null :=
 space := $(null) #
 comma := ,
 images_json := $(subst $(space),$(comma),[$(addprefix ",$(addsuffix ",$(images) $(demos))]))
-skip_images_source := ubuntu-demo-openvino intel-vpu-plugin
-skip_images := $(subst $(space),$(comma),$(addprefix ",$(addsuffix ", $(skip_images_source))))
 check-github-actions:
 	@python3 -c 'import sys, yaml, json; json.dump(yaml.load(sys.stdin, Loader=yaml.SafeLoader), sys.stdout)' < .github/workflows/lib-build.yaml | \
-	jq -e '$(images_json) - [$(skip_images)] - .jobs.image.strategy.matrix.image == []' > /dev/null || \
+	jq -e '$(images_json) - .jobs.image.strategy.matrix.image == []' > /dev/null || \
 	(echo "Make sure all images are listed in .github/workflows/lib-build.yaml"; exit 1)
 .PHONY: all format test lint build images $(cmds) $(images) lock-images vendor pre-pull set-version check-github-actions envtest fixture update-fixture install-tools test-image-base-layer


@@ -17,7 +17,6 @@ Table of Contents
 * [GPU device plugin](#gpu-device-plugin)
 * [FPGA device plugin](#fpga-device-plugin)
 * [QAT device plugin](#qat-device-plugin)
-* [VPU device plugin](#vpu-device-plugin)
 * [SGX device plugin](#sgx-device-plugin)
 * [DSA device plugin](#dsa-device-plugin)
 * [DLB device plugin](#dlb-device-plugin)
@@ -108,18 +107,6 @@ Details for integrating the QAT device plugin into [Kata Containers](https://kat
 can be found in the
 [Kata Containers documentation repository](https://github.com/kata-containers/kata-containers/blob/main/docs/use-cases/using-Intel-QAT-and-kata.md).
-### VPU Device Plugin
-The [VPU device plugin](cmd/vpu_plugin/README.md) supports Intel VCAC-A card
-(https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/media-analytics-vcac-a-accelerator-card-by-celestica-datasheet.pdf)
-the card has:
-- 1 Intel Core i3-7100U processor
-- 12 MyriadX VPUs
-- 8GB DDR4 memory
-The demo subdirectory includes details of a OpenVINO deployment and use of the
-VPU plugin. Sources can be found in [openvino-demo](demo/ubuntu-demo-openvino).
 ### SGX Device Plugin
 The [SGX device plugin](cmd/sgx_plugin/README.md) allows workloads to use
@@ -249,8 +236,6 @@ The summary of resources available via plugins in this repository is given in th
 * [crypto-perf-dpdk-pod-requesting-qat.yaml](deployments/qat_dpdk_app/base/crypto-perf-dpdk-pod-requesting-qat.yaml)
 * `sgx.intel.com` : `epc`
 * [intelsgx-job.yaml](deployments/sgx_enclave_apps/base/intelsgx-job.yaml)
-* `vpu.intel.com` : `hddl`
-* [intelvpu-job.yaml](demo/intelvpu-job.yaml)
 ## Developers


@@ -1,64 +0,0 @@
## This is a generated file, do not edit directly. Edit build/docker/templates/intel-vpu-plugin.Dockerfile.in instead.
##
## Copyright 2022 Intel Corporation. All Rights Reserved.
##
## Licensed under the Apache License, Version 2.0 (the "License");
## you may not use this file except in compliance with the License.
## You may obtain a copy of the License at
##
## http://www.apache.org/licenses/LICENSE-2.0
##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## See the License for the specific language governing permissions and
## limitations under the License.
###
## FINAL_BASE can be used to configure the base image of the final image.
##
## This is used in two ways:
## 1) make <image-name> BUILDER=<docker|buildah>
## 2) docker build ... -f <image-name>.Dockerfile
##
## The project default is 1) which sets FINAL_BASE=gcr.io/distroless/static
## (see build-image.sh).
## 2) and the default FINAL_BASE is primarily used to build Redhat Certified Openshift Operator container images that must be UBI based.
## The RedHat build tool does not allow additional image build parameters.
ARG FINAL_BASE=registry.access.redhat.com/ubi9-micro:latest
###
##
## GOLANG_BASE can be used to make the build reproducible by choosing an
## image by its hash:
## GOLANG_BASE=golang@sha256:9d64369fd3c633df71d7465d67d43f63bb31192193e671742fa1c26ebc3a6210
##
## This is used on release branches before tagging a stable version.
## The main branch defaults to using the latest Golang base image.
ARG GOLANG_BASE=golang:1.21-bookworm
###
FROM ${GOLANG_BASE} as builder
ARG DIR=/intel-device-plugins-for-kubernetes
ARG GO111MODULE=on
ARG BUILDFLAGS="-ldflags=-w -s"
ARG GOLICENSES_VERSION
ARG CMD=vpu_plugin
WORKDIR $DIR
COPY . .
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN echo "deb-src http://deb.debian.org/debian unstable main" | tee -a /etc/apt/sources.list
RUN apt-get update && apt-get --no-install-recommends -y install dpkg-dev libusb-1.0-0-dev
RUN mkdir -p /install_root/licenses/libusb && (cd /install_root/licenses/libusb && apt-get --download-only source libusb-1.0-0)
RUN (cd cmd/$CMD; GO111MODULE=${GO111MODULE} CGO_ENABLED=1 go install "${BUILDFLAGS}") && install -D /go/bin/vpu_plugin /install_root/usr/local/bin/intel_vpu_device_plugin
RUN install -D ${DIR}/LICENSE /install_root/licenses/intel-device-plugins-for-kubernetes/LICENSE \
&& if [ ! -d "licenses/$CMD" ] ; then \
GO111MODULE=on go run github.com/google/go-licenses@${GOLICENSES_VERSION} save "./cmd/$CMD" \
--save_path /install_root/licenses/$CMD/go-licenses ; \
else mkdir -p /install_root/licenses/$CMD/go-licenses/ && cd licenses/$CMD && cp -r * /install_root/licenses/$CMD/go-licenses/ ; fi
FROM debian:unstable-slim
LABEL vendor='Intel®'
LABEL version='devel'
LABEL release='1'
LABEL name='intel-vpu-plugin'
LABEL summary='Intel® VPU device plugin for Kubernetes'
RUN apt-get update && apt-get --no-install-recommends -y install libusb-1.0-0 && rm -rf /var/lib/apt/lists/\*
COPY --from=builder /install_root /
ENTRYPOINT ["/usr/local/bin/intel_vpu_device_plugin"]


@@ -1,35 +0,0 @@
#include "final_base.docker"
#include "golang_base.docker"
FROM ${GOLANG_BASE} as builder
#include "default_args.docker"
ARG CMD=vpu_plugin
WORKDIR $DIR
COPY . .
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN echo "deb-src http://deb.debian.org/debian unstable main" | tee -a /etc/apt/sources.list
RUN apt-get update && apt-get --no-install-recommends -y install dpkg-dev libusb-1.0-0-dev
RUN mkdir -p /install_root/licenses/libusb \
&& (cd /install_root/licenses/libusb && apt-get --download-only source libusb-1.0-0)
RUN (cd cmd/$CMD; GO111MODULE=${GO111MODULE} CGO_ENABLED=1 go install "${BUILDFLAGS}") \
&& install -D /go/bin/vpu_plugin /install_root/usr/local/bin/intel_vpu_device_plugin
#include "default_licenses.docker"
FROM debian:unstable-slim
#include "default_labels.docker"
LABEL name='intel-vpu-plugin'
LABEL summary='Intel® VPU device plugin for Kubernetes'
RUN apt-get update && apt-get --no-install-recommends -y install libusb-1.0-0 && rm -rf /var/lib/apt/lists/\*
COPY --from=builder /install_root /
ENTRYPOINT ["/usr/local/bin/intel_vpu_device_plugin"]


@@ -1,199 +0,0 @@
# Intel VPU device plugin for Kubernetes
Table of Contents
* [Introduction](#introduction)
* [Installation](#installation)
* [Pre-built Images](#pre-built-images)
* [Verify Plugin Registration](#verify-plugin-registration)
* [Testing and Demos](#testing-and-demos)
## Introduction
The VPU device plugin supports below cards:
[Intel VCAC-A](https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/media-analytics-vcac-a-accelerator-card-by-celestica-datasheet.pdf).
This card has:
- 1 Intel Core i3-7100U processor
- 12 MyriadX VPUs
- 8GB DDR4 memory
- PCIe interface to Xeon E3/E5 server
[Intel Mustang V100](https://software.intel.com/en-us/articles/introducing-the-iei-tank-aiot-developer-kit-and-mustang-v100-mx8-pcie-accelerator-card).
This card has:
- 8 MyriadX VPUs
- PCIe interface to 6th+ Generation Core PC or Xeon E3/E5 server
[Gen 3 Intel® Movidius™ VPU HDDL VE3](https://www.intel.com/content/www/us/en/products/details/processors/movidius-vpu.html)
This card has:
- 3 Intel® Movidius Gen 3 Intel® Movidius™ VPU SoCs
[Intel® Movidius™ S VPU](https://www.intel.com/content/www/us/en/products/details/processors/movidius-vpu.html)
This card has:
- 6 Intel® Movidius Gen 3 Intel® Movidius™ VPU SoCs
> **Note:** This device plugin need HDDL daemon service to be running either natively or from a container.
> To get VCAC-A or Mustang card running hddl, please refer to:
> https://github.com/OpenVisualCloud/Dockerfiles/blob/master/VCAC-A/script/setup_hddl.sh
## Installation
The following sections detail how to use the VPU device plugin.
### Pre-built Images
[Pre-built images](https://hub.docker.com/r/intel/intel-vpu-plugin)
of this component are available on the Docker hub. These images are automatically built and uploaded
to the hub from the latest main branch of this repository.
Release tagged images of the components are also available on the Docker hub, tagged with their
release version numbers in the format `x.y.z`, corresponding to the branches and releases in this
repository. Thus the easiest way to deploy the plugin in your cluster is to run this command
```bash
$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/vpu_plugin?ref=<RELEASE_VERSION>'
daemonset.apps/intel-vpu-plugin created
```
Where `<RELEASE_VERSION>` needs to be substituted with the desired [release tag](https://github.com/intel/intel-device-plugins-for-kubernetes/tags) or `main` to get `devel` images.
For xlink device, deploy DaemonSet as
```bash
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/vpu_plugin/overlays/xlink
daemonset.apps/intel-vpu-plugin created
```
Nothing else is needed. See [the development guide](../../DEVEL.md) for details if you want to deploy a customized version of the plugin.
> **Note**: It is also possible to run the VPU device plugin using a non-root user. To do this,
the nodes' DAC rules must be configured to device plugin socket creation and kubelet registration.
Furthermore, the deployments `securityContext` must be configured with appropriate `runAsUser/runAsGroup`.
### Verify Plugin Registration
You can verify the plugin has been registered with the expected nodes by searching for the relevant
resource allocation status on the nodes:
```bash
$ kubectl get nodes -o=jsonpath="{range .items[*]}{.metadata.name}{'\n'}{' hddl: '}{.status.allocatable.vpu\.intel\.com/hddl}{'\n'}"
vcaanode00
hddl: 12
```
## Testing and Demos
We can test the plugin is working by deploying the provided example OpenVINO image with HDDL plugin enabled.
### Build a Docker image with an classification example
```bash
$ cd $(go env GOPATH)/src/github.com/intel/intel-device-plugins-for-kubernetes
$ make ubuntu-demo-openvino
...
Successfully tagged intel/ubuntu-demo-openvino:devel
```
### Create a job running unit tests off the local Docker image
```bash
$ cd $(go env GOPATH)/src/github.com/intel/intel-device-plugins-for-kubernetes
$ kubectl apply -f demo/intelvpu-job.yaml
job.batch/intelvpu-demo-job created
```
### Review the job logs
```bash
$ kubectl get pods | fgrep intelvpu
# substitute the 'xxxxx' below for the pod name listed in the above
$ kubectl logs intelvpu-demo-job-xxxxx
+ export HDDL_INSTALL_DIR=/root/hddl
+ HDDL_INSTALL_DIR=/root/hddl
+ export LD_LIBRARY_PATH=/root/inference_engine_samples_build/intel64/Release/lib/
+ LD_LIBRARY_PATH=/root/inference_engine_samples_build/intel64/Release/lib/
+ /root/inference_engine_samples_build/intel64/Release/classification_sample_async -m /root/openvino_models/ir/FP16/classification/squeezenet/1.1/caffe/squeezenet1.1.xml -i /root/car.png -d HDDL
[ INFO ] InferenceEngine:
API version ............ 2.0
Build .................. custom_releases/2019/R2_f5827d4773ebbe727c9acac5f007f7d94dd4be4e
Description ....... API
[ INFO ] Parsing input parameters
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] /root/car.png
[ INFO ] Creating Inference Engine
HDDL
HDDLPlugin version ......... 2.0
Build ........... 27579
[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (787, 259) to (227, 227)
[ INFO ] Batch size is 1
[ INFO ] Loading model to the device
[07:49:01.0427][6]I[ServiceStarter.cpp:40] Info: Waiting for HDDL Service getting ready ...
[07:49:01.0428][6]I[ServiceStarter.cpp:45] Info: Found HDDL Service is running.
[HDDLPlugin] [07:49:01.0429][6]I[HddlClient.cpp:256] Hddl api version: 2.2
[HDDLPlugin] [07:49:01.0429][6]I[HddlClient.cpp:259] Info: Create Dispatcher2.
[HDDLPlugin] [07:49:01.0432][10]I[Dispatcher2.cpp:148] Info: SenderRoutine starts.
[HDDLPlugin] [07:49:01.0432][6]I[HddlClient.cpp:270] Info: RegisterClient HDDLPlugin.
[HDDLPlugin] [07:49:01.0435][6]I[HddlClient.cpp:275] Client Id: 3
[ INFO ] Create infer request
[HDDLPlugin] [07:49:01.7235][6]I[HddlBlob.cpp:166] Info: HddlBlob initialize ion ...
[HDDLPlugin] [07:49:01.7237][6]I[HddlBlob.cpp:176] Info: HddlBlob initialize ion successfully.
[ INFO ] Start inference (10 asynchronous executions)
[ INFO ] Completed 1 async request execution
[ INFO ] Completed 2 async request execution
[ INFO ] Completed 3 async request execution
[ INFO ] Completed 4 async request execution
[ INFO ] Completed 5 async request execution
[ INFO ] Completed 6 async request execution
[ INFO ] Completed 7 async request execution
[ INFO ] Completed 8 async request execution
[ INFO ] Completed 9 async request execution
[ INFO ] Completed 10 async request execution
[ INFO ] Processing output blobs
Top 10 results:
Image /root/car.png
classid probability label
------- ----------- -----
817 0.8295898 sports car, sport car
511 0.0961304 convertible
479 0.0439453 car wheel
751 0.0101318 racer, race car, racing car
436 0.0074234 beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
656 0.0042267 minivan
586 0.0029869 half track
717 0.0018148 pickup, pickup truck
864 0.0013924 tow truck, tow car, wrecker
581 0.0006595 grille, radiator grille
[HDDLPlugin] [07:49:01.9231][11]I[Dispatcher2.cpp:212] Info: Listen Thread wake up and to exit.
[HDDLPlugin] [07:49:01.9232][6]I[Dispatcher2.cpp:81] Info: Client dispatcher exit.
[HDDLPlugin] [07:49:01.9235][6]I[HddlClient.cpp:203] Info: Hddl client unregistered.
[ INFO ] Execution successful
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```
If the pod did not successfully launch, possibly because it could not obtain the vpu HDDL
resource, it will be stuck in the `Pending` status:
```bash
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
intelvpu-demo-job-xxxxx 0/1 Pending 0 8s
```
This can be verified by checking the Events of the pod:
```bash
$ kubectl describe pod intelvpu-demo-job-xxxxx
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler 0/1 nodes are available: 1 Insufficient vpu.intel.com/hddl.
```


@@ -1,343 +0,0 @@
// Copyright 2020 Intel Corporation. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
"flag"
"fmt"
"os"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/google/gousb"
dpapi "github.com/intel/intel-device-plugins-for-kubernetes/pkg/deviceplugin"
"k8s.io/klog/v2"
pluginapi "k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1"
)
const (
// Movidius MyriadX Vendor ID.
vendorID = 0x03e7
// Device plugin settings.
namespace = "vpu.intel.com"
deviceType = "hddl"
hddlSockPath = "/var/tmp/hddl_service.sock"
hddlServicePath1 = "/var/tmp/hddl_service_ready.mutex"
hddlServicePath2 = "/var/tmp/hddl_service_alive.mutex"
ionDevNode = "/dev/ion"
// Frequency of device scans.
scanFrequency = 5 * time.Second
)
const (
vendorIDIntel = "0x8086"
xlinkDevNode = "/dev/xlnk"
hddlAlive = "/var/tmp/hddlunite_service_alive.mutex"
hddlReady = "/var/tmp/hddlunite_service_ready.mutex"
hddlStartExit = "/var/tmp/hddlunite_service_start_exit.mutex"
hddlSocketPci = "/var/tmp/hddlunite_service.sock"
sysBusPCIDevice = "/sys/bus/pci/devices"
)
var (
// Movidius MyriadX Product IDs.
productIDs = []int{0x2485, 0xf63b}
// PCI Product IDs.
productIDsPCI = []PCIPidDeviceType{
{"kmb", []string{"0x6240"}, 1},
{"tbh", []string{"0x4fc0", "0x4fc1"}, 2},
}
)
type gousbContext interface {
OpenDevices(opener func(desc *gousb.DeviceDesc) bool) ([]*gousb.Device, error)
}
type PCIPidDeviceType struct {
deviceType string
pids []string
ratio int
}
func getPciDeviceCounts(sysfsPciDevicesPath string, vendorID string, pidSearch []PCIPidDeviceType) ([]int, error) {
found := make([]int, len(pidSearch))
bdf, _ := os.ReadDir(sysfsPciDevicesPath)
// Check for all folder inside sysfs
for _, bus := range bdf {
// Extract vid and pid
vidRaw, _ := os.ReadFile(filepath.Join(sysfsPciDevicesPath, bus.Name(), "vendor"))
pidRaw, _ := os.ReadFile(filepath.Join(sysfsPciDevicesPath, bus.Name(), "device"))
vid := strings.TrimSpace(string(vidRaw))
pid := strings.TrimSpace(string(pidRaw))
// Loop for supported VPU type: tbh, kmb
for i, pciPid := range pidSearch {
// Loop for list of pid of supported device type
for _, pidVPU := range pciPid.pids {
if vid == vendorID && pid == pidVPU {
found[i]++
}
}
}
}
return found, nil
}
type devicePlugin struct {
deviceCtx interface{}
scanTicker *time.Ticker
scanDone chan bool
sharedDevNum int
}
type devicePluginUsb struct {
usbContext gousbContext
productIDs []int
vendorID int
}
type devicePluginPci struct {
sysfsPciDevicesPath string
vendorIDPCI string
productIDsPCI []PCIPidDeviceType
}
func newDevicePlugin(deviceCtx interface{}, sharedDevNum int) *devicePlugin {
if sharedDevNum < 1 {
klog.V(1).Info("The number of containers sharing the same VPU must greater than zero")
return nil
}
return &devicePlugin{
deviceCtx: deviceCtx,
sharedDevNum: sharedDevNum,
scanTicker: time.NewTicker(scanFrequency),
scanDone: make(chan bool, 1),
}
}
func (dp *devicePlugin) Scan(notifier dpapi.Notifier) error {
defer dp.scanTicker.Stop()
for {
devTree, err := dp.scan()
if err != nil {
return err
}
notifier.Notify(devTree)
select {
case <-dp.scanDone:
return nil
case <-dp.scanTicker.C:
}
}
}
func fileExists(filename string) bool {
info, err := os.Stat(filename)
if err == nil && info != nil {
return !info.IsDir()
}
// regard all other case as abnormal
return false
}
func (dp *devicePlugin) scanUsb(devTree *dpapi.DeviceTree) {
var nUsb int
// first check if HDDL sock is there
if !fileExists(hddlSockPath) {
return
}
deviceCtx, ok := dp.deviceCtx.(devicePluginUsb)
if !ok {
klog.V(4).Infof("wrong context %s", ok)
}
devs, err := deviceCtx.usbContext.OpenDevices(func(desc *gousb.DeviceDesc) bool {
thisVendor := desc.Vendor
thisProduct := desc.Product
for _, v := range deviceCtx.productIDs {
klog.V(4).Infof("checking %04x,%04x vs %s,%s", deviceCtx.vendorID, v, thisVendor.String(), thisProduct.String())
if (gousb.ID(deviceCtx.vendorID) == thisVendor) && (gousb.ID(v) == thisProduct) {
nUsb++
}
}
return false
})
defer func() {
for _, d := range devs {
d.Close()
}
}()
if err != nil {
klog.V(4).Infof("list usb device %s", err)
}
if nUsb > 0 {
for i := 0; i < nUsb*dp.sharedDevNum; i++ {
devID := fmt.Sprintf("hddl_service-%d", i)
// HDDL use a unix socket as service provider to manage /dev/myriad[n]
// Here we only expose an ION device to be allocated for HDDL client in containers
nodes := []pluginapi.DeviceSpec{
{
HostPath: ionDevNode,
ContainerPath: ionDevNode,
Permissions: "rw",
},
}
mounts := []pluginapi.Mount{
{
HostPath: hddlSockPath,
ContainerPath: hddlSockPath,
},
{
HostPath: hddlServicePath1,
ContainerPath: hddlServicePath1,
},
{
HostPath: hddlServicePath2,
ContainerPath: hddlServicePath2,
},
}
devTree.AddDevice(deviceType, devID, dpapi.NewDeviceInfo(pluginapi.Healthy, nodes, mounts, nil, nil))
}
}
}
func (dp *devicePlugin) scanPci(devTree *dpapi.DeviceTree) {
// first check if HDDL sock is there
if !fileExists(hddlSocketPci) {
return
}
deviceCtx, ok := dp.deviceCtx.(devicePluginPci)
if !ok {
klog.V(4).Infof("wrong context %s", ok)
}
// Get all PCI devices
pciFound, err := getPciDeviceCounts(deviceCtx.sysfsPciDevicesPath, deviceCtx.vendorIDPCI, deviceCtx.productIDsPCI)
if err != nil {
klog.V(4).Infof("list pci device %s", err)
}
// Mount VPU
for i := 0; i < len(pciFound); i++ {
deviceTypePci := deviceCtx.productIDsPCI[i].deviceType
deviceRatio := deviceCtx.productIDsPCI[i].ratio
// If device found
if remainder := pciFound[i] % deviceRatio; remainder == 0 {
count := pciFound[i] / deviceRatio
nodes := []pluginapi.DeviceSpec{
{
HostPath: xlinkDevNode,
ContainerPath: xlinkDevNode,
Permissions: "rw",
},
}
mounts := []pluginapi.Mount{
{
HostPath: hddlAlive,
ContainerPath: hddlAlive,
},
{
HostPath: hddlReady,
ContainerPath: hddlReady,
},
{
HostPath: hddlStartExit,
ContainerPath: hddlStartExit,
},
{
HostPath: hddlSocketPci,
ContainerPath: hddlSocketPci,
},
}
// Mount all devices
for i := 0; i < count; i++ {
devID := fmt.Sprintf("%s-device-%d", deviceTypePci, i)
// VPU pci device found and added to node
klog.V(1).Info(devID)
devTree.AddDevice(deviceTypePci, devID, dpapi.NewDeviceInfo(pluginapi.Healthy, nodes, mounts, nil, nil))
}
}
}
}
func (dp *devicePlugin) scan() (dpapi.DeviceTree, error) {
devTree := dpapi.NewDeviceTree()
switch dp.deviceCtx.(type) {
case devicePluginUsb:
dp.scanUsb(&devTree)
case devicePluginPci:
dp.scanPci(&devTree)
default:
}
return devTree, nil
}
func main() {
var sharedDevNum, scanMode int
flag.IntVar(&sharedDevNum, "shared-dev-num", 1, "number of containers sharing the same VPU device")
flag.IntVar(&scanMode, "mode", 1, "USB=1 PCI=2")
flag.Parse()
klog.V(1).Info("VPU device plugin started")
var plugin *devicePlugin
if scanMode == 1 {
// add lsusb here
ctx := gousb.NewContext()
defer ctx.Close()
verbosityLevel, err := strconv.Atoi(flag.CommandLine.Lookup("v").Value.String())
if err == nil {
// gousb (libusb) Debug levels are a 1:1 match to klog levels, just pass through.
ctx.Debug(verbosityLevel)
}
deviceCtxUsb := devicePluginUsb{usbContext: ctx, vendorID: vendorID, productIDs: productIDs}
plugin = newDevicePlugin(deviceCtxUsb, sharedDevNum)
} else if scanMode == 2 {
deviceCtxPci := devicePluginPci{sysfsPciDevicesPath: sysBusPCIDevice, vendorIDPCI: vendorIDIntel, productIDsPCI: productIDsPCI}
plugin = newDevicePlugin(deviceCtxPci, sharedDevNum)
}
if plugin == nil {
klog.Fatal("Cannot create device plugin, please check above error messages.")
}
manager := dpapi.NewManager(namespace, plugin)
manager.Run()
}


@@ -1,240 +0,0 @@
// Copyright 2019 Intel Corporation. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
"flag"
"os"
"path/filepath"
"strconv"
"testing"
"github.com/google/gousb"
dpapi "github.com/intel/intel-device-plugins-for-kubernetes/pkg/deviceplugin"
"k8s.io/klog/v2"
)
func init() {
_ = flag.Set("v", "4")
}
type testCase struct {
productIDs []int
vendorID int
}
// OpenDevices tries to inject gousb compatible fake device info.
func (t *testCase) OpenDevices(opener func(desc *gousb.DeviceDesc) bool) ([]*gousb.Device, error) {
var ret []*gousb.Device
for _, p := range t.productIDs {
desc := &gousb.DeviceDesc{
Vendor: gousb.ID(t.vendorID),
Product: gousb.ID(p),
}
if opener(desc) {
// only fake desc is enough
ret = append(ret, &gousb.Device{Desc: desc})
}
}
return ret, nil
}
func createDevice(pciBusRootDir string, bdf string, vid string, pid string) error {
err := os.MkdirAll(filepath.Join(pciBusRootDir, bdf), 0755)
if err != nil {
return err
}
vidHex := append([]byte(vid), 0xa)
pidHex := append([]byte(pid), 0xa)
err = os.WriteFile(filepath.Join(pciBusRootDir, bdf, "vendor"), vidHex, 0400)
if err != nil {
return err
}
err = os.WriteFile(filepath.Join(pciBusRootDir, bdf, "device"), pidHex, 0400)
if err != nil {
return err
}
return nil
}
func createTestPCI(folder string, testPCI []PCIPidDeviceType) error {
var busNum = 1
var devNum = 3
//Loop for all supported device type
for _, pciPid := range testPCI {
//Loop for pid number
for _, pidVPU := range pciPid.pids {
//Create intended bus number based on ratio
for i := 0; i < devNum*pciPid.ratio; i++ {
if err := createDevice(folder, strconv.Itoa(busNum), vendorIDIntel, pidVPU); err != nil {
return err
}
busNum++
}
}
}
return nil
}
// fakeNotifier implements Notifier interface.
type fakeNotifier struct {
scanDone chan bool
tree dpapi.DeviceTree
}
// Notify stops plugin Scan.
func (n *fakeNotifier) Notify(newDeviceTree dpapi.DeviceTree) {
n.tree = newDeviceTree
n.scanDone <- true
}
func TestScanPci(t *testing.T) {
var fN fakeNotifier
f, err := os.Create(hddlSocketPci)
if err != nil {
t.Error("create fake hddl file failed")
}
//create a temporary folder to create fake devices files for PCI scanning
tmpPciDir, err := os.MkdirTemp("/tmp", "fake-pci-devices")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmpPciDir)
//create supported PCI devices file
if err = createTestPCI(tmpPciDir, productIDsPCI); err != nil {
t.Fatal(err)
}
testPlugin := newDevicePlugin(devicePluginPci{sysfsPciDevicesPath: tmpPciDir, vendorIDPCI: vendorIDIntel, productIDsPCI: productIDsPCI}, 10)
if testPlugin == nil {
t.Fatal("vpu plugin test failed with newDevicePlugin().")
}
fN.scanDone = testPlugin.scanDone
err = testPlugin.Scan(&fN)
if err != nil {
t.Error("vpu plugin test failed with testPlugin.Scan()")
}
//Loop for all supported PCI device type
for _, pciPid := range productIDsPCI {
if len(fN.tree[pciPid.deviceType]) == 0 {
t.Error("vpu plugin test failed with testPlugin.Scan(): tree len is 0")
}
klog.V(4).Infof("tree len of pci %s is %d", pciPid.deviceType, len(fN.tree[pciPid.deviceType]))
}
//remove the hddl_service.sock and test with no hddl socket case
_ = f.Close()
_ = os.Remove("/var/tmp/hddl_service.sock")
testPlugin = newDevicePlugin(devicePluginPci{sysfsPciDevicesPath: tmpPciDir, vendorIDPCI: vendorIDIntel, productIDsPCI: productIDsPCI}, 10)
if testPlugin == nil {
t.Fatal("vpu plugin test failed with newDevicePlugin() in no hddl_service.sock case.")
}
fN.scanDone = testPlugin.scanDone
err = testPlugin.Scan(&fN)
if err != nil {
t.Error("vpu plugin test failed with testPlugin.Scan() in no hddl_service.sock case.")
}
if len(fN.tree[deviceType]) != 0 {
t.Error("vpu plugin test failed with testPlugin.Scan(): tree len should be 0 in no hddl_service.sock case.")
}
//test with sharedNum equals 0 case
testPlugin = newDevicePlugin(devicePluginPci{sysfsPciDevicesPath: tmpPciDir, vendorIDPCI: vendorIDIntel, productIDsPCI: productIDsPCI}, 0)
if testPlugin != nil {
t.Error("vpu plugin test fail: newDevicePlugin should fail with 0 sharedDevNum")
}
}
func TestScan(t *testing.T) {
var fN fakeNotifier
f, err := os.Create(hddlSockPath)
if err != nil {
t.Error("create fake hddl file failed")
}
//inject our fake gousbContext, just borrow vendorID and productIDs from main
tc := &testCase{
vendorID: vendorID,
}
//inject some productIDs that not match our target too
tc.productIDs = append(productIDs, 0xdead, 0xbeef)
testPlugin := newDevicePlugin(devicePluginUsb{usbContext: tc, vendorID: vendorID, productIDs: productIDs}, 10)
if testPlugin == nil {
t.Fatal("vpu plugin test failed with newDevicePlugin().")
}
fN.scanDone = testPlugin.scanDone
err = testPlugin.Scan(&fN)
if err != nil {
t.Error("vpu plugin test failed with testPlugin.Scan()")
}
if len(fN.tree[deviceType]) == 0 {
t.Error("vpu plugin test failed with testPlugin.Scan(): tree len is 0")
}
klog.V(4).Infof("tree len of usb is %d", len(fN.tree[deviceType]))
//remove the hddl_service.sock and test with no hddl socket case
_ = f.Close()
_ = os.Remove("/var/tmp/hddl_service.sock")
testPlugin = newDevicePlugin(devicePluginUsb{usbContext: tc, vendorID: vendorID, productIDs: productIDs}, 10)
if testPlugin == nil {
t.Fatal("vpu plugin test failed with newDevicePlugin() in no hddl_service.sock case.")
}
fN.scanDone = testPlugin.scanDone
err = testPlugin.Scan(&fN)
if err != nil {
t.Error("vpu plugin test failed with testPlugin.Scan() in no hddl_service.sock case.")
}
if len(fN.tree[deviceType]) != 0 {
t.Error("vpu plugin test failed with testPlugin.Scan(): tree len should be 0 in no hddl_service.sock case.")
}
//test with sharedNum equals 0 case
testPlugin = newDevicePlugin(devicePluginUsb{usbContext: tc, vendorID: vendorID, productIDs: productIDs}, 0)
if testPlugin != nil {
t.Error("vpu plugin test fail: newDevicePlugin should fail with 0 sharedDevNum")
}
}


@@ -1,22 +0,0 @@
apiVersion: batch/v1
kind: Job
metadata:
name: intelvpu-demo-job
labels:
jobgroup: intelvpu-demo
spec:
template:
metadata:
labels:
jobgroup: intelvpu-demo
spec:
restartPolicy: Never
containers:
-
name: intelvpu-demo-job-1
image: intel/ubuntu-demo-openvino:devel
imagePullPolicy: IfNotPresent
command: [ "/do_classification.sh" ]
resources:
limits:
vpu.intel.com/hddl: 1


@@ -1,44 +0,0 @@
FROM ubuntu:18.04 as builder
ARG INSTALL_DIR=/opt/intel/openvino
ARG VERSION=2020.2.130
RUN apt update
RUN apt install -y gnupg2 curl sudo
RUN curl https://apt.repos.intel.com/openvino/2020/GPG-PUB-KEY-INTEL-OPENVINO-2020 | apt-key add -
RUN echo 'deb https://apt.repos.intel.com/openvino/2020 all main' > /etc/apt/sources.list.d/intel-openvino.list
RUN apt update
RUN apt install -y --no-install-recommends \
    intel-openvino-ie-rt-hddl-ubuntu-bionic-$VERSION \
    intel-openvino-ie-samples-$VERSION \
    intel-openvino-setupvars-$VERSION \
    intel-openvino-omz-dev-$VERSION \
    intel-openvino-omz-tools-$VERSION \
    intel-openvino-model-optimizer-$VERSION \
    intel-openvino-ie-rt-cpu-ubuntu-bionic-$VERSION \
    intel-openvino-opencv-etc-$VERSION \
    intel-openvino-opencv-generic-$VERSION \
    intel-openvino-opencv-lib-ubuntu-bionic-$VERSION \
    intel-openvino-pot-$VERSION
RUN $INSTALL_DIR/install_dependencies/install_openvino_dependencies.sh
# build Inference Engine samples
RUN $INSTALL_DIR/deployment_tools/inference_engine/samples/cpp/build_samples.sh
RUN $INSTALL_DIR/deployment_tools/demo/demo_squeezenet_download_convert_run.sh
RUN cp /opt/intel/openvino/deployment_tools/demo/car.png /root && \
    cp /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/plugins.xml /root/inference_engine_samples_build/intel64/Release/lib/ && \
    cp /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libHDDLPlugin.so /root/inference_engine_samples_build/intel64/Release/lib/ && \
    cp /lib/x86_64-linux-gnu/libusb-1.0.so.0 /root/inference_engine_samples_build/intel64/Release/lib/ && \
    cp -r /opt/intel/openvino/deployment_tools/inference_engine/external/hddl /root && \
    /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh && \
    ldd /root/inference_engine_samples_build/intel64/Release/classification_sample_async" | grep opt | awk '{print $3}' | xargs -Iaaa cp aaa /root/inference_engine_samples_build/intel64/Release/lib/ && \
    /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh && \
    ldd /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libHDDLPlugin.so" | grep opt | awk '{print $3}' | xargs -Iaaa cp aaa /root/inference_engine_samples_build/intel64/Release/lib/
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y --no-install-recommends \
    libjson-c3 \
    libboost-filesystem1.65 \
    libboost-thread1.65 && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
COPY do_classification.sh /
COPY --from=builder /root/ /root/


@ -1,5 +0,0 @@
#!/bin/bash -xe
export HDDL_INSTALL_DIR=/root/hddl
export LD_LIBRARY_PATH=/root/inference_engine_samples_build/intel64/Release/lib/
/root/inference_engine_samples_build/intel64/Release/classification_sample_async -m /root/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml -i /root/car.png -d HDDL


@ -1,65 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: intel-vpu-plugin
  labels:
    app: intel-vpu-plugin
spec:
  selector:
    matchLabels:
      app: intel-vpu-plugin
  template:
    metadata:
      labels:
        app: intel-vpu-plugin
    spec:
      automountServiceAccountToken: false
      containers:
      - name: intel-vpu-plugin
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        image: intel/intel-vpu-plugin:devel
        imagePullPolicy: IfNotPresent
        securityContext:
          readOnlyRootFilesystem: true
          allowPrivilegeEscalation: false
        volumeMounts:
        - name: devion
          mountPath: /dev/ion
          readOnly: true
        - name: devfs
          mountPath: /dev/bus/usb
          readOnly: true
        - name: sysfs1
          mountPath: /sys/bus/usb
          readOnly: true
        - name: sysfs2
          mountPath: /sys/devices
          readOnly: true
        - name: tmpfs
          mountPath: /var/tmp
        - name: kubeletsockets
          mountPath: /var/lib/kubelet/device-plugins
      volumes:
      - name: devion
        hostPath:
          path: /dev/ion
          type: CharDevice
      - name: devfs
        hostPath:
          path: /dev/bus/usb
      - name: sysfs1
        hostPath:
          path: /sys/bus/usb
      - name: sysfs2
        hostPath:
          path: /sys/devices
      - name: tmpfs
        hostPath:
          path: /var/tmp
      - name: kubeletsockets
        hostPath:
          path: /var/lib/kubelet/device-plugins


@ -1,2 +0,0 @@
resources:
- intel-vpu-plugin.yaml


@ -1,4 +0,0 @@
resources:
- base
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization


@ -1,10 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: intel-vpu-plugin
spec:
  template:
    spec:
      containers:
      - name: intel-vpu-plugin
        args: ["--mode=2"]


@ -1,12 +0,0 @@
resources:
- ../../base/
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patches:
- path: volumes_patch.yaml
  target:
    group: apps
    kind: DaemonSet
    name: intel-vpu-plugin
    version: v1
- path: add_command_args.yaml


@ -1,12 +0,0 @@
- op: replace
  path: /spec/template/spec/containers/0/volumeMounts/0/mountPath
  value: /dev/xlnk
- op: replace
  path: /spec/template/spec/containers/0/volumeMounts/0/name
  value: devxlnk
- op: replace
  path: /spec/template/spec/volumes/0/name
  value: devxlnk
- op: replace
  path: /spec/template/spec/volumes/0/hostPath/path
  value: /dev/xlnk


@ -16,4 +16,3 @@ Extensions
../cmd/qat_plugin/README.md
../cmd/sgx_plugin/README.md
../cmd/sgx_admissionwebhook/README.md
../cmd/vpu_plugin/README.md

go.mod

@ -7,7 +7,6 @@ require (
github.com/go-ini/ini v1.67.0
github.com/go-logr/logr v1.2.4
github.com/google/go-cmp v0.5.9
github.com/google/gousb v1.1.2
github.com/klauspost/cpuid/v2 v2.2.5
github.com/onsi/ginkgo/v2 v2.12.1
github.com/onsi/gomega v1.28.0

go.sum

@ -193,8 +193,6 @@ github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeN
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gousb v1.1.2 h1:1BwarNB3inFTFhPgUEfah4hwOPuDz/49I0uX8XNginU=
github.com/google/gousb v1.1.2/go.mod h1:GGWUkK0gAXDzxhwrzetW592aOmkkqSGcj5KLEgmCVUg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=