Using glog.Fatal produces a stack trace, which looks quite scary for such a simple case:
$ ./fpga_plugin -mode bla
F0523 15:17:57.997937 11555 fpga_plugin.go:237] Wrong mode: bla
goroutine 1 [running]:
github.com/intel/intel-device-plugins-for-kubernetes/vendor/github.com/golang/glog.stacks(0xc420214000, 0xc42018e000, 0x42, 0x8f)
	/home/ed/go/src/github.com/intel/intel-device-plugins-for-kubernetes/vendor/github.com/golang/glog/glog.go:769 +0xcf
github.com/intel/intel-device-plugins-for-kubernetes/vendor/github.com/golang/glog.(*loggingT).output(0xbf72c0, 0xc400000003, 0xc4200bea50, 0xba3309, 0xe, 0xed, 0x0)
	/home/ed/go/src/github.com/intel/intel-device-plugins-for-kubernetes/vendor/github.com/golang/glog/glog.go:720 +0x32d
github.com/intel/intel-device-plugins-for-kubernetes/vendor/github.com/golang/glog.(*loggingT).printDepth(0xbf72c0, 0x7f4500000003, 0x1, 0xc420079ec8, 0x2, 0x2)
	/home/ed/go/src/github.com/intel/intel-device-plugins-for-kubernetes/vendor/github.com/golang/glog/glog.go:646 +0x129
github.com/intel/intel-device-plugins-for-kubernetes/vendor/github.com/golang/glog.(*loggingT).print(0xbf72c0, 0x3, 0xc420079ec8, 0x2, 0x2)
	/home/ed/go/src/github.com/intel/intel-device-plugins-for-kubernetes/vendor/github.com/golang/glog/glog.go:637 +0x5a
github.com/intel/intel-device-plugins-for-kubernetes/vendor/github.com/golang/glog.Fatal(0xc420079ec8, 0x2, 0x2)
	/home/ed/go/src/github.com/intel/intel-device-plugins-for-kubernetes/vendor/github.com/golang/glog/glog.go:1128 +0x53
main.main()
	/home/ed/go/src/github.com/intel/intel-device-plugins-for-kubernetes/cmd/fpga_plugin/fpga_plugin.go:237 +0x5fb
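For reference, output like this is produced by an ordinary glog.Fatal call in main. Below is a minimal sketch of the kind of code that triggers it; the flag handling and the valid mode names are illustrative assumptions, not the actual fpga_plugin source:

package main

import (
	"flag"

	"github.com/golang/glog"
)

func main() {
	// The operation mode comes from a command line flag; "af" and
	// "region" are placeholder names for the valid modes.
	mode := flag.String("mode", "af", "plugin mode")
	flag.Parse()

	if *mode != "af" && *mode != "region" {
		// glog.Fatal logs the message, dumps the stack traces of all
		// running goroutines and then terminates the process.
		glog.Fatal("Wrong mode: ", *mode)
	}
}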
Intel GPU Device Plugin for Kubernetes
QuickStart: build and run the plugin on the host
Prerequisites
- Computer with supported Intel GPU device running Linux
- Fully configured Kubernetes node joined to the cluster
- Working Go environment
- Read access to the Intel device plugins git repository
Get source code
$ mkdir -p $GOPATH/src/github.com/intel/
$ cd $GOPATH/src/github.com/intel/
$ git clone https://github.com/intel/intel-device-plugins-for-kubernetes.git
Build device plugin
$ cd $GOPATH/src/github.com/intel/intel-device-plugins-for-kubernetes
$ make
The resulting plugin executable is cmd/gpu_plugin/gpu_plugin.
Configure kubelet
These instructions assume that the target Kubernetes cluster has been installed and
configured with the kubeadm toolkit from the official packages at http://kubernetes.io.
The latest tested version of Kubernetes is 1.9.
- Add DevicePlugins=true to the kubelet command line option --feature-gates
This can be done by creating a systemd drop-in for the kubelet service in /etc/systemd/system/kubelet.service.d/ with the following content:
[Service]
Environment="KUBELET_EXTRA_ARGS=--feature-gates='DevicePlugins=true,HugePages=true'"
Check the man page for systemd.unit for more details on systemd drop-ins.
- Reload systemd and restart the kubelet service with the new options
$ systemctl daemon-reload
$ systemctl restart kubelet
- Make sure the kubelet socket exists in /var/lib/kubelet/device-plugins/
$ ls /var/lib/kubelet/device-plugins/kubelet.sock
/var/lib/kubelet/device-plugins/kubelet.sock
Run the plugin as administrator
$ sudo $GOPATH/src/github.com/intel/intel-device-plugins-for-kubernetes/cmd/gpu_plugin/gpu_plugin
GPU device plugin started
Adding '/dev/dri/card0' to GPU 'card0'
Adding '/dev/dri/controlD64' to GPU 'card0'
Adding '/dev/dri/renderD128' to GPU 'card0'
device-plugin start server at: /var/lib/kubelet/device-plugins/intelGPU.sock
device-plugin registered
device-plugin: ListAndWatch start
ListAndWatch: send devices &ListAndWatchResponse{Devices:[&Device{ID:card0,Health:Healthy,}],}
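The device list in this output comes from the plugin's ListAndWatch gRPC handler. A rough sketch of such a handler against the Kubernetes device plugin API follows; the import path, package layout and surrounding plumbing are assumptions based on the v1beta1 API, not the actual gpu_plugin source:

package deviceplugin

import (
	// The import path of the device plugin API depends on the
	// Kubernetes version; v1beta1 is assumed here.
	pluginapi "k8s.io/kubernetes/pkg/kubelet/apis/deviceplugin/v1beta1"
)

// gpuDevicePlugin holds the GPU devices discovered under /dev/dri.
type gpuDevicePlugin struct {
	devices []*pluginapi.Device
}

// newGPUDevicePlugin builds a plugin advertising a single healthy card,
// matching the 'card0' device seen in the log above.
func newGPUDevicePlugin() *gpuDevicePlugin {
	return &gpuDevicePlugin{
		devices: []*pluginapi.Device{
			{ID: "card0", Health: pluginapi.Healthy},
		},
	}
}

// ListAndWatch sends the current device list to the kubelet and keeps the
// stream open so that health updates can be pushed later.
func (p *gpuDevicePlugin) ListAndWatch(e *pluginapi.Empty, s pluginapi.DevicePlugin_ListAndWatchServer) error {
	if err := s.Send(&pluginapi.ListAndWatchResponse{Devices: p.devices}); err != nil {
		return err
	}
	// A real plugin would re-scan the devices and push updated lists
	// here; this sketch only keeps the stream open.
	select {}
}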
Check if the plugin is registered on the master; the resource should appear under both the node's Capacity and Allocatable:
$ kubectl describe node <node name> | grep intel.com/gpu
intel.com/gpu: 1
intel.com/gpu: 1
There are more sophisticated ways to run device plugins. Please consider reading the Device plugin deployment documentation to learn how to do it.
Testing
- Build a Docker image with the beignet unit tests:
$ cd demo
$ ./build-image.sh ubuntu-demo-opencl
This command will produce a Docker image named ubuntu-demo-opencl.
- Create a pod running the unit tests off the local Docker image:
$ kubectl apply -f demo/intelgpu_pod.yaml
- Observe the pod's logs:
$ kubectl logs intelgpu-demo-pod