NFD v0.14+ doesn't support binary NFD hooks by default, so label
creation needs to be moved away from the GPU nfdhook.
Move extended resource label creation to the plugin, and drop labels that
were already marked deprecated (platform_gen, media_version, etc.).
Drop the init container from the deployment files and the operator. It is
still possible to use an init container, but the default deployments do not
support it.
Signed-off-by: Tuomas Katila <tuomas.katila@intel.com>
With shared-dev-num and multiple i915 resources in the request,
try to find as many individual GPUs as possible to expose to the container.
Previously, with multiple i915 resources, it was typical to
get only one GPU device in the container.
Co-authored-by: Eero Tamminen <eero.t.tamminen@intel.com>
Signed-off-by: Tuomas Katila <tuomas.katila@intel.com>
The GPU plugin code assumes that container paths match host paths, and
the container runtime prevents creating fake files under the real paths.
When non-standard paths are used, devices can be faked for scalability
testing.
Note: If one wants to run both the normal GPU plugin and the faked one in
the same cluster, all nodes providing fake "i915" resources should be
labeled differently from the ones with the real GPU plugin + devices, so
that real GPU workloads can be limited to the correct nodes with a suitable
nodeSelector.
Signed-off-by: Eero Tamminen <eero.t.tamminen@intel.com>
Move the reallocation logic to GetPreferredAllocation and simplify
Allocate to use the kubelet's device IDs.
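For illustration, a minimal sketch of the split, assuming the kubelet device
plugin v1beta1 API; the devicePlugin receiver and the pickDevices() policy
hook are hypothetical stand-ins for the plugin's own code.

    package sketch

    import (
        pluginapi "k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1"
    )

    type devicePlugin struct{}

    // pickDevices is a hypothetical policy hook that picks "size" device IDs
    // out of the ones kubelet reports as available.
    func pickDevices(available []string, size int) []string {
        if size > len(available) {
            size = len(available)
        }
        return available[:size]
    }

    // GetPreferredAllocation does the device selection up front, so that
    // Allocate() can simply trust the device IDs kubelet passes to it.
    func (dp *devicePlugin) GetPreferredAllocation(
        rqt *pluginapi.PreferredAllocationRequest) (*pluginapi.PreferredAllocationResponse, error) {
        response := &pluginapi.PreferredAllocationResponse{}

        for _, req := range rqt.ContainerRequests {
            response.ContainerResponses = append(response.ContainerResponses,
                &pluginapi.ContainerPreferredAllocationResponse{
                    DeviceIDs: pickDevices(req.AvailableDeviceIDs, int(req.AllocationSize)),
                })
        }

        return response, nil
    }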
Signed-off-by: Tuomas Katila <tuomas.katila@intel.com>
1. Implement PreferredAllocator interface.
2. Provide 3 preferred allocation policies: balancedPolicy, packedPolicy and nonePolicy.
3. Provide the command-line interface, -allocation-policy balanced/packed/none, to select which preferred allocation policy to use (see the sketch below).
4. Add operator support.
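For illustration, a rough sketch of how the flag could map to a policy
function; the wiring and helper names are hypothetical, and the real
balanced policy's spreading logic is omitted.

    package main

    import (
        "flag"
        "fmt"
        "log"
        "sort"
    )

    // allocationPolicy picks "size" preferred device IDs out of the available ones.
    type allocationPolicy func(available []string, size int) []string

    // nonePolicy keeps kubelet's ordering as-is.
    func nonePolicy(available []string, size int) []string {
        return available[:size]
    }

    // packedPolicy fills up the lowest device IDs first.
    func packedPolicy(available []string, size int) []string {
        sorted := append([]string{}, available...)
        sort.Strings(sorted)

        return sorted[:size]
    }

    // balancedPolicy would spread allocations evenly across GPUs; the actual
    // spreading logic is omitted in this sketch.
    func balancedPolicy(available []string, size int) []string {
        return available[:size]
    }

    func policyFromName(name string) (allocationPolicy, error) {
        switch name {
        case "balanced":
            return balancedPolicy, nil
        case "packed":
            return packedPolicy, nil
        case "none":
            return nonePolicy, nil
        }

        return nil, fmt.Errorf("unknown allocation policy %q", name)
    }

    func main() {
        policyName := flag.String("allocation-policy", "none",
            "preferred allocation policy: balanced, packed or none")
        flag.Parse()

        policy, err := policyFromName(*policyName)
        if err != nil {
            log.Fatal(err)
        }

        fmt.Println(policy([]string{"card0-0", "card0-1", "card1-0"}, 2))
    }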
Co-authored-by: Mikko Ylinen <mikko.ylinen@intel.com>
Add govet-fieldalignment to .golangci.yml
Fix the errors that come from adding govet-fieldalignment
- by reordering the fields of structs (see the example below)
- by adding nolint:govet annotations
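A made-up example of the kind of change fieldalignment asks for (not code
from this repo): ordering fields from largest to smallest removes padding,
and where reordering is undesirable the check is silenced with a nolint
comment instead.

    package sketch

    // Before: flagged by fieldalignment; 24 bytes on 64-bit due to padding.
    type deviceBefore struct {
        shared bool  // 1 byte + 7 bytes of padding
        nodes  int64 // 8 bytes
        count  int32 // 4 bytes + 4 bytes of trailing padding
    }

    // After: fields ordered largest first; 16 bytes, no warning.
    type deviceAfter struct {
        nodes  int64 // 8 bytes
        count  int32 // 4 bytes
        shared bool  // 1 byte + 3 bytes of trailing padding
    }

    // Where reordering would hurt readability, the warning is silenced:
    //
    //nolint:govet // field order kept for readability
    type keepOrder struct {
        enabled bool
        id      int64
    }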
Signed-off-by: Hyeongju Johannes Lee <hyeongju.lee@intel.com>
Update tool versions
Fix the errors and warnings originating from the update:
- Correct the type deviceInfo (-> DeviceInfo) to make it public
- Fix gpu_plugin.go and vpu_plugin_test.go where stylecheck errors occur
- Fix deprecation warnings
- Rename the type 'PatcherManager' to 'Manager' to fix exported-name errors
- Rename the type 'SgxMutator' to 'Mutator' to fix exported-name errors
Signed-off-by: Hyeongju Johannes Lee <hyeongju.lee@intel.com>
Go 1.16 release notes announced the deprecation of io/ioutil [1]. It's easy
for us to move to what was recommended, so just do it.
[1] https://golang.org/doc/go1.16#ioutil
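As an example of the recommended replacements from the Go 1.16 release
notes (the file path below is just illustrative):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Before: data, err := ioutil.ReadFile("/etc/hostname")
        // After: the same helper now lives in the os package.
        data, err := os.ReadFile("/etc/hostname")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }

        fmt.Print(string(data))

        // Other common moves:
        //   ioutil.WriteFile        -> os.WriteFile
        //   ioutil.ReadDir          -> os.ReadDir (returns []os.DirEntry)
        //   ioutil.TempDir/TempFile -> os.MkdirTemp/os.CreateTemp
        //   ioutil.ReadAll/Discard  -> io.ReadAll/io.Discard
    }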
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
To help in:
* adding more CLI options in the next and later commits, and
* replacing the magic newDevicePlugin() input parameters with
explicitly named ones (see the sketch below)
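A hypothetical sketch of the idea (names are illustrative, not the plugin's
exact code): call sites pass a named options struct instead of bare
positional values, so new CLI options can be added without making the
constructor call unreadable.

    package sketch

    // cliOptions collects the command-line tunables in one named place.
    type cliOptions struct {
        sharedDevNum int
        // options added in later commits go here
    }

    type devicePlugin struct {
        sysfsDir string
        devfsDir string
        options  cliOptions
    }

    // Before: newDevicePlugin(sysfsDrmDir, devfsDriDir, 1) - what does the 1 mean?
    // After:  newDevicePlugin("/sys/class/drm", "/dev/dri", cliOptions{sharedDevNum: 1})
    func newDevicePlugin(sysfsDir, devfsDir string, options cliOptions) *devicePlugin {
        return &devicePlugin{
            sysfsDir: sysfsDir,
            devfsDir: devfsDir,
            options:  options,
        }
    }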
NOTE: this has an impact only on GPUs which are virtualized with SR-IOV.
Access to physical devices (PFs) is disabled for the "i915" resource when
they have virtual devices (VFs) configured.
This is because:
* GPU resources are expected to be evenly split between VFs in such
configurations
* But the PF resource amount is expected to differ from the VFs', and
the PF typically retains only enough resources (just a few MB of RAM)
to be able to provide GPU metrics that are not available from the VFs
* Neither the current GPU plugin, nor Kubernetes scheduling in
general, has proper support for heterogeneous GPUs (= capability
based scheduling)
Therefore the "i915" resource needs to be limited to GPU devices with a
homogeneous amount of resources, which in SR-IOV configurations is
expected to be the case only with VFs (when such are present).
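For illustration, a minimal sketch of the PF check, assuming the standard
sysfs SR-IOV layout under /sys/class/drm/cardX/device; this is not the
plugin's exact code.

    package sketch

    import (
        "os"
        "path/filepath"
        "strconv"
        "strings"
    )

    // pfHasConfiguredVFs returns true if the card is a physical function with
    // one or more virtual functions enabled, i.e. it should not be advertised
    // as an "i915" resource.
    func pfHasConfiguredVFs(cardPath string) bool {
        data, err := os.ReadFile(filepath.Join(cardPath, "device", "sriov_numvfs"))
        if err != nil {
            // No sriov_numvfs file: not an SR-IOV PF at all.
            return false
        }

        numVFs, err := strconv.Atoi(strings.TrimSpace(string(data)))

        return err == nil && numVFs > 0
    }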
Which mounts all (Intel) GPU devices to the requesting container.
This is needed e.g. to get GPU metrics from the node. The requesting pod
does not know how many GPUs are on the node it gets assigned to, so
there needs to be a means to request them all.
(The only alternative to the new resource would be requesting Privileged
mode, which is clearly worse, as that would also grant the pod access to
all other devices and capabilities.)
This commit also:
* Adds "i915_monitoring" resource testing to: go test -v -run Scan
* Splits the GPU plugin tests' mock file system setup into a separate
createTestFiles() function, because otherwise TestScan() does not
pass the project's golangci-lint complexity limits
For every created device info, a new topology scan is performed in
the filesystem. The shared dev count was implemented so that a new
device info was created for each shared device, which resulted in the
topology scan happening as many times per Scan round as there were
shared devices.
This fixes the issue by making the device info shared among the
shared devices.
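A sketch of the idea with hypothetical types and helpers: run the
(expensive) topology scan once per real device, and reuse the resulting
device info for every shared copy.

    package sketch

    import "fmt"

    // deviceInfo holds the result of the filesystem topology scan.
    type deviceInfo struct {
        nodes []string
    }

    // scanTopology stands in for the real, expensive filesystem scan.
    func scanTopology(cardName string) []string {
        return []string{"/dev/dri/" + cardName}
    }

    // makeSharedDevices creates sharedDevNum entries for one real device.
    // Before the fix the scan was effectively repeated for each shared copy;
    // now it runs once and the same info is reused.
    func makeSharedDevices(cardName string, sharedDevNum int) map[string]deviceInfo {
        devices := make(map[string]deviceInfo, sharedDevNum)
        info := deviceInfo{nodes: scanTopology(cardName)}

        for i := 0; i < sharedDevNum; i++ {
            devices[fmt.Sprintf("%s-%d", cardName, i)] = info
        }

        return devices
    }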
Signed-off-by: Ukri Niemimuukko <ukri.niemimuukko@intel.com>
Move from fmt to klog for logging and debugging.
Also add an extra info-level message noting when we find
new devices.
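For illustration (the import path and message wording here are assumptions,
not the commit's exact code):

    package sketch

    import "k8s.io/klog/v2"

    func reportDevice(devPath string) {
        // Before: fmt.Printf("DEBUG: found device %s\n", devPath)
        // After: leveled logging via klog, enabled with e.g. -v=4.
        klog.V(4).Infof("found device %s", devPath)

        // The extra info-level message when a new device shows up:
        klog.Infof("GPU device %s added to the device list", devPath)
    }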
Signed-off-by: Graham Whaley <graham.whaley@intel.com>
If we fail to scan for GPU devices (note: that is potentially
different from not finding any devices during a scan), then
warn about it and go around the poll loop again. Do not treat
it as a fatal error, or we might end up in a re-launch/deploy
death loop...
Of course, getting a warning in your logs every 5s could also
be annoying, but it is somewhat 'less fatal'.
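A simplified sketch of that loop behaviour; scanDevices(), the notify
callback and the 5 second period are illustrative.

    package sketch

    import (
        "time"

        "k8s.io/klog/v2"
    )

    // scanDevices stands in for the real sysfs/devfs scan.
    func scanDevices() ([]string, error) {
        return nil, nil
    }

    // scanLoop keeps polling for devices. A failed scan is only warned about,
    // because exiting would just make the daemonset pod crash-loop.
    func scanLoop(notify func([]string)) {
        for {
            devices, err := scanDevices()
            if err != nil {
                klog.Warningf("device scan failed: %v", err)
            } else {
                notify(devices)
            }

            time.Sleep(5 * time.Second)
        }
    }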
Fixes: #260
Fixes: #230
Signed-off-by: Graham Whaley <graham.whaley@intel.com>
Let the user know if the plugin can't find any Intel GPUs. It might
otherwise be hard to realize that the plugin runs on a host which
doesn't have any Intel GPUs.
Also make the code less nested for better readability.
The DRM driver of Intel i915 GPUs allows sharing one device
between many containers.
Make it possible to use the same device from different containers.
The exact number of containers sharing the same device can be limited
with the new option -shared-dev-num, which is set to 1 by default.
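A sketch of the option and its effect, with hypothetical helper names:
roughly, each physical device is announced to kubelet -shared-dev-num
times under distinct IDs, so up to that many containers can be scheduled
against it.

    package main

    import (
        "flag"
        "fmt"
    )

    // sharedDeviceIDs announces each card sharedDevNum times under unique IDs.
    func sharedDeviceIDs(cards []string, sharedDevNum int) []string {
        ids := make([]string, 0, len(cards)*sharedDevNum)

        for _, card := range cards {
            for i := 0; i < sharedDevNum; i++ {
                ids = append(ids, fmt.Sprintf("%s-%d", card, i))
            }
        }

        return ids
    }

    func main() {
        sharedDevNum := flag.Int("shared-dev-num", 1,
            "number of containers that can share the same GPU device")
        flag.Parse()

        fmt.Println(sharedDeviceIDs([]string{"card0", "card1"}, *sharedDevNum))
    }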
Closes: #53
Every device plugin is supposed to implement the PluginInterfaceServer
interface to be exposed as a gRPC service. But this functionality is
common to all our device plugins and can be hidden in a Manager
which manages all gRPC servers dynamically.
The only mandatory functionality that needs to be provided by a device
plugin, and which differentiates one plugin from another, is the code
scanning the host for the devices present on it.
Refactor the internal deviceplugin package to accept only
one mandatory method implementation from device plugins - Scan().
In addition to that, a device plugin can optionally implement a
PostAllocate() method which mutates the responses returned by the
PluginInterfaceServer.Allocate() method.
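A simplified sketch of the narrowed contract; the interface and type names
in the internal deviceplugin package may differ in detail.

    package sketch

    import (
        pluginapi "k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1"
    )

    // deviceTree maps resource name -> device ID -> device definition.
    type deviceTree map[string]map[string]pluginapi.Device

    // notifier is handed to Scan() so a plugin can report the devices it finds.
    type notifier interface {
        Notify(deviceTree)
    }

    // scanner is the only mandatory part of a device plugin: keep scanning the
    // host and notify the manager about the devices present on it.
    type scanner interface {
        Scan(notifier) error
    }

    // postAllocator is optional: mutate the responses returned by
    // PluginInterfaceServer.Allocate() before kubelet sees them.
    type postAllocator interface {
        PostAllocate(*pluginapi.AllocateResponse) error
    }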
Also, to narrow the gap between these device plugins and
kubevirt's collection, the naming scheme for resources has been changed.
Now device plugins provide a namespace for the device types they
operate with. E.g. for resources in the format "color.example.com/<color>"
the namespace would be "color.example.com". So, the resource name
"intel.com/fpga-region-fffffff" becomes "fpga.intel.com/region-fffffff".
The DRM control node is deprecated and has been removed by the latest
kernels. This skips any DRM control node found on the host.
v2: Fix lint error
v3: Fix regex string
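A minimal sketch of the filtering, assuming /dev/dri entries named like
"card0", "renderD128" and, on old kernels, "controlD64"; the regex and
helper name are illustrative.

    package sketch

    import (
        "fmt"
        "os"
        "regexp"
    )

    var controlNodeReg = regexp.MustCompile(`^controlD[0-9]+$`)

    // listUsableNodes lists /dev/dri entries, skipping deprecated control nodes.
    func listUsableNodes(driDir string) ([]string, error) {
        entries, err := os.ReadDir(driDir)
        if err != nil {
            return nil, fmt.Errorf("can't read %s: %w", driDir, err)
        }

        nodes := make([]string, 0, len(entries))

        for _, entry := range entries {
            if controlNodeReg.MatchString(entry.Name()) {
                // DRM control nodes are deprecated; skip them.
                continue
            }
            nodes = append(nodes, entry.Name())
        }

        return nodes, nil
    }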
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>