Mirror of https://github.com/kubevirt/containerized-data-importer.git (synced 2025-06-03 06:30:22 +00:00)
[release-v1.57] Backport main commits to 1.57 release branch v2 (#2785)
* dataimportcron: Pass dynamic credential support label (#2760)

  * dataimportcron: code change: Use better matchers in tests

    Signed-off-by: Andrej Krejcir <akrejcir@redhat.com>

  * dataimportcron: Pass dynamic credential support label

    The label is passed from DataImportCron to DataVolume and DataSource.

    Signed-off-by: Andrej Krejcir <akrejcir@redhat.com>

  ---------

  Signed-off-by: Andrej Krejcir <akrejcir@redhat.com>

* Add DataImportCron snapshot sources docs (#2747)

  Signed-off-by: Alex Kalenyuk <akalenyu@redhat.com>

* add akalenyu as approver, some others as reviewers (#2766)

  Signed-off-by: Michael Henriksen <mhenriks@redhat.com>

* Run `make rpm-deps` (#2741)

  * Run make rpm-deps

    Signed-off-by: Maya Rashish <mrashish@redhat.com>

  * Avoid overlayfs error message by using vfs driver

    Signed-off-by: Maya Rashish <mrashish@redhat.com>

  ---------

  Signed-off-by: Maya Rashish <mrashish@redhat.com>

* Fix Destructive test lane failure - missing pod following recreate of CDI (#2744)

  Signed-off-by: Alex Kalenyuk <akalenyu@redhat.com>

* [WIP] Handle nil ptr in dataimportcron controller (#2769)

  Signed-off-by: Alex Kalenyuk <akalenyu@redhat.com>

* Revert some gomega error checking that produce confusing output (#2772)

  One of these tests flakes, but the error is hard to debug because gomega will yell about `Unexpected non-nil/non-zero argument at index 0`
  Instead of showing the error. Apparently this is intended:
  https://github.com/onsi/gomega/pull/480/files#diff-e696deff1a5be83ad03053b772926cb325cede3b33222fa76c2bb1fcf2efd809R186-R190

  Signed-off-by: Alex Kalenyuk <akalenyu@redhat.com>

* Run bazelisk run //robots/cmd/uploader:uploader -- -workspace /home/prow/go/src/github.com/kubevirt/project-infra/../containerized-data-importer/WORKSPACE -dry-run=false (#2770)

  Signed-off-by: kubevirt-bot <kubevirtbot@redhat.com>

* [CI] Add metrics name linter (#2774)

  Signed-off-by: Aviv Litman <alitman@redhat.com>

---------

Signed-off-by: Andrej Krejcir <akrejcir@redhat.com>
Signed-off-by: Alex Kalenyuk <akalenyu@redhat.com>
Signed-off-by: Michael Henriksen <mhenriks@redhat.com>
Signed-off-by: Maya Rashish <mrashish@redhat.com>
Signed-off-by: kubevirt-bot <kubevirtbot@redhat.com>
Signed-off-by: Aviv Litman <alitman@redhat.com>
Co-authored-by: Andrej Krejcir <akrejcir@gmail.com>
Co-authored-by: Michael Henriksen <mhenriks@redhat.com>
Co-authored-by: Maya Rashish <mrashish@redhat.com>
Co-authored-by: kubevirt-bot <kubevirtbot@redhat.com>
Co-authored-by: Aviv Litman <64130977+avlitman@users.noreply.github.com>
This commit is contained in: parent b80ff58f9f, commit 50efcee3c2
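The first backported change passes the dynamic credential support label from a DataImportCron to the DataVolume and DataSource it manages. A minimal sketch of how a user might set that label; the name, schedule, and the elided DataVolume template are illustrative placeholders, not taken from this commit:

```yaml
# Illustrative only: a DataImportCron carrying the dynamic credential support
# label. Per this backport, the controller copies the label onto the
# DataVolumes it creates and onto the managed DataSource.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataImportCron
metadata:
  name: fedora-image-cron
  labels:
    kubevirt.io/dynamic-credentials-support: "true"
spec:
  schedule: "0 */12 * * *"
  managedDataSource: fedora
  garbageCollect: Outdated
  importsToKeep: 3
  # template: <DataVolume template with the registry source, omitted here>
```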
Makefile (8 changes)
@@ -21,7 +21,8 @@
 goveralls \
 release-description \
 bazel-generate bazel-build bazel-build-images bazel-push-images \
-fossa
+fossa \
+lint-metrics

 DOCKER?=1
 ifeq (${DOCKER}, 1)
@@ -79,7 +80,7 @@ test-functional: build-functest
 ./hack/build/run-functional-tests.sh ${WHAT} "${TEST_ARGS}"

 # test-lint runs gofmt and golint tests against src files
-test-lint:
+test-lint: lint-metrics
 ${DO_BAZ} "./hack/build/run-lint-checks.sh"
 "./hack/ci/language.sh"

@@ -166,6 +167,9 @@ build-docgen:
 fossa:
 ${DO_BAZ} "FOSSA_TOKEN_FILE=${FOSSA_TOKEN_FILE} PULL_BASE_REF=${PULL_BASE_REF} CI=${CI} ./hack/fossa.sh"

+lint-metrics:
+./hack/ci/prom_metric_linter.sh --operator-name="kubevirt" --sub-operator-name="cdi"
+
 help:
 @echo "Usage: make [Targets ...]"
 @echo " all "
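With this Makefile change, the metrics name linter can be run on its own or as part of the lint target; a short sketch assuming the usual CDI development environment (Go toolchain plus a container runtime for the linter image):

```sh
make lint-metrics   # run only the new metrics name linter
make test-lint      # gofmt/golint checks, which now also depend on lint-metrics
```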
@@ -5,8 +5,13 @@ aliases:
 - aglitke
 - awels
 - mhenriks
+- akalenyu
 code-reviewers:
 - aglitke
 - awels
 - mhenriks
 - maya-r
+- akalenyu
+- arnongilboa
+- ShellyKa13
+- alromeros
@@ -1,6 +1,6 @@
 # Automated OS image import, poll and update

-CDI supports automating OS image import, poll and update, keeping OS images up-to-date according to the given `schedule`. On the first time a `DataImportCron` is scheduled, the controller will import the source image. On any following scheduled poll, if the source image digest (sha256) has updated, the controller will import it to a new `PVC` in the `DataImportCron` namespace, and update the managed `DataSource` to point that `PVC`. A garbage collector (`garbageCollect: Outdated` enabled by default) is responsible to keep the last `importsToKeep` (3 by default) imported `PVCs` per `DataImportCron`, and delete older ones.
+CDI supports automating OS image import, poll and update, keeping OS images up-to-date according to the given `schedule`. On the first time a `DataImportCron` is scheduled, the controller will import the source image. On any following scheduled poll, if the source image digest (sha256) has updated, the controller will import it to a new [*source*](#dataimportcron-source-formats) in the `DataImportCron` namespace, and update the managed `DataSource` to point to the newly created source. A garbage collector (`garbageCollect: Outdated` enabled by default) is responsible to keep the last `importsToKeep` (3 by default) imported sources per `DataImportCron`, and delete older ones.

 See design doc [here](https://github.com/kubevirt/community/blob/main/design-proposals/golden-image-delivery-and-update-pipeline.md)

@@ -29,7 +29,7 @@ spec:
   managedDataSource: fedora
 ```

-A `DataVolume` can use a `sourceRef` referring to a `DataSource`, instead of the `source`, so whenever created it will use the updated referred `PVC` similarly to a `source.PVC`.
+A `DataVolume` can use a `sourceRef` referring to a `DataSource`, instead of the `source`, so whenever created it will use the latest imported source similarly to specifying `dv.spec.source`.

 ```yaml
 apiVersion: cdi.kubevirt.io/v1beta1
@@ -84,4 +84,28 @@ Or on CRC:
 * oc import-image cirros-is -n openshift-virtualization-os-images --from=kubevirt/cirros-container-disk-demo --scheduled --confirm
 * oc set image-lookup cirros-is -n openshift-virtualization-os-images

-More information on image streams is available [here](https://docs.openshift.com/container-platform/4.8/openshift_images/image-streams-manage.html) and [here](https://www.tutorialworks.com/openshift-imagestreams).
+More information on image streams is available [here](https://docs.openshift.com/container-platform/4.13/openshift_images/image-streams-manage.html) and [here](https://www.tutorialworks.com/openshift-imagestreams).
+
+## DataImportCron source formats
+
+* PersistentVolumeClaim
+* VolumeSnapshot
+
+DataImportCron was originally designed to only maintain PVC sources.
+However, for certain storage types, we know that snapshot sources scale better.
+Some details and examples can be found in [clone-from-volumesnapshot-source](./clone-from-volumesnapshot-source.md).
+
+We keep this provisioner-specific information on the [StorageProfile](./storageprofile.md) object for each provisioner at the `dataImportCronSourceFormat` field (possible values are `snapshot`/`pvc`), which tells the DataImportCron which type of source is preferred for the provisioner.
+
+Some provisioners like ceph rbd are opted in automatically.
+To opt in manually, one must edit the `StorageProfile`:
+```yaml
+apiVersion: cdi.kubevirt.io/v1beta1
+kind: StorageProfile
+metadata:
+  ...
+spec:
+  dataImportCronSourceFormat: snapshot
+```
+
+To ensure a smooth transition, existing DataImportCrons can be switched to maintaining snapshots instead of PVCs by updating their corresponding storage profiles.
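To complement the doc change above, a minimal sketch of a DataVolume consuming the cron-managed DataSource through `sourceRef`; the resource name, namespace, and size below are illustrative, not taken from this commit:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: fedora-clone
spec:
  sourceRef:
    kind: DataSource
    name: fedora
    namespace: golden-images
  storage:
    resources:
      requests:
        storage: 10Gi
```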
go.mod (1 change)
@@ -20,6 +20,7 @@ require (
 github.com/klauspost/compress v1.14.2
 github.com/kubernetes-csi/external-snapshotter/client/v6 v6.0.1
 github.com/kubernetes-csi/lib-volume-populator v1.2.0
+github.com/kubevirt/monitoring/pkg/metrics/parser v0.0.0-20230627123556-81a891d4462a
 github.com/onsi/ginkgo v1.16.5
 github.com/onsi/gomega v1.24.1
 github.com/openshift/api v0.0.0-20230406152840-ce21e3fe5da2
go.sum (2 changes)
@@ -846,6 +846,8 @@ github.com/kubernetes-csi/external-snapshotter/client/v6 v6.0.1 h1:OqBS3UAo3eGWp
 github.com/kubernetes-csi/external-snapshotter/client/v6 v6.0.1/go.mod h1:tnHiLn3P10N95fjn7O40QH5ovN0EFGAxqdTpUMrX6bU=
 github.com/kubernetes-csi/lib-volume-populator v1.2.0 h1:7ooY7P/5xEMNKQS1NwcqipUF1FMD2uGBjp13UGQmGpY=
 github.com/kubernetes-csi/lib-volume-populator v1.2.0/go.mod h1:euAJwBP1NcKCm4ifQLmPgwJvlakPjGLDbbSvchlUr3I=
+github.com/kubevirt/monitoring/pkg/metrics/parser v0.0.0-20230627123556-81a891d4462a h1:cdX+oxWw1lJDS3EchP+7Oz1XbErk4r7ffVJu1b1MKgI=
+github.com/kubevirt/monitoring/pkg/metrics/parser v0.0.0-20230627123556-81a891d4462a/go.mod h1:qGj2agzgwQ27nYhP3xhLs+IBzE5+ALNUg8bDfMcwPqo=
 github.com/kylelemons/godebug v0.0.0-20160406211939-eadb3ce320cb/go.mod h1:B69LEHPfb2qLo0BaaOLcbitczOKLWTsrBG9LczfCD4k=
 github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348/go.mod h1:B69LEHPfb2qLo0BaaOLcbitczOKLWTsrBG9LczfCD4k=
 github.com/leanovate/gopter v0.2.4/go.mod h1:gNcbPWNEWRe4lm+bycKqxUYoH5uoVje5SkOJ3uoLer8=
hack/ci/prom_metric_linter.sh (new executable file, 68 lines)
@@ -0,0 +1,68 @@
#!/usr/bin/env bash

#
# This file is part of the KubeVirt project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Copyright 2023 Red Hat, Inc.
#
#
set -e

linter_image_tag="v0.0.1"

PROJECT_ROOT="$(readlink -e "$(dirname "${BASH_SOURCE[0]}")"/../../)"
export METRICS_COLLECTOR_PATH="${METRICS_COLLECTOR_PATH:-${PROJECT_ROOT}/tools/prom-metrics-collector}"

if [[ ! -d "$METRICS_COLLECTOR_PATH" ]]; then
    echo "Invalid METRICS_COLLECTOR_PATH: $METRICS_COLLECTOR_PATH is not a valid directory path"
    exit 1
fi

# Parse command-line arguments
while [[ $# -gt 0 ]]; do
    case "$1" in
    --operator-name=*)
        operator_name="${1#*=}"
        shift
        ;;
    --sub-operator-name=*)
        sub_operator_name="${1#*=}"
        shift
        ;;
    *)
        echo "Invalid argument: $1"
        exit 1
        ;;
    esac
done

# Get the metrics list
go build -o _out/prom-metrics-collector "$METRICS_COLLECTOR_PATH/..."
json_output=$(_out/prom-metrics-collector 2>/dev/null)

# Select container runtime
source "${PROJECT_ROOT}"/hack/build/common.sh

# Run the linter by using the prom-metrics-linter Docker container
errors=$($CDI_CRI run -i "quay.io/kubevirt/prom-metrics-linter:$linter_image_tag" \
    --metric-families="$json_output" \
    --operator-name="$operator_name" \
    --sub-operator-name="$sub_operator_name" 2>/dev/null)

# Check if there were any errors, if yes print and fail
if [[ $errors != "" ]]; then
    echo "$errors"
    exit 1
fi
@@ -291,6 +291,9 @@ const (
 // LabelDefaultPreferenceKind provides a default kind of either VirtualMachineClusterPreference or VirtualMachinePreference
 LabelDefaultPreferenceKind = "instancetype.kubevirt.io/default-preference-kind"

+// LabelDynamicCredentialSupport specifies if the OS supports updating credentials at runtime.
+LabelDynamicCredentialSupport = "kubevirt.io/dynamic-credentials-support"
+
 // ProgressDone this means we are DONE
 ProgressDone = "100.0%"

@@ -336,9 +336,11 @@ func (r *DataImportCronReconciler) update(ctx context.Context, dataImportCron *c
 }

 handlePopulatedPvc := func() error {
+if pvc != nil {
 if err := r.updateSource(ctx, dataImportCron, pvc); err != nil {
 return err
 }
+}
 importSucceeded = true
 if err := r.handleCronFormat(ctx, dataImportCron, format, dvStorageClass); err != nil {
 return err
@@ -570,6 +572,8 @@ func (r *DataImportCronReconciler) updateDataSource(ctx context.Context, dataImp
 passCronLabelToDataSource(dataImportCron, dataSource, cc.LabelDefaultPreference)
 passCronLabelToDataSource(dataImportCron, dataSource, cc.LabelDefaultPreferenceKind)

+passCronLabelToDataSource(dataImportCron, dataSource, cc.LabelDynamicCredentialSupport)
+
 sourcePVC := dataImportCron.Status.LastImportedPVC
 populateDataSource(format, dataSource, sourcePVC)

@@ -1257,6 +1261,8 @@ func (r *DataImportCronReconciler) newSourceDataVolume(cron *cdiv1.DataImportCro
 passCronLabelToDv(cron, dv, cc.LabelDefaultPreference)
 passCronLabelToDv(cron, dv, cc.LabelDefaultPreferenceKind)

+passCronLabelToDv(cron, dv, cc.LabelDynamicCredentialSupport)
+
 return dv
 }

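The hunks above rely on a label pass-through helper (`passCronLabelToDv` / `passCronLabelToDataSource`) whose body is not part of this diff. The following is only an illustrative sketch of such a helper, not the repository's implementation:

```go
package example

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// passCronLabel copies one label from the DataImportCron to a target object
// (DataVolume or DataSource) if the cron carries it. Sketch only; the real
// helpers live in the CDI dataimportcron controller package.
func passCronLabel(cron, target metav1.Object, key string) {
    value, ok := cron.GetLabels()[key]
    if !ok {
        return
    }
    labels := target.GetLabels()
    if labels == nil {
        labels = map[string]string{}
    }
    labels[key] = value
    target.SetLabels(labels)
}
```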
|
@ -705,7 +705,7 @@ var _ = Describe("All DataImportCron Tests", func() {
|
||||
Entry("has no tag", imageStreamName, 1),
|
||||
)
|
||||
|
||||
It("should pass through defaultInstancetype and defaultPreference metadata to DataVolume and DataSource", func() {
|
||||
It("should pass through metadata to DataVolume and DataSource", func() {
|
||||
cron = newDataImportCron(cronName)
|
||||
cron.Annotations[AnnSourceDesiredDigest] = testDigest
|
||||
|
||||
@ -714,6 +714,7 @@ var _ = Describe("All DataImportCron Tests", func() {
|
||||
cron.Labels[cc.LabelDefaultInstancetypeKind] = cc.LabelDefaultInstancetypeKind
|
||||
cron.Labels[cc.LabelDefaultPreference] = cc.LabelDefaultPreference
|
||||
cron.Labels[cc.LabelDefaultPreferenceKind] = cc.LabelDefaultPreferenceKind
|
||||
cron.Labels[cc.LabelDynamicCredentialSupport] = "true"
|
||||
|
||||
reconciler = createDataImportCronReconciler(cron)
|
||||
_, err := reconciler.Reconcile(context.TODO(), cronReq)
|
||||
@ -728,25 +729,21 @@ var _ = Describe("All DataImportCron Tests", func() {
|
||||
dvName := imports[0].DataVolumeName
|
||||
Expect(dvName).ToNot(BeEmpty())
|
||||
|
||||
ExpectInstancetypeLabels := func(labels map[string]string) {
|
||||
Expect(labels).ToNot(BeEmpty())
|
||||
Expect(labels).Should(ContainElement(cc.LabelDefaultInstancetype))
|
||||
Expect(labels[cc.LabelDefaultInstancetype]).Should(Equal(cc.LabelDefaultInstancetype))
|
||||
Expect(labels).Should(ContainElement(cc.LabelDefaultInstancetypeKind))
|
||||
Expect(labels[cc.LabelDefaultInstancetypeKind]).Should(Equal(cc.LabelDefaultInstancetypeKind))
|
||||
Expect(labels).Should(ContainElement(cc.LabelDefaultPreference))
|
||||
Expect(labels[cc.LabelDefaultPreference]).Should(Equal(cc.LabelDefaultPreference))
|
||||
Expect(labels).Should(ContainElement(cc.LabelDefaultPreferenceKind))
|
||||
Expect(labels[cc.LabelDefaultPreferenceKind]).Should(Equal(cc.LabelDefaultPreferenceKind))
|
||||
expectLabels := func(labels map[string]string) {
|
||||
ExpectWithOffset(1, labels).To(HaveKeyWithValue(cc.LabelDefaultInstancetype, cc.LabelDefaultInstancetype))
|
||||
ExpectWithOffset(1, labels).To(HaveKeyWithValue(cc.LabelDefaultInstancetypeKind, cc.LabelDefaultInstancetypeKind))
|
||||
ExpectWithOffset(1, labels).To(HaveKeyWithValue(cc.LabelDefaultPreference, cc.LabelDefaultPreference))
|
||||
ExpectWithOffset(1, labels).To(HaveKeyWithValue(cc.LabelDefaultPreferenceKind, cc.LabelDefaultPreferenceKind))
|
||||
ExpectWithOffset(1, labels).To(HaveKeyWithValue(cc.LabelDynamicCredentialSupport, "true"))
|
||||
}
|
||||
|
||||
dv := &cdiv1.DataVolume{}
|
||||
Expect(reconciler.client.Get(context.TODO(), dvKey(dvName), dv)).To(Succeed())
|
||||
ExpectInstancetypeLabels(dv.Labels)
|
||||
expectLabels(dv.Labels)
|
||||
|
||||
dataSource = &cdiv1.DataSource{}
|
||||
Expect(reconciler.client.Get(context.TODO(), dataSourceKey(cron), dataSource)).To(Succeed())
|
||||
ExpectInstancetypeLabels(dataSource.Labels)
|
||||
expectLabels(dataSource.Labels)
|
||||
})
|
||||
|
||||
Context("Snapshot source format", func() {
|
||||
|
@@ -245,13 +245,14 @@ var _ = Describe("All DataVolume Tests", func() {
 Expect(dv.Status.Progress).To(BeEquivalentTo("13.45%"))
 })

-It("Should pass instancetype labels from DV to PVC", func() {
+It("Should pass labels from DV to PVC", func() {
 dv := NewImportDataVolume("test-dv")
 dv.Labels = map[string]string{}
 dv.Labels[LabelDefaultInstancetype] = LabelDefaultInstancetype
 dv.Labels[LabelDefaultInstancetypeKind] = LabelDefaultInstancetypeKind
 dv.Labels[LabelDefaultPreference] = LabelDefaultPreference
 dv.Labels[LabelDefaultPreferenceKind] = LabelDefaultPreferenceKind
+dv.Labels[LabelDynamicCredentialSupport] = "true"

 reconciler = createImportReconciler(dv)
 _, err := reconciler.Reconcile(context.TODO(), reconcile.Request{NamespacedName: types.NamespacedName{Name: "test-dv", Namespace: metav1.NamespaceDefault}})
@@ -262,10 +263,11 @@ var _ = Describe("All DataVolume Tests", func() {
 Expect(err).ToNot(HaveOccurred())

 Expect(pvc.Name).To(Equal("test-dv"))
-Expect(pvc.Labels[LabelDefaultInstancetype]).To(Equal(LabelDefaultInstancetype))
-Expect(pvc.Labels[LabelDefaultInstancetypeKind]).To(Equal(LabelDefaultInstancetypeKind))
-Expect(pvc.Labels[LabelDefaultPreference]).To(Equal(LabelDefaultPreference))
-Expect(pvc.Labels[LabelDefaultPreferenceKind]).To(Equal(LabelDefaultPreferenceKind))
+Expect(pvc.Labels).To(HaveKeyWithValue(LabelDefaultInstancetype, LabelDefaultInstancetype))
+Expect(pvc.Labels).To(HaveKeyWithValue(LabelDefaultInstancetypeKind, LabelDefaultInstancetypeKind))
+Expect(pvc.Labels).To(HaveKeyWithValue(LabelDefaultPreference, LabelDefaultPreference))
+Expect(pvc.Labels).To(HaveKeyWithValue(LabelDefaultPreferenceKind, LabelDefaultPreferenceKind))
+Expect(pvc.Labels).To(HaveKeyWithValue(LabelDynamicCredentialSupport, "true"))

 })
 It("Should set params on a PVC from import DV.PVC", func() {
rpm/BUILD.bazel (896 changes): file diff suppressed because it is too large.
@@ -2483,9 +2483,10 @@ var _ = Describe("all clone tests", func() {
 cdiCr = crList.Items[0]

 By("[BeforeEach] Forcing Host Assisted cloning")
-var cloneStrategy cdiv1.CDICloneStrategy = cdiv1.CloneStrategyHostAssisted
+cloneStrategy := cdiv1.CloneStrategyHostAssisted
 cdiCr.Spec.CloneStrategyOverride = &cloneStrategy
-Expect(f.CdiClient.CdiV1beta1().CDIs().Update(context.TODO(), &cdiCr, metav1.UpdateOptions{})).Error().ToNot(HaveOccurred())
+_, err = f.CdiClient.CdiV1beta1().CDIs().Update(context.TODO(), &cdiCr, metav1.UpdateOptions{})
+Expect(err).ToNot(HaveOccurred())

 Expect(utils.WaitForCDICrCloneStrategy(f.CdiClient, cloneStrategy)).To(Succeed())
 })
@@ -2507,7 +2508,8 @@

 newCdiCr := crList.Items[0]
 newCdiCr.Spec = *cdiCrSpec
-Expect(f.CdiClient.CdiV1beta1().CDIs().Update(context.TODO(), &newCdiCr, metav1.UpdateOptions{})).Error().ToNot(HaveOccurred())
+_, err = f.CdiClient.CdiV1beta1().CDIs().Update(context.TODO(), &newCdiCr, metav1.UpdateOptions{})
+Expect(err).ToNot(HaveOccurred())

 if cdiCrSpec.CloneStrategyOverride == nil {
 err = utils.WaitForCDICrCloneStrategyNil(f.CdiClient)
@@ -3229,7 +3231,8 @@ func VerifyDisabledGC(f *framework.Framework, dvName, dvNamespace string) {
 Expect(err).NotTo(HaveOccurred())
 return log
 }, timeout, pollingInterval).Should(ContainSubstring(matchString))
-Expect(f.CdiClient.CdiV1beta1().DataVolumes(dvNamespace).Get(context.TODO(), dvName, metav1.GetOptions{})).Error().ToNot(HaveOccurred())
+_, err := f.CdiClient.CdiV1beta1().DataVolumes(dvNamespace).Get(context.TODO(), dvName, metav1.GetOptions{})
+Expect(err).ToNot(HaveOccurred())
 }

 // EnableGcAndAnnotateLegacyDv enables garbage collection, annotates the DV and verifies it is garbage collected
@@ -3250,7 +3253,8 @@ func EnableGcAndAnnotateLegacyDv(f *framework.Framework, dvName, dvNamespace str

 By("Add true DeleteAfterCompletion annotation to DV")
 controller.AddAnnotation(dv, controller.AnnDeleteAfterCompletion, "true")
-Expect(f.CdiClient.CdiV1beta1().DataVolumes(dvNamespace).Update(context.TODO(), dv, metav1.UpdateOptions{})).Error().ToNot(HaveOccurred())
+_, err = f.CdiClient.CdiV1beta1().DataVolumes(dvNamespace).Update(context.TODO(), dv, metav1.UpdateOptions{})
+Expect(err).ToNot(HaveOccurred())
 VerifyGC(f, dvName, dvNamespace, false, nil)
 }

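The assertion changes above implement the "Revert some gomega error checking" item from the commit message. A small self-contained sketch of the two styles; the `update` function and test name are hypothetical stand-ins, not code from this repository:

```go
package example

import (
    "errors"
    "testing"

    . "github.com/onsi/gomega"
)

// update stands in for a client call such as CDIs().Update that returns an
// object together with an error.
func update() (*struct{}, error) {
    return &struct{}{}, errors.New("update rejected")
}

func TestErrorAssertionStyle(t *testing.T) {
    g := NewWithT(t)

    // Reverted style (shown for contrast, not executed here): when the first
    // return value is non-nil, a failure surfaces as
    // "Unexpected non-nil/non-zero argument at index 0" instead of the
    // underlying error, which is what made the flake hard to debug:
    //
    //   g.Expect(update()).Error().ToNot(HaveOccurred())
    //
    // Style the tests were switched back to: capture the error explicitly so
    // the failure message contains the actual error text.
    _, err := update()
    g.Expect(err).To(MatchError("update rejected"))
}
```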
@@ -38,7 +38,7 @@ const (

 var _ = Describe("DataImportCron", func() {
 var (
-f = framework.NewFramework(namespacePrefix)
+f = framework.NewFramework("dataimportcron-func-test")
 log = logf.Log.WithName("dataimportcron_test")
 dataSourceName = "datasource-test"
 pollerPodName = "poller"
@@ -610,6 +610,7 @@ var _ = Describe("ALL Operator tests", func() {
 ObjectMeta: metav1.ObjectMeta{
 Name: "cdi-operator",
 Namespace: f.CdiInstallNs,
+Labels: currentCdiOperatorDeployment.Labels,
 },
 Spec: currentCdiOperatorDeployment.Spec,
 }
@@ -940,6 +941,7 @@ var _ = Describe("ALL Operator tests", func() {
 var _ = Describe("Priority class tests", func() {
 var (
 cdi *cdiv1.CDI
+cdiPods *corev1.PodList
 systemClusterCritical = cdiv1.CDIPriorityClass("system-cluster-critical")
 osUserCrit = &schedulev1.PriorityClass{
 ObjectMeta: metav1.ObjectMeta{
@@ -960,6 +962,7 @@ var _ = Describe("ALL Operator tests", func() {
 }

 BeforeEach(func() {
+cdiPods = getCDIPods(f)
 cdi = getCDI(f)
 if cdi.Spec.PriorityClass != nil {
 By(fmt.Sprintf("Current priority class is: [%s]", *cdi.Spec.PriorityClass))
@@ -1007,6 +1010,7 @@ var _ = Describe("ALL Operator tests", func() {
 return checkLogForRegEx(logIsLeaderRegex, log)
 }, 2*time.Minute, 1*time.Second).Should(BeTrue())

+waitCDI(f, cr, cdiPods)
 })

 It("should use kubernetes priority class if set", func() {
@@ -1140,8 +1144,14 @@ func waitCDI(f *framework.Framework, cr *cdiv1.CDI, cdiPods *corev1.PodList) {
 By("Waiting for there to be as many CDI pods as before")
 Eventually(func() bool {
 newCdiPods = getCDIPods(f)
-By(fmt.Sprintf("number of cdi pods: %d\n new number of cdi pods: %d\n", len(cdiPods.Items), len(newCdiPods.Items)))
-return len(cdiPods.Items) == len(newCdiPods.Items)
+fmt.Fprintf(GinkgoWriter, "number of cdi pods: %d\n new number of cdi pods: %d\n", len(cdiPods.Items), len(newCdiPods.Items))
+for _, pod := range cdiPods.Items {
+fmt.Fprintf(GinkgoWriter, "old pod %s/%s\n", pod.Namespace, pod.Name)
+}
+for _, pod := range newCdiPods.Items {
+fmt.Fprintf(GinkgoWriter, "new pod %s/%s\n", pod.Namespace, pod.Name)
+}
+return len(newCdiPods.Items) == len(cdiPods.Items)
 }, 5*time.Minute, 2*time.Second).Should(BeTrue())

 for _, newCdiPod := range newCdiPods.Items {
@@ -113,8 +113,8 @@ function pushImages {

 update-ca-trust

-#remove storage.conf if exists
-rm -rf /etc/containers/storage.conf
+# Avoid 'overlay' is not supported over overlayfs error
+sed -i 's,driver =.*,driver = "vfs",' /etc/containers/storage.conf

 #building using buildah requires a properly installed shadow-utils package (which in turn requires SETFCAP)
 rpm --restore shadow-utils 2>/dev/null
tools/prom-metrics-collector/metrics_collector.go (new file, 78 lines)
@@ -0,0 +1,78 @@
/*
 * This file is part of the KubeVirt project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *
 * Copyright 2023 Red Hat, Inc.
 *
 */

package main

import (
    parser "github.com/kubevirt/monitoring/pkg/metrics/parser"
    "kubevirt.io/containerized-data-importer/pkg/monitoring"

    dto "github.com/prometheus/client_model/go"
)

// excludedMetrics defines the metrics to ignore,
// open issue:https://github.com/kubevirt/containerized-data-importer/issues/2773
// Do not add metrics to this list!
var excludedMetrics = map[string]struct{}{
    "clone_progress":                                {},
    "kubevirt_cdi_operator_up_total":                {},
    "kubevirt_cdi_incomplete_storageprofiles_total": {},
}

func recordRulesDescToMetricList(mdl []monitoring.RecordRulesDesc) []monitoring.MetricOpts {
    res := make([]monitoring.MetricOpts, len(mdl))
    for i, md := range mdl {
        res[i] = metricDescriptionToMetric(md)
    }

    return res
}

func metricDescriptionToMetric(rrd monitoring.RecordRulesDesc) monitoring.MetricOpts {
    return monitoring.MetricOpts{
        Name: rrd.Opts.Name,
        Help: rrd.Opts.Help,
        Type: rrd.Opts.Type,
    }
}

// ReadMetrics read and parse the metrics to a MetricFamily
func ReadMetrics() []*dto.MetricFamily {
    cdiMetrics := recordRulesDescToMetricList(monitoring.GetRecordRulesDesc(""))
    for _, opts := range monitoring.MetricOptsList {
        cdiMetrics = append(cdiMetrics, opts)
    }
    metricsList := make([]parser.Metric, len(cdiMetrics))
    var metricFamily []*dto.MetricFamily
    for i, cdiMetric := range cdiMetrics {
        metricsList[i] = parser.Metric{
            Name: cdiMetric.Name,
            Help: cdiMetric.Help,
            Type: cdiMetric.Type,
        }
    }
    for _, cdiMetric := range metricsList {
        // Remove ignored metrics from all rules
        if _, isExcludedMetric := excludedMetrics[cdiMetric.Name]; !isExcludedMetric {
            mf := parser.CreateMetricFamily(cdiMetric)
            metricFamily = append(metricFamily, mf)
        }
    }
    return metricFamily
}
tools/prom-metrics-collector/metrics_json_generator.go (new file, 37 lines)
@@ -0,0 +1,37 @@
/*
 * This file is part of the KubeVirt project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and
 * limitations under the License.
 *
 * Copyright 2023 Red Hat, Inc.
 *
 */

package main

import (
    "encoding/json"
    "fmt"
    "os"
)

func main() {
    metricFamilies := ReadMetrics()

    jsonBytes, err := json.Marshal(metricFamilies)
    if err != nil {
        fmt.Println(err)
        os.Exit(1)
    }

    fmt.Println(string(jsonBytes)) // Write the JSON string to standard output
}
vendor/github.com/kubevirt/monitoring/pkg/metrics/parser/metrics_parser.go (generated, vendored; new file, 52 lines)
@@ -0,0 +1,52 @@
/*
 * This file is part of the KubeVirt project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and
 * limitations under the License.
 *
 * Copyright 2023 Red Hat, Inc.
 *
 */

package parser

import (
    dto "github.com/prometheus/client_model/go"
)

// Metric represents a Prometheus metric
type Metric struct {
    Name string `json:"name,omitempty"`
    Help string `json:"help,omitempty"`
    Type string `json:"type,omitempty"`
}

// Set the correct metric type for creating MetricFamily
func CreateMetricFamily(m Metric) *dto.MetricFamily {
    metricType := dto.MetricType_UNTYPED

    switch m.Type {
    case "Counter":
        metricType = dto.MetricType_COUNTER
    case "Gauge":
        metricType = dto.MetricType_GAUGE
    case "Histogram":
        metricType = dto.MetricType_HISTOGRAM
    case "Summary":
        metricType = dto.MetricType_SUMMARY
    }

    return &dto.MetricFamily{
        Name: &m.Name,
        Help: &m.Help,
        Type: &metricType,
    }
}
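A brief usage sketch of the vendored helper added above; the metric shown is made up for illustration and is not one of CDI's real metrics:

```go
package main

import (
    "fmt"

    parser "github.com/kubevirt/monitoring/pkg/metrics/parser"
)

func main() {
    // Build a MetricFamily for a made-up metric using the vendored helper.
    mf := parser.CreateMetricFamily(parser.Metric{
        Name: "example_imports_total",
        Help: "An illustrative counter, not a real CDI metric.",
        Type: "Counter",
    })
    fmt.Println(mf.GetName(), mf.GetType()) // example_imports_total COUNTER
}
```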
vendor/modules.txt (vendored, 3 changes)
@@ -309,6 +309,9 @@ github.com/kubernetes-csi/external-snapshotter/client/v6/clientset/versioned/typ
 # github.com/kubernetes-csi/lib-volume-populator v1.2.0
 ## explicit; go 1.18
 github.com/kubernetes-csi/lib-volume-populator/populator-machinery
+# github.com/kubevirt/monitoring/pkg/metrics/parser v0.0.0-20230627123556-81a891d4462a
+## explicit; go 1.20
+github.com/kubevirt/monitoring/pkg/metrics/parser
 # github.com/mailru/easyjson v0.7.7
 ## explicit; go 1.12
 github.com/mailru/easyjson/buffer