## Introduction
Data Volumes (DV) are an abstraction on top of Persistent Volume Claims (PVC) and the Containerized Data Importer (CDI). The DV will monitor and orchestrate the upload/import of the data into the PVC. Once the process is completed, the DV will be in a consistent state that allows consumers to make certain assumptions about the DV in order to progress their own orchestration.
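
For orientation, here is a minimal sketch of a DataVolume that imports a disk image from an HTTP source (the name, URL and size are placeholders):

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-import-dv                  # hypothetical name
spec:
  source:
    http:
      url: "http://server/path/disk.img"   # placeholder URL
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
```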
Why is this an improvement over simply looking at the state annotation created and managed by CDI? Data Volumes provide a versioned API that other projects like [Kubevirt](https://github.com/kubevirt/kubevirt) can integrate with. This way those projects can rely on an API staying the same for a particular version and have guarantees about what that API will look like. Any changes to the API will result in a new version of the API.
### Status phases
The following statuses are possible.
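
Among them are phases such as ImportInProgress, Paused, and Succeeded (the authoritative list lives in the DataVolume API types); the current phase is surfaced in the DataVolume's status, for example:

```yaml
status:
  phase: ImportInProgress   # hypothetical; moves to Succeeded once the import completes
  progress: 42.50%          # reported by the importer when available
```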
### Multi-stage Import
The VDDK source is currently the only type of DataVolume that can perform a multi-stage import. In a multi-stage import, multiple pods are started in succession to copy different parts of the source to an existing base disk image. The VDDK source uses a multi-stage import to perform warm migration: after copying an initial disk image, it queries the VMware host for the blocks that changed between two snapshots. Each delta is applied to the disk image, and only the final delta copy needs the source VM to be powered off, minimizing downtime.
To create a multi-stage VDDK import, first [enable changed block tracking](https://kb.vmware.com/s/article/1031873) on the source VM. Take an initial snapshot of the VM (snapshot-1), and take another snapshot (snapshot-2) after the VM has run long enough to save more data to disk. Create a DataVolume spec similar to the example below, specifying a list of checkpoints and a finalCheckpoint boolean to indicate if there are no further snapshots to copy. The first importer pod to appear will copy the full disk contents of snapshot-1 to the disk image provided by the PVC, and the second importer pod will quickly copy only the blocks that changed between snapshot-1 and snapshot-2. If finalCheckpoint is set to false, the resulting DataVolume will wait in a "Paused" state until further checkpoints are provided. The DataVolume will only move to "Succeeded" when finalCheckpoint is true and the last checkpoint in the list has been copied. It is not necessary to provide all the checkpoints up-front, because updates to these fields (finalCheckpoint and checkpoints) are allowed.
```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: vddk-multistage-dv          # hypothetical name; the values below are illustrative
spec:
  source:
    vddk:
      backingFile: "[datastore1] vm/vm-1.vmdk"   # disk to copy from the VMware host
      url: "https://vcenter.example.com"
      uuid: "52260566-b032-36cb-55b1-79bf29e30490"
      thumbprint: "20:6C:8A:5D:44:40:B3:79:4B:28:EA:76:13:60:90:92:77:A6:D5:CA"
      secretRef: vddk-credentials   # Secret holding the vCenter credentials
  checkpoints:
    - previous: ""                  # first stage: full copy of snapshot-1
      current: snapshot-1
    - previous: snapshot-1          # second stage: only blocks changed between the snapshots
      current: snapshot-2
  finalCheckpoint: true             # false would leave the DataVolume Paused, awaiting more checkpoints
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 32Gi
```
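
Since updates to checkpoints and finalCheckpoint are allowed, a later edit to this (hypothetical) DataVolume could append a third checkpoint and only then set the final flag, triggering one more delta copy:

```yaml
# updated spec fields on the existing DataVolume; snapshot-3 is a new, hypothetical snapshot
checkpoints:
  - previous: ""
    current: snapshot-1
  - previous: snapshot-1
    current: snapshot-2
  - previous: snapshot-2
    current: snapshot-3
finalCheckpoint: true   # set to true only once no further snapshots are expected
```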
## Block Volume Mode
You can import, clone and upload a disk image to a raw block persistent volume.
This is done by assigning the value 'Block' to the PVC volumeMode field in the DataVolume yaml.
The following is an example of importing a disk image to a raw block volume:
```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: import-dv-block              # hypothetical name; URL and size are placeholders
spec:
  source:
    http:
      url: "http://server/path/disk.img"
  pvc:
    volumeMode: Block                # request a raw block device instead of a filesystem
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
```

## Conditions

The DataVolume status object has conditions. There are 3 conditions available for DataVolumes:

* Ready
* Bound
* Running

The running and ready conditions are mutually exclusive: if running is true, then ready cannot be true, and vice versa. Each condition has the following fields:
* Type (Ready/Bound/Running).
* Status (True/False).
* LastTransitionTime - the timestamp when the last transition happened.
* LastHeartbeatTime - the timestamp of the last time anything on the condition was updated.
* Reason - the reason the status transitioned to a new value; a camel-cased single word, similar to an EventReason in events.
* Message - a detailed message expanding on the reason for the transition. For instance, if Running went from True to False, the reason will be the container exit reason, and the message will be the container exit message, which explains why the container exited.
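
As an illustration (field values are hypothetical), a condition in a DataVolume's status might look like:

```yaml
status:
  conditions:
    - type: Running
      status: "False"
      lastTransitionTime: "2021-02-18T10:10:00Z"
      lastHeartbeatTime: "2021-02-18T10:10:00Z"
      reason: Completed          # camel-cased single word, like an EventReason
      message: Import Complete   # expands on the reason for the transition
```
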
## Annotations
Specific [DV annotations](datavolume-annotations.md) are passed to the transfer pods to control their behavior.
## Kubevirt integration
[Kubevirt](https://github.com/kubevirt/kubevirt) is an extension to Kubernetes that allows one to run Virtual Machines (VM) on the same infrastructure as the containers managed by Kubernetes. CDI provides a mechanism to get a disk image into a PVC in order for Kubevirt to consume it. The following steps have to be taken in order for Kubevirt to consume a CDI-provided disk image.
1. Create a PVC with an annotation to, for instance, import from an external URL.
2. An importer pod is started that attempts to get the image from the external source.
3. Create a VM definition that references the PVC we just created.
4. Wait for the importer pod to finish (status can be checked by the status annotation on the PVC).
5. Start the VM using the imported disk.
There is no mechanism to stop 5 from happening before the import is complete, so one can attempt to start the VM before the disk has been completely imported, with obvious bad results.
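
For reference, step 1 corresponds to a PVC carrying CDI's import endpoint annotation; a minimal sketch, with placeholder name, URL and size:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc    # hypothetical name
  annotations:
    cdi.kubevirt.io/storage.import.endpoint: "http://server/path/disk.img"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```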
Now let's do the same process but using DVs.
1. Create a VM definition that references a DV template, which includes the external URL that contains the disk image.
2. A DV is created from the template; the DV in turn creates an underlying PVC with the correct annotation.
3. The importer pod is created like before.
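
A condensed sketch of such a VM definition (names, URL and sizes are illustrative; see the linked example below for the authoritative version):

```yaml
apiVersion: kubevirt.io/v1alpha3   # assumed API version
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  running: false
  dataVolumeTemplates:
    - metadata:
        name: example-dv
      spec:
        source:
          http:
            url: "http://server/path/disk.img"   # placeholder URL
        pvc:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
  template:
    spec:
      domain:
        devices:
          disks:
            - name: datavolumedisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 64Mi
      volumes:
        - name: datavolumedisk
          dataVolume:
            name: example-dv       # references the DV created from the template above
```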
[Get example](../manifests/example/vm-dv.yaml)
This example combines all the different pieces into a single yaml.
* Creation of a VM definition (example-vm).
* Creation of a DV with a source of http which points to an external URL (example-dv).
* Creation of a matching PVC with the same name as the DV, which will contain the result of the import (example-dv).
* Creation of an importer pod that does the actual import work.