Module 2: OpenShift Virtualization & RWX Live Migration

Duration: 30 minutes
Type: Hands-on Lab

Learning Objectives

By the end of this module, you will:

  • Deploy a Linux Virtual Machine using OpenShift Virtualization

  • Execute and monitor a non-disruptive live migration

  • Understand why RWX storage is required for workload mobility

Step 1: Navigate to Virtualization

  1. Open the OpenShift Console: {openshift_console_url}

  2. Select your project: {user_namespace}

  3. From the left menu, navigate to Virtualization > VirtualMachines

Step 2: Create a Virtual Machine

  1. Click Create VirtualMachine

  2. Select From template

  3. Choose Fedora VM (or the available Linux template)

  4. Configure the VM:

    • Name: {user}-testvm

    • Namespace: {user_namespace}

  5. Click Customize VirtualMachine before creating

  6. Under Storage, verify or change the disk settings:

    • Storage Class: ocs-storagecluster-ceph-rbd-virtualization

    • Access Mode: Should show ReadWriteMany (RWX)

    • Volume Mode: Block

  7. Click Create VirtualMachine

The storage class ocs-storagecluster-ceph-rbd-virtualization provides RWX Block volumes backed by Ceph RBD. This is what makes live migration possible — the VM’s disk can be accessed from multiple nodes simultaneously.
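Before creating the VM, you can confirm the storage class exists on the cluster. This is a quick sanity check; the exact provisioner string depends on your ODF installation:

```shell
# Confirm the virtualization-optimized storage class is present
# and note its provisioner (typically the Ceph RBD CSI driver)
oc get storageclass ocs-storagecluster-ceph-rbd-virtualization \
  -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner
```

If the command returns NotFound, check with your facilitator before proceeding, since the VM disk cannot be provisioned with RWX Block access without it.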

Alternatively, create the VM from the terminal:

cat <<EOF | oc apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: {user}-testvm
  namespace: {user_namespace}
  labels:
    app: {user}-testvm
spec:
  runStrategy: Always
  template:
    metadata:
      labels:
        kubevirt.io/vm: {user}-testvm
    spec:
      domain:
        devices:
          disks:
            - disk:
                bus: virtio
              name: rootdisk
          interfaces:
            - masquerade: {}
              name: default
        resources:
          requests:
            memory: 1Gi
      networks:
        - name: default
          pod: {}
      volumes:
        - dataVolume:
            name: {user}-testvm-rootdisk
          name: rootdisk
  dataVolumeTemplates:
    - metadata:
        name: {user}-testvm-rootdisk
      spec:
        source:
          registry:
            url: "docker://quay.io/containerdisks/fedora:latest"
        pvc:
          accessModes:
            - ReadWriteMany
          volumeMode: Block
          resources:
            requests:
              storage: 10Gi
          storageClassName: ocs-storagecluster-ceph-rbd-virtualization
EOF
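If you created the VM from the terminal manifest above, the DataVolume provisions a PVC with the same name. You can verify it actually received RWX Block settings (if you used the console wizard instead, the PVC name may differ; list it with `oc get pvc -n {user_namespace}`):

```shell
# Print the access modes and volume mode of the VM's root disk PVC
# Should print: ReadWriteMany Block
oc get pvc {user}-testvm-rootdisk -n {user_namespace} \
  -o jsonpath='{.spec.accessModes[*]} {.spec.volumeMode}' ; echo
```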

Step 3: Wait for the VM to Start

Monitor the VM status:

oc get vm {user}-testvm -n {user_namespace} -w

Wait until READY shows True and STATUS shows Running. This may take 2-3 minutes as the container disk image is imported.
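Instead of watching manually, you can block until the VM is ready. This is a sketch using `oc wait` on the VM's Ready condition, with an assumed 5-minute timeout:

```shell
# Block until the VM reports Ready; exits non-zero on timeout
oc wait vm/{user}-testvm -n {user_namespace} \
  --for=condition=Ready --timeout=300s
```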

Check which node the VM is running on:

oc get vmi {user}-testvm -n {user_namespace} -o jsonpath='{.status.nodeName}' ; echo

Record this node name — you’ll verify the VM moves to a different node after migration.
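To avoid copying the node name by hand, you can capture it in a shell variable (`SOURCE_NODE` is a name chosen here for illustration):

```shell
# Save the pre-migration node name for comparison in Step 5
SOURCE_NODE=$(oc get vmi {user}-testvm -n {user_namespace} \
  -o jsonpath='{.status.nodeName}')
echo "VM is running on: ${SOURCE_NODE}"
```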

VM Running Overview
Figure 1. The VM overview page shows Running status, the assigned node, and a VNC console preview.

Step 4: Initiate Live Migration

Via the Console

  1. In the VirtualMachines list, click on {user}-testvm

  2. Click Actions > Migration > Compute to migrate the VM to a different node

  3. Confirm the migration

Actions Migrate Menu
Figure 2. Select Actions > Migration > Compute to initiate a live migration.

Via the Terminal

cat <<EOF | oc apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: {user}-testvm-migration
  namespace: {user_namespace}
spec:
  vmiName: {user}-testvm
EOF

Step 5: Monitor the Migration

Watch the migration progress:

oc get vmim -n {user_namespace} -w

The migration goes through these phases: Scheduling → TargetReady → Running → Succeeded.
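For a one-shot check rather than a watch, you can query the current phase directly (this assumes the migration object name `{user}-testvm-migration` from the manifest above; console-initiated migrations get a generated name):

```shell
# Print the current phase of the migration object
oc get vmim {user}-testvm-migration -n {user_namespace} \
  -o jsonpath='{.status.phase}' ; echo
```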

VM Migrating Status
Figure 3. While migrating, the console shows a "Migrating" status badge. The VNC console stays connected throughout.

Once complete, verify the VM moved to a different node:

oc get vmi {user}-testvm -n {user_namespace} -o jsonpath='{.status.nodeName}' ; echo

The VM changed hosts without downtime. The guest operating system continued running throughout. This is only possible because:

  1. The PVC uses ocs-storagecluster-ceph-rbd-virtualization with RWX Block access mode

  2. Both the source and target nodes can mount the same Ceph RBD volume simultaneously

  3. KubeVirt copies the VM memory state over an encrypted TLS connection

Attempting this with a standard RWO volume would fail — the VM would be permanently locked to a single physical node.
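The before/after comparison can be scripted. In this sketch, `BEFORE` is a hypothetical placeholder you must replace with the node name you recorded in Step 3:

```shell
# Replace with the node name recorded before migration
BEFORE="worker-1"
AFTER=$(oc get vmi {user}-testvm -n {user_namespace} \
  -o jsonpath='{.status.nodeName}')
if [ "${BEFORE}" != "${AFTER}" ]; then
  echo "Live migration succeeded: ${BEFORE} -> ${AFTER}"
else
  echo "VM is still on ${BEFORE}; check the migration status with: oc get vmim -n {user_namespace}"
fi
```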

Clean Up (Optional)

If you want to clean up before Module 3:

oc delete vmim {user}-testvm-migration -n {user_namespace} --ignore-not-found

Do not delete the VM itself — you will use it in Module 3 for data protection.

Facilitator Notes: While the VM spins up and migrates, explain the mechanics: TLS encryption for migration traffic, parallel migration limits, and cluster capacity planning. Emphasize that a standard RWO volume would fail this test, permanently locking the VM to one node.