Module 7: Optional Features for OpenShift 4.18/4.19
OpenShift Virtualization
What is OpenShift Virtualization?
OpenShift Virtualization enables you to run and manage virtual machines alongside containers in OpenShift Container Platform 4.18/4.19, providing a unified platform for both traditional and cloud-native workloads.
Key Features and Capabilities
- VM Lifecycle Management: Create, manage, and delete virtual machines
- Live Migration: Migrate running VMs between nodes without downtime
- Storage Integration: Persistent storage for VM disks
- Network Configuration: Advanced networking options including SR-IOV
- Template Management: VM templates for standardized deployments
- GPU Passthrough: Direct GPU access for VMs
Prerequisites for OpenShift Virtualization
Before installing OpenShift Virtualization, ensure:
- Hardware Requirements: Nodes with hardware virtualization support
- CPU Features: Intel VT-x or AMD-V virtualization extensions (a quick node check follows this list)
- Memory: Sufficient RAM for both containers and virtual machines
- Storage: ReadWriteMany (RWX) storage for live migration
- Network: Proper network configuration for VM connectivity
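To confirm the CPU Features item before installing, you can inspect a node's CPU flags directly. A minimal sketch using a debug pod (replace <node-name> with one of your workers; vmx indicates Intel VT-x, svm indicates AMD-V):
# Check a node for hardware virtualization CPU flags
oc debug node/<node-name> -- chroot /host sh -c 'grep -m1 -o -E "vmx|svm" /proc/cpuinfo'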
Installing OpenShift Virtualization
Step 1: Install the OpenShift Virtualization Operator
- Access the OpenShift web console
- Navigate to Operators → OperatorHub
- Search for "OpenShift Virtualization"
- Click Install and follow the installation wizard
- Wait for the operator to be installed successfully
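The console flow above can also be scripted. A minimal CLI equivalent, assuming the default channel and catalog names published in OperatorHub (verify them with oc describe packagemanifest kubevirt-hyperconverged -n openshift-marketplace before applying):
# Create the namespace, OperatorGroup, and Subscription for the operator
oc apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-cnv
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kubevirt-hyperconverged-group
  namespace: openshift-cnv
spec:
  targetNamespaces:
    - openshift-cnv
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  channel: stable
  name: kubevirt-hyperconverged
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF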
Step 2: Create HyperConverged Custom Resource
# Create HyperConverged instance to enable virtualization
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
  annotations:
    deployOVS: 'false'
spec:
  virtualMachineOptions:
    disableFreePageReporting: false
    disableSerialConsoleLog: true
  higherWorkloadDensity:
    memoryOvercommitPercentage: 100
  liveMigrationConfig:
    allowAutoConverge: false
    allowPostCopy: false
    completionTimeoutPerGiB: 150
    parallelMigrationsPerCluster: 5
    parallelOutboundMigrationsPerNode: 2
    progressTimeout: 150
  certConfig:
    ca:
      duration: 48h0m0s
      renewBefore: 24h0m0s
    server:
      duration: 24h0m0s
      renewBefore: 12h0m0s
  applicationAwareConfig:
    allowApplicationAwareClusterResourceQuota: false
    vmiCalcConfigName: DedicatedVirtualResources
  featureGates:
    enableCommonBootImageImport: true
    downwardMetrics: false
    disableMDevConfiguration: false
    enableApplicationAwareQuota: false
    deployKubeSecondaryDNS: false
    alignCPUs: false
    deployVmConsoleProxy: false
    persistentReservation: false
    autoResourceLimits: false
  workloadUpdateStrategy:
    batchEvictionInterval: 1m0s
    batchEvictionSize: 10
    workloadUpdateMethods:
      - LiveMigrate
  uninstallStrategy: BlockUninstallIfWorkloadsExist
  resourceRequirements:
    vmiCPUAllocationRatio: 10
Step 3: Verify Installation
# Check virtualization pods
oc get pods -n openshift-cnv
# Verify HyperConverged status
oc get hco -n openshift-cnv kubevirt-hyperconverged -o yaml
# Check node virtualization capabilities
oc get nodes -o custom-columns=NAME:.metadata.name,VIRT:.status.allocatable.devices\\.kubevirt\\.io/kvm
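To script the readiness check instead of eyeballing pod lists, you can block on the HyperConverged Available condition; a minimal sketch:
# Wait until the HyperConverged CR reports Available (up to 15 minutes)
oc wait hco kubevirt-hyperconverged -n openshift-cnv \
  --for=condition=Available --timeout=15m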
Creating and Managing Virtual Machines
Creating a VM from Template
# Example VM configuration
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rhel8-vm
  namespace: default
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/vm: rhel8-vm
    spec:
      domain:
        cpu:
          cores: 2
        memory:
          guest: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
        machine:
          type: pc-q35-rhel8.4.0
      networks:
        - name: default
          pod: {}
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/rhel:8-latest
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
              user: cloud-user
              password: redhat
              chpasswd: { expire: False }
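Because spec.running is true, the VM boots as soon as the manifest is applied. Typical follow-up commands, assuming the manifest above was saved as rhel8-vm.yaml and the virtctl client is installed:
# Apply the VM manifest and watch it come up
oc apply -f rhel8-vm.yaml
oc get vm rhel8-vm -n default
oc get vmi rhel8-vm -n default
# Open a serial console session
virtctl console rhel8-vm -n default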
Use Cases for OpenShift Virtualization
- Legacy Application Migration: Migrate existing VMs to OpenShift
- Mixed Workload Environments: Run VMs and containers on the same infrastructure
- Development and Testing: Provide isolated VM environments for development
- Application Modernization: Gradual migration from VMs to containers
- Disaster Recovery: VM-based backup and recovery solutions
VMware Migration with VDDK
For organizations migrating from VMware vSphere to OpenShift Virtualization, the VMware Virtual Disk Development Kit (VDDK) provides optimized disk transfer capabilities.
What is VDDK?
The VMware Virtual Disk Development Kit (VDDK) is a collection of C libraries, code samples, utilities, and documentation to help you create applications that access VMware virtual disk files. For OpenShift Virtualization, VDDK enables faster and more efficient migration of VMware VMs.
Benefits of Using VDDK
- Faster Migration: Direct access to VMware virtual disks
- Reduced Network Load: Optimized data transfer protocols
- Better Performance: Native VMware disk format support
- Enterprise Migration: Suitable for large-scale VM migrations
Prerequisites for VDDK Migration
Before setting up VDDK, ensure you have:
- Access to an OpenShift internal image registry or secure external registry
- VMware vSphere environment with the appropriate VDDK version
- Migration Toolkit for Virtualization (MTV) installed
- Sufficient storage for VM migration
Creating and Using a VDDK Image
Step 1: Download and Prepare VDDK
# Create and navigate to a temporary directory
mkdir /tmp/vddk-setup && cd /tmp/vddk-setup
# Download VDDK from VMware
# Navigate to: https://code.vmware.com/web/sdk
# Under Compute Virtualization, click Virtual Disk Development Kit (VDDK)
# Select VDDK version matching your vSphere version (e.g., VDDK 7.0 for vSphere 7.0)
# Download and save the VDDK archive
# Extract the VDDK archive
tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
Step 2: Build VDDK Container Image
# Create Dockerfile for VDDK image
cat > Dockerfile <<EOF
FROM registry.access.redhat.com/ubi8/ubi-minimal
USER 1001
COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
RUN mkdir -p /opt
ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
EOF
# Create new project for the VDDK image
oc new-project vddk
# Create a binary build configuration for the image
oc new-build --binary --name vddk
# Start the build, uploading the current directory as the build context
oc start-build vddk --from-dir . --follow
Step 3: Verify and Configure VDDK Image
# Verify the build completed successfully
oc get build
# Expected output:
# NAME     TYPE     FROM     STATUS     STARTED         DURATION
# vddk-1   Docker   Binary   Complete   5 minutes ago   45s
# Ensure image is accessible to migration namespace
# Replace openshift-mtv with your migration namespace if different
oc policy add-role-to-user system:image-puller \
system:serviceaccount:openshift-mtv:default --namespace vddk
# Edit the v2v-vmware ConfigMap in the openshift-cnv project
oc edit configmap v2v-vmware -n openshift-cnv
Step 4: Update Migration Configuration
Add the VDDK image to the v2v-vmware ConfigMap:
# Add this to the data stanza of the v2v-vmware ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: v2v-vmware
  namespace: openshift-cnv
data:
  vddk-init-image: image-registry.openshift-image-registry.svc:5000/vddk/vddk
  # ... other configuration parameters
Step 5: Verify VDDK Configuration
# Verify the ConfigMap has been updated
oc get configmap v2v-vmware -n openshift-cnv -o yaml
# Check that the vddk-init-image parameter is present
oc get configmap v2v-vmware -n openshift-cnv -o jsonpath='{.data.vddk-init-image}'
# Restart migration pods to pick up the new configuration
oc delete pods -l app=migration-controller -n openshift-mtv
Using VDDK for VM Migration
Once VDDK is configured, your VMware to OpenShift migrations will automatically use the optimized VDDK libraries:
# Create a migration plan that will use VDDK
# This is typically done through the Migration Toolkit for Virtualization UI
# The migration will automatically detect and use the VDDK image
# Monitor migration progress
oc get migration -n openshift-mtv
# Check migration logs for VDDK usage
oc logs -l app=migration-controller -n openshift-mtv | grep -i vddk
✅ Verification Checkpoint: Confirm VDDK is properly configured and being used for VMware migrations.
Advanced Virtualization Features
Live Migration Configuration
Configure live migration for high availability:
# Configure live migration policy
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: openshift-cnv
spec:
  configuration:
    migrations:
      allowAutoConverge: true
      allowPostCopy: true
      completionTimeoutPerGiB: 800
      parallelMigrationsPerCluster: 5
      parallelOutboundMigrationsPerNode: 2
      progressTimeout: 150
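Note that when the HyperConverged Operator manages the cluster, it reconciles the KubeVirt CR and can revert direct edits like the one above; the durable place for these settings is spec.liveMigrationConfig in the HyperConverged CR (see Step 2). A sketch of the equivalent patch:
# Apply the same migration tuning through the HyperConverged CR
oc patch hco kubevirt-hyperconverged -n openshift-cnv --type=merge --patch='
{
  "spec": {
    "liveMigrationConfig": {
      "allowAutoConverge": true,
      "allowPostCopy": true,
      "completionTimeoutPerGiB": 800
    }
  }
}'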
GPU Passthrough Configuration
Enable GPU passthrough for VMs:
# Label nodes with GPU resources
oc label node worker-gpu-1 nvidia.com/gpu.present=true
# Configure GPU passthrough
oc patch hco kubevirt-hyperconverged -n openshift-cnv --type=merge --patch='
{
  "spec": {
    "permittedHostDevices": {
      "pciHostDevices": [
        {
          "pciDeviceSelector": "10DE:1EB8",
          "resourceName": "nvidia.com/TU104GL_Tesla_T4"
        }
      ]
    }
  }
}'
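With the device permitted cluster-wide, an individual VM requests it through the gpus list in its domain devices. A minimal fragment (the deviceName must match the resourceName patched above; the device name gpu1 is an arbitrary choice):
# VirtualMachine spec fragment attaching the permitted GPU
spec:
  template:
    spec:
      domain:
        devices:
          gpus:
            - deviceName: nvidia.com/TU104GL_Tesla_T4
              name: gpu1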
VM Migration from VMware
For migrating VMs from VMware vSphere, use the Migration Toolkit for Virtualization:
# Install Migration Toolkit for Virtualization
oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: mtv-operator
  namespace: openshift-mtv
spec:
  channel: release-v2.6
  name: mtv-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
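The Subscription assumes the openshift-mtv namespace and an OperatorGroup already exist; if not, create them first (a sketch; the OperatorGroup name is an arbitrary choice). After the operator installs, MTV also needs a ForkliftController instance before migrations can run; see the MTV documentation for that CR.
# Namespace and OperatorGroup required before the Subscription
oc apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-mtv
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: migration
  namespace: openshift-mtv
spec:
  targetNamespaces:
    - openshift-mtv
EOF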
Creating VM Templates
# Example VM template
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: rhel8-server-template
  namespace: openshift
objects:
  - apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: ${NAME}
    spec:
      running: false
      template:
        metadata:
          labels:
            kubevirt.io/vm: ${NAME}
        spec:
          domain:
            cpu:
              cores: ${{CPU_CORES}}
            memory:
              guest: ${MEMORY}
            devices:
              disks:
                - name: rootdisk
                  disk:
                    bus: virtio
              interfaces:
                - name: default
                  masquerade: {}
          networks:
            - name: default
              pod: {}
          volumes:
            - name: rootdisk
              containerDisk:
                image: ${DISK_IMAGE}
parameters:
  - name: NAME
    description: VM name
    required: true
  - name: CPU_CORES
    description: Number of CPU cores
    value: "2"
  - name: MEMORY
    description: Memory size
    value: "4Gi"
  - name: DISK_IMAGE
    description: Container disk image
    value: "quay.io/containerdisks/rhel:8-latest"
OpenShift AI (Red Hat OpenShift AI)
What is OpenShift AI?
Red Hat OpenShift AI provides a comprehensive platform for developing, training, serving, and monitoring AI/ML models on OpenShift Container Platform 4.18/4.19.
Key Features and Capabilities
- Model Development: Jupyter notebooks and development environments
- Model Training: Distributed training with Kubeflow Pipelines
- Model Serving: High-performance model inference serving
- MLOps Integration: End-to-end ML pipeline automation
- GPU Support: NVIDIA GPU acceleration for training and inference
- Distributed Training: Support for multi-node, multi-GPU training workloads
Prerequisites for OpenShift AI
Before installing OpenShift AI, ensure:
- Cluster Resources: Sufficient CPU, memory, and storage for AI workloads
- GPU Support: NVIDIA GPU Operator for GPU acceleration
- Storage: High-performance storage for datasets and model artifacts
- Network: High-bandwidth networking for distributed training
Installing OpenShift AI
Step 1: Install Required Operators
# Install OpenShift AI Operator
oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rhods-operator
  namespace: redhat-ods-operator
spec:
  channel: stable
  name: rhods-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
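As with the other operator installs, the redhat-ods-operator namespace and an OperatorGroup are assumed to already exist. After subscribing, confirm the operator's CSV reaches Succeeded before moving on:
# Watch the operator install complete
oc get csv -n redhat-ods-operator
oc get pods -n redhat-ods-operator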
Step 2: Create DataScienceCluster
# Create DataScienceCluster instance
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc
spec:
  components:
    codeflare:
      managementState: Managed
    dashboard:
      managementState: Managed
    datasciencepipelines:
      managementState: Managed
    kserve:
      managementState: Managed
    modelmeshserving:
      managementState: Managed
    ray:
      managementState: Managed
    workbenches:
      managementState: Managed
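After applying the manifest, the operator rolls out each Managed component. A quick status check, assuming the manifest above was saved as datasciencecluster.yaml (component workloads land in the redhat-ods-applications namespace):
# Apply and verify the DataScienceCluster
oc apply -f datasciencecluster.yaml
oc get datasciencecluster default-dsc -o jsonpath='{.status.phase}{"\n"}'
oc get pods -n redhat-ods-applications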
Step 3: Configure GPU Support (Optional)
# Install NVIDIA GPU Operator
oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: gpu-operator-certified
  namespace: nvidia-gpu-operator
spec:
  channel: stable
  name: gpu-operator-certified
  source: certified-operators
  sourceNamespace: openshift-marketplace
EOF
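Installing the operator alone does not deploy the driver stack: the GPU Operator acts on a ClusterPolicy custom resource. NVIDIA's documented approach is to generate the default ClusterPolicy from the CSV's alm-examples annotation; a sketch (requires jq, and assumes the Node Feature Discovery Operator is already installed):
# Generate and apply the default ClusterPolicy
oc get csv -n nvidia-gpu-operator \
  -l operators.coreos.com/gpu-operator-certified.nvidia-gpu-operator \
  -o jsonpath='{.items[0].metadata.annotations.alm-examples}' | jq '.[0]' > clusterpolicy.json
oc apply -f clusterpolicy.json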
AI/ML Workload Examples
Creating a Data Science Project
# Create a data science project
apiVersion: v1
kind: Namespace
metadata:
  name: ml-project
  labels:
    opendatahub.io/dashboard: "true"
  annotations:
    openshift.io/description: "Machine Learning Project"
    openshift.io/display-name: "ML Project"
Jupyter Notebook Deployment
# Deploy Jupyter notebook server
apiVersion: kubeflow.org/v1
kind: Notebook
metadata:
  name: ml-notebook
  namespace: ml-project
spec:
  template:
    spec:
      containers:
        - name: ml-notebook
          image: quay.io/opendatahub/workbench-images:jupyter-datascience-c9s-py311_2023c_latest
          resources:
            requests:
              cpu: "1"
              memory: "4Gi"
              nvidia.com/gpu: "1"
            limits:
              cpu: "2"
              memory: "8Gi"
              nvidia.com/gpu: "1"
          volumeMounts:
            - name: workspace
              mountPath: /opt/app-root/src
      volumes:
        - name: workspace
          persistentVolumeClaim:
            claimName: ml-workspace-pvc
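The Notebook references a PersistentVolumeClaim named ml-workspace-pvc that must exist first; a minimal sketch (the size and access mode are assumptions to adjust for your storage class):
# Workspace PVC backing the notebook's /opt/app-root/src
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ml-workspace-pvc
  namespace: ml-project
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi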
Model Serving with KServe
# Deploy a model using KServe
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris
  namespace: ml-project
spec:
  predictor:
    sklearn:
      storageUri: "gs://kfserving-examples/models/sklearn/1.0/model"
      resources:
        requests:
          cpu: "100m"
          memory: "256Mi"
        limits:
          cpu: "1"
          memory: "1Gi"
Use Cases for OpenShift AI
- Machine Learning Workflows: End-to-end ML pipeline development and deployment
- Deep Learning Applications: GPU-accelerated training for neural networks
- Model Deployment: Scalable model serving and inference
- AI Infrastructure: Centralized platform for AI/ML teams
- Research and Development: Collaborative AI research environments
- Edge AI: Deploy AI models at edge locations
Additional Optional Features
OpenShift Serverless (Knative)
OpenShift Serverless provides serverless computing capabilities:
- Knative Serving: Auto-scaling serverless applications (see the example after this list)
- Knative Eventing: Event-driven architectures
- Function-as-a-Service: Serverless function deployment
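As a taste of Knative Serving, a minimal Service that scales to zero when idle (this assumes the Serverless operator and a KnativeServing instance are already in place; the image is a public Knative sample and the namespace is illustrative):
# Minimal Knative Service
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
  namespace: demo-serverless
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "OpenShift Serverless"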
OpenShift Service Mesh (Istio)
OpenShift Service Mesh provides advanced traffic management:
- Traffic Management: Advanced routing and load balancing
- Security: mTLS and security policies
- Observability: Distributed tracing and metrics
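Deployment starts with a ServiceMeshControlPlane in the istio-system namespace; a minimal sketch using the Service Mesh 2.x API (the version value is an assumption; match it to your installed operator):
# Minimal ServiceMeshControlPlane
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
spec:
  version: v2.6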
OpenShift GitOps (ArgoCD)
OpenShift GitOps enables GitOps-based deployments:
- Declarative Configuration: Git-based configuration management
- Automated Deployments: Continuous deployment from Git repositories
- Multi-Cluster Management: Manage multiple clusters from a single interface
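In practice this means declaring an Application that points at a Git path; a hedged sketch (the repository URL and namespaces are placeholders):
# Argo CD Application syncing manifests from Git
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-app
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/sample-repo.git
    path: manifests
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: sample-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true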
OpenShift Pipelines (Tekton)
OpenShift Pipelines provides cloud-native CI/CD:
- Cloud-Native Pipelines: Kubernetes-native CI/CD pipelines
- Reusable Tasks: Modular pipeline components
- Integration: Works with Git, registries, and deployment targets
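A minimal Pipeline illustrating the reusable-task idea, assuming the git-clone ClusterTask that ships with OpenShift Pipelines (newer releases resolve tasks differently; adjust to your version):
# Pipeline that clones a repository into a shared workspace
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: clone-source
spec:
  params:
    - name: git-url
      type: string
  workspaces:
    - name: shared
  tasks:
    - name: clone
      taskRef:
        name: git-clone
        kind: ClusterTask
      params:
        - name: url
          value: $(params.git-url)
      workspaces:
        - name: output
          workspace: shared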
Best Practices for Optional Features
Resource Management
- Resource Allocation: Plan CPU, memory, and storage requirements for optional features
- Node Placement: Use node selectors and affinity rules for optimal placement
- Resource Limits: Set appropriate resource limits and requests
- Monitoring: Implement comprehensive monitoring for all optional features
Performance Optimization
- GPU Utilization: Optimize GPU usage for AI/ML and virtualization workloads
- Storage Performance: Use high-performance storage for data-intensive workloads
- Network Optimization: Configure SR-IOV and high-speed networking for performance-critical applications
- Scaling Strategies: Implement appropriate scaling strategies for different workload types
Security Considerations
- Network Policies: Implement network policies for workload isolation (a sample policy follows this list)
- RBAC: Configure role-based access control for optional features
- Image Security: Use trusted container images and implement image scanning
- Compliance: Ensure compliance with organizational security policies
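For the Network Policies item, a common baseline is a per-namespace default-deny for ingress, with explicit allows layered on top; a minimal sketch (the namespace is illustrative):
# Deny all ingress to pods in this namespace unless another policy allows it
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: ml-project
spec:
  podSelector: {}
  policyTypes:
    - Ingress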
Workshop Conclusion
Congratulations! You’ve completed the OpenShift 4.18/4.19 Bare Metal Deployment Workshop. You now have comprehensive knowledge to:
Core Competencies Achieved
- Deploy OpenShift 4.18/4.19 on bare metal infrastructure using multiple installation methods
- Configure Advanced Networking with OVN-Kubernetes, NMState, and SR-IOV
- Implement Storage Solutions with OpenShift Data Foundation and the Local Storage Operator
- Manage Cluster Operations including monitoring, logging, and maintenance
- Deploy Optional Features such as OpenShift Virtualization and OpenShift AI
Next Steps and Continued Learning
- Production Deployment: Apply the knowledge to deploy production OpenShift clusters
- Advanced Features: Explore additional OpenShift capabilities and operators
- Certification: Consider pursuing Red Hat OpenShift certifications
- Community Engagement: Participate in OpenShift community forums and events