# Conclusion: Strategic Synthesis

## What You Accomplished

In this workshop, you:
| Module | Achievement | Key Technology |
|---|---|---|
| 1 | Understood the operator-driven architecture | OLM, ODF, IBM Fusion |
| 2 | Validated storage classes and the RWO/RWX distinction | Ceph RBD, MCG, CephFS |
| 3 | Deployed a VM and executed a live migration | OpenShift Virtualization, RWX Block |
| 4 | Configured backup policy and ran an on-demand backup | CBT, rbd-diff, MCG |
| 5 | Explored AI-driven data cataloging with MCP via OpenShift Lightspeed | DCS, MCP, OpenShift Lightspeed |
| 6 | Explored the 4.21 Horizon: autonomic ops, AI inference, and GPU scheduling | Descheduler PSI, GIE, llm-d, CAS, DRA |
## The Core Message
OpenShift Data Foundation and IBM Storage Fusion transform storage from a passive receptacle into an intelligent, highly mobile data fabric ready for both legacy VMs and modern AI.
The five pillars you experienced today:

- **Mobility**: RWX Ceph storage enables VMs to move freely across nodes, today manually, and now autonomically via the `KubeVirtRelieveAndMigrate` descheduler profile with PSI metrics. Cross-cluster live migration (Tech Preview in 4.21) extends this across entire clusters.
- **Protection**: Change Block Tracking via the Ceph rbd-diff API achieves incremental-forever backups that bypass the CSI layer entirely, reducing backup windows from hours to minutes. Storage-agnostic CBT is now GA in 4.21.
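To make the incremental-forever mechanism concrete, the rbd-diff primitive can be exercised directly with the `rbd` CLI. This is an illustrative sketch only; the pool, image, and snapshot names below are placeholders, not names from your lab environment.

```shell
# List only the extents that changed between snapshot "backup-1" and the
# current image state. A backup tool copies just these extents instead of
# re-reading the full volume, which is what keeps incremental windows short.
# Pool/image/snapshot names are illustrative placeholders.
rbd diff --from-snap backup-1 ocs-storagecluster-cephblockpool/vm-disk-1 --format json
```

The JSON output is a list of `{offset, length, exists}` extents; an empty list means nothing changed since the snapshot, so the incremental backup is effectively free.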
- **Cataloging**: IBM Fusion Data Cataloging Service with MCP enables AI-driven metadata discovery, classification, and governance through natural language, turning data catalogs from static repositories into intelligent, conversational assets.
- **Intelligence**: Content-Aware Storage makes storage an active RAG pipeline participant. The Gateway API Inference Extension (GA since 4.20) and llm-d (CNCF Sandbox, March 2026) deliver GPU-efficient, cache-aware LLM serving.
- **GPU Efficiency**: Dynamic Resource Allocation (GA in 4.21) replaces device plugins with expression-driven GPU scheduling, sharing, and topology awareness. Kueue 1.2 adds priority-based queue management for AI training jobs.
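To give the DRA pillar some shape, here is a minimal sketch of a GPU request under the upstream `resource.k8s.io/v1` API (GA in Kubernetes 1.34). The device class name `gpu.example.com` and all other names are placeholders; the actual class is published by your vendor's DRA driver.

```shell
# Hedged sketch: a ResourceClaimTemplate requesting one GPU, and a pod
# that consumes it. Names and the device class are illustrative only.
oc apply -f - <<'EOF'
apiVersion: resource.k8s.io/v1
kind: ResourceClaimTemplate
metadata:
  name: single-gpu
spec:
  spec:
    devices:
      requests:
      - name: gpu
        exactly:
          deviceClassName: gpu.example.com
---
apiVersion: v1
kind: Pod
metadata:
  name: trainer
spec:
  containers:
  - name: trainer
    image: registry.example.com/trainer:latest
    resources:
      claims:
      - name: gpu
  resourceClaims:
  - name: gpu
    resourceClaimTemplateName: single-gpu
EOF
```

Unlike the device-plugin model's opaque `nvidia.com/gpu: 1` counter, the claim is a first-class API object, so the scheduler can reason about device attributes, sharing, and topology when placing the pod.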
## Next Steps
- **Explore further**: Review the External References and the categorized links in Module 5 for deep-dive documentation.
- **Try in production**: Request a longer RHDP environment to test multi-VM backup policies with scheduled CBT.
- **Plan for 4.21**: Evaluate the `KubeVirtRelieveAndMigrate` descheduler profile, Gateway API Inference Extension, DRA, and Content-Aware Storage for your AI/ML infrastructure roadmap.
- **Explore llm-d**: The llm-d CNCF Sandbox project from IBM Research, Red Hat, and Google Cloud is the emerging standard for Kubernetes-native LLM inference.
- **Provide feedback**: Share your workshop experience with your facilitator.
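As a starting point for the 4.21 evaluation, enabling the autonomic VM rebalancing profile is a small change to the Kube Descheduler Operator CR. This is a hedged sketch: the profile name comes from this workshop, while the interval and other field values are illustrative defaults you should tune.

```shell
# Hedged sketch: enable the KubeVirtRelieveAndMigrate profile on the
# cluster-scoped KubeDescheduler CR. Interval value is illustrative.
oc apply -f - <<'EOF'
apiVersion: operator.openshift.io/v1
kind: KubeDescheduler
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
spec:
  managementState: Managed
  mode: Automatic
  deschedulingIntervalSeconds: 60
  profiles:
  - KubeVirtRelieveAndMigrate
EOF
```

With `mode: Automatic`, the descheduler evicts (and thereby live-migrates) VMs from pressured nodes rather than merely simulating the evictions, so validate in a non-production cluster first.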
## Q&A
This is the open discussion period. Facilitators should be prepared to address questions on any of the topics covered above.
## Thank You
Thank you for participating in the Modern Hybrid Infrastructure workshop. The skills you practiced today — VM deployment, live migration, intelligent data protection, and AI-driven data cataloging — are directly applicable to production OpenShift 4.21 environments.
Your lab environment will remain available for the duration of the RHDP reservation.