Conclusion: Strategic Synthesis

Duration: 5 minutes
Type: Discussion / Q&A

What You Accomplished

In this workshop, you:

Module | Achievement | Key Technology
Module 0 | Understood the operator-driven architecture | OLM, ODF, IBM Fusion
Module 1 | Validated storage classes and the RWO/RWX distinction | Ceph RBD, MCG, CephFS
Module 2 | Deployed a VM and executed a live migration | OpenShift Virtualization, RWX Block
Module 3 | Configured a backup policy and ran an on-demand backup | CBT, rbd-diff, MCG
Module 4 | Explored AI-driven data cataloging with MCP via OpenShift Lightspeed | DCS, MCP, OpenShift Lightspeed
Module 5 | Explored the 4.21 horizon — autonomic ops, AI inference, and GPU scheduling | Descheduler PSI, GIE, llm-d, CAS, DRA

The Core Message

OpenShift Data Foundation and IBM Storage Fusion transform storage from a passive receptacle into an intelligent, highly mobile data fabric ready for both legacy VMs and modern AI.

The five pillars you experienced today:

Mobility

RWX Ceph storage enables VMs to move freely across nodes — manually today, and now autonomically via the KubeVirtRelieveAndMigrate descheduler profile, which acts on PSI (Pressure Stall Information) metrics. Cross-cluster live migration (Tech Preview in 4.21) extends this mobility across entire clusters.
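
The autonomic path can be sketched as a descheduler configuration. This assumes the Kube Descheduler Operator is installed; the field layout follows its KubeDescheduler CRD and may differ slightly by release:

```yaml
# Sketch: enable PSI-driven VM rebalancing via the descheduler
# (assumes the Kube Descheduler Operator is installed).
apiVersion: operator.openshift.io/v1
kind: KubeDescheduler
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
spec:
  mode: Automatic                     # actually evict/migrate rather than only simulate
  profiles:
    - KubeVirtRelieveAndMigrate       # load-aware VM live migration using PSI metrics
  deschedulingIntervalSeconds: 3600   # how often to re-evaluate node pressure
```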

Protection

Change Block Tracking (CBT) via the Ceph rbd-diff API achieves incremental-forever backups that bypass the CSI layer entirely, reducing backup windows from hours to minutes. Storage-agnostic CBT is now GA in 4.21.
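
For context, the upstream storage-agnostic CBT work (Kubernetes KEP-3314) exposes changed-block metadata through a per-driver registration object that backup tools query over gRPC. The sketch below uses the alpha API group and illustrative endpoint names; the schema shipped in the product may differ:

```yaml
# Illustrative only: upstream CSI changed-block-tracking registration
# (KEP-3314 alpha API); address and names are hypothetical.
apiVersion: cbt.storage.k8s.io/v1alpha1
kind: SnapshotMetadataService
metadata:
  name: rbd.csi.ceph.com                         # one object per CSI driver
spec:
  address: snapshot-metadata.ceph-csi.svc:6443   # gRPC endpoint backup tools query for changed blocks
  audience: snapshot-metadata                    # token audience used to authenticate callers
```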

Cataloging

IBM Fusion Data Cataloging Service with MCP enables AI-driven metadata discovery, classification, and governance through natural language — turning data catalogs from static repositories into intelligent, conversational assets.

Intelligence

Content-Aware Storage makes storage an active participant in retrieval-augmented generation (RAG) pipelines. The Gateway API Inference Extension (GA since 4.20) and llm-d (CNCF Sandbox, March 2026) deliver GPU-efficient, cache-aware LLM serving.
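
As a rough sketch, the Gateway API Inference Extension groups model-serving pods into an InferencePool whose endpoint-picker extension makes the routing decisions. The names llama-pool, vllm-llama, and llama-epp are hypothetical, and the API version shown is a pre-GA one:

```yaml
# Sketch: an InferencePool fronting vLLM replicas (all names hypothetical).
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: llama-pool
spec:
  selector:
    app: vllm-llama        # model-server pods to balance across
  targetPortNumber: 8000   # port the model servers listen on
  extensionRef:
    name: llama-epp        # endpoint picker making cache-aware routing decisions
```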

GPU Efficiency

Dynamic Resource Allocation (GA in 4.21) replaces device plugins with expression-driven GPU scheduling, sharing, and topology awareness. Kueue 1.2 adds priority-based queue management for AI training jobs.
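
The expression-driven scheduling can be sketched with a DRA claim template that selects devices via CEL. The device class name and attribute keys below are assumptions, and the field layout follows a pre-GA resource.k8s.io version:

```yaml
# Sketch: DRA claim template selecting a GPU by CEL expression
# (deviceClassName and capacity keys are illustrative).
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: large-gpu
spec:
  spec:
    devices:
      requests:
        - name: gpu
          deviceClassName: gpu.example.com   # published by the vendor's DRA driver
          selectors:
            - cel:
                # Only match devices advertising at least 40Gi of memory.
                expression: device.capacity["gpu.example.com"].memory.compareTo(quantity("40Gi")) >= 0
```

A pod then references the template in `spec.resourceClaims`, and the scheduler allocates a matching device instead of relying on a device plugin's opaque counts.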

Next Steps

  • Explore further: Review the External References and the categorized links in Module 5 for deep-dive documentation

  • Try in production: Request a longer RHDP environment to test multi-VM backup policies with scheduled CBT

  • Plan for 4.21: Evaluate the KubeVirtRelieveAndMigrate descheduler, Gateway API Inference Extension, DRA, and Content-Aware Storage for your AI/ML infrastructure roadmap

  • Explore llm-d: The llm-d CNCF Sandbox project from IBM Research, Red Hat, and Google Cloud is the emerging standard for Kubernetes-native LLM inference

  • Provide feedback: Share your workshop experience with your facilitator

Q&A

This is the open discussion period. Facilitators should be prepared to address questions on:

  • Production sizing and capacity planning for ODF

  • IBM Fusion licensing and support tiers

  • Migration paths from legacy storage arrays

  • AI infrastructure planning with OpenShift

Thank You

Thank you for participating in the Modern Hybrid Infrastructure workshop. The skills you practiced today — VM deployment, live migration, intelligent data protection, and AI-driven data cataloging — are directly applicable to production OpenShift 4.21 environments.

Your lab environment will remain available for the duration of the RHDP reservation.