Module 5: Storage Configuration for OpenShift 4.18/4.19
Storage Overview for OpenShift 4.18/4.19
Storage Architecture in OpenShift
OpenShift Container Platform 4.18/4.19 provides comprehensive storage capabilities through multiple storage solutions:
- OpenShift Data Foundation (ODF): Software-defined storage solution
- Local Storage Operator: Local persistent volume management
- External Storage Providers: Integration with existing storage systems
- Dynamic Provisioning: Automated storage allocation
Storage Requirements for OpenShift 4.18/4.19
Before configuring storage, understand the requirements:
- Container Registry: Persistent storage for the image registry
- Monitoring: Storage for Prometheus and Alertmanager
- Logging: Storage for log aggregation
- Application Workloads: Persistent volumes for stateful applications
- etcd Backup: Storage for cluster state backups
OpenShift Data Foundation (ODF) Setup
What is OpenShift Data Foundation?
OpenShift Data Foundation is Red Hat’s software-defined storage solution that provides:
- Block Storage: High-performance block storage using Ceph RBD
- File Storage: Shared file storage using CephFS
- Object Storage: S3-compatible object storage using the Ceph RADOS Gateway
- Multi-Cloud Gateway: Unified object storage across multiple clouds
ODF Architecture Components
- Ceph Storage Cluster: Distributed storage backend
- Rook Operator: Kubernetes-native storage orchestration
- NooBaa: Multi-cloud object gateway
- CSI Drivers: Container Storage Interface drivers for dynamic provisioning
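These components are tied together by a StorageCluster custom resource that the ODF operator reconciles. A minimal sketch is shown below; the device-set count, the requested size, and the `local-block` storage class name are illustrative and depend on your environment:

```yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  storageDeviceSets:
    - name: ocs-deviceset          # illustrative name
      count: 1
      replica: 3                   # one OSD per storage node
      dataPVCTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 512Gi       # adjust to your device size
          storageClassName: local-block   # assumes local storage PVs exist
          volumeMode: Block
```

Rook turns each device set entry into Ceph OSDs, and the CSI drivers then expose the resulting pools as storage classes.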
Prerequisites for ODF Installation
Before installing ODF, ensure:
- Hardware Requirements: A minimum of 3 worker nodes with local storage
- Storage Devices: Raw block devices or local disks on each storage node
- Network Requirements: A high-bandwidth, low-latency network between storage nodes
- Node Labels: Proper node labeling for storage node identification
Step-by-Step ODF Installation
Step 1: Install OpenShift Data Foundation Operator
1. Access the OpenShift web console
2. Navigate to Operators → OperatorHub
3. Search for "OpenShift Data Foundation"
4. Click Install and follow the installation wizard
5. Wait for the operator to install successfully
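The console steps above can also be approximated from the CLI with an OperatorGroup and Subscription. This is a sketch: the channel name (here stable-4.18) must match your cluster version, and the namespace labeling follows common ODF conventions:

```shell
# ODF is installed into its own namespace (monitoring label is conventional)
oc create namespace openshift-storage

oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-storage-operatorgroup
  namespace: openshift-storage
spec:
  targetNamespaces:
    - openshift-storage
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: odf-operator
  namespace: openshift-storage
spec:
  channel: stable-4.18          # assumption: pick the channel for your release
  name: odf-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
```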
Step 2: Prepare Storage Nodes
Label nodes that will provide storage:
# Label storage nodes
oc label nodes worker-1 cluster.ocs.openshift.io/openshift-storage=""
oc label nodes worker-2 cluster.ocs.openshift.io/openshift-storage=""
oc label nodes worker-3 cluster.ocs.openshift.io/openshift-storage=""
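To confirm the labels were applied, list only the nodes carrying the storage label:

```shell
# Should return exactly the three labeled workers
oc get nodes -l cluster.ocs.openshift.io/openshift-storage=
```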
Local Storage Operator Configuration
What is the Local Storage Operator?
The Local Storage Operator enables the use of local storage devices for persistent volumes in OpenShift 4.18/4.19.
Local Storage Use Cases
- High-performance workloads: Applications requiring low-latency storage access
- Database workloads: Databases that benefit from direct storage access
- Edge computing: Environments where external storage is not available
- Cost optimization: Utilizing existing local storage resources
Installing the Local Storage Operator
Step 1: Install the Operator
# Create the namespace for the Local Storage Operator
oc create namespace openshift-local-storage

# Install the operator via OperatorHub or the CLI
# (a CLI install also requires an OperatorGroup in the namespace)
oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: local-operator-group
  namespace: openshift-local-storage
spec:
  targetNamespaces:
    - openshift-local-storage
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: local-storage-operator
  namespace: openshift-local-storage
spec:
  channel: stable
  name: local-storage-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
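Before moving on, verify that the operator install completed:

```shell
# The CSV should eventually report PHASE: Succeeded
oc get csv -n openshift-local-storage

# The operator pod should be Running
oc get pods -n openshift-local-storage
```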
Step 2: Discover Local Storage Devices
# Create a LocalVolumeDiscovery to discover available devices
oc apply -f - <<EOF
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeDiscovery
metadata:
  name: auto-discover-devices
  namespace: openshift-local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/os
            operator: In
            values:
              - linux
EOF
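Discovery results are reported per node as LocalVolumeDiscoveryResult objects, which you can inspect before defining a LocalVolumeSet:

```shell
# One discovery result object is created per matching node
oc get localvolumediscoveryresults -n openshift-local-storage

# Inspect the discovered devices (paths, sizes, types) on each node
oc get localvolumediscoveryresults -n openshift-local-storage -o yaml
```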
Step 3: Create LocalVolumeSet
# Create a LocalVolumeSet for automatic PV creation
# (fsType applies only to volumeMode: Filesystem, so it is omitted
# here because the volumes are exposed as raw block devices)
oc apply -f - <<EOF
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeSet
metadata:
  name: local-block
  namespace: openshift-local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/os
            operator: In
            values:
              - linux
  storageClassName: local-block
  volumeMode: Block
  maxDeviceCount: 10
  deviceInclusionSpec:
    deviceTypes:
      - disk
      - part
    minSize: 100Gi
EOF
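Once the LocalVolumeSet reconciles, each matching device becomes a PV bound to the local-block storage class:

```shell
# The storage class is created automatically by the LocalVolumeSet
oc get storageclass local-block

# One PV should appear per matching device on each selected node
oc get pv -o wide
```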
External Storage Integration
Supported External Storage Types
OpenShift 4.18/4.19 supports various external storage systems:
Block Storage Providers
- iSCSI: Internet Small Computer Systems Interface
- Fibre Channel: High-speed storage networking technology
- AWS EBS: Amazon Elastic Block Store
- Azure Disk: Microsoft Azure managed disks
- GCE Persistent Disk: Google Cloud persistent disks
File Storage Providers
- NFS: Network File System
- CephFS: Ceph File System
- AWS EFS: Amazon Elastic File System
- Azure Files: Microsoft Azure file shares
Object Storage Providers
- S3-compatible storage: Amazon S3 and compatible systems
- OpenStack Swift: OpenStack object storage
- Ceph RADOS Gateway: Ceph object storage interface
Configuring External Storage
Creating Storage Classes
# Example NFS storage class (using the NFS CSI driver)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.example.com
  share: /exports/nfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
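A claim against this class might look as follows; the claim name and size are illustrative. The CSI driver provisions an NFS-backed volume on demand when the claim is created:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim               # illustrative name
spec:
  accessModes:
    - ReadWriteMany             # NFS supports shared read-write access
  storageClassName: nfs-storage
  resources:
    requests:
      storage: 10Gi
```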
Persistent Volume Management
- Static Provisioning: Manually created persistent volumes
- Dynamic Provisioning: Automatically created volumes via storage classes
- Volume Expansion: Expanding existing persistent volumes
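For contrast with dynamic provisioning, a statically provisioned PV is defined by hand; the server and export path below are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-nfs-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # keep the data when the claim is deleted
  nfs:
    server: nfs-server.example.com        # hypothetical NFS server
    path: /exports/static
```

Volume expansion additionally requires `allowVolumeExpansion: true` on the storage class; an existing PVC is then expanded by raising `spec.resources.requests.storage`.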
Container Image Registry Configuration
Configuring the Internal Registry
The OpenShift internal registry requires persistent storage for production use:
Configure Registry Storage with ODF
# Configure the image registry to use PVC-backed storage; an empty claim
# tells the operator to create a PVC (image-registry-storage) using the
# cluster's default storage class
oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"pvc":{"claim":""}}}}'

# Verify the registry configuration
oc get configs.imageregistry.operator.openshift.io cluster -o yaml
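With ODF installed, the registry can instead be bound to an explicit RWX claim. The storage class name below assumes the ODF CephFS default, and the claim name and size are illustrative:

```shell
oc create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-storage
  namespace: openshift-image-registry
spec:
  accessModes:
    - ReadWriteMany             # RWX allows multiple registry replicas
  storageClassName: ocs-storagecluster-cephfs   # assumption: ODF CephFS class
  resources:
    requests:
      storage: 100Gi
EOF

# Point the registry at the explicit claim
oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
  --patch '{"spec":{"storage":{"pvc":{"claim":"registry-storage"}}}}'
```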
Registry Security and Access Control
- Registry Security: Configure TLS and authentication
- Access Control: Manage registry access permissions
- Storage Configuration: Optimize storage for registry workloads
Volume Snapshots and Backup
Configuring Volume Snapshots
OpenShift 4.18/4.19 provides comprehensive snapshot capabilities:
Install Volume Snapshot Components
# Volume snapshot components are included by default
# Verify snapshot CRDs are available
oc get crd | grep snapshot
# Check volume snapshot controller
oc get pods -n openshift-cluster-storage-operator
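A snapshot is requested declaratively against an existing claim. The snapshot class and PVC names below are assumptions; ODF installs its own snapshot classes, which you can list with `oc get volumesnapshotclass`:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snapshot
spec:
  volumeSnapshotClassName: ocs-storagecluster-rbdplugin-snapclass   # assumption: ODF RBD snapshot class
  source:
    persistentVolumeClaimName: app-data    # illustrative PVC name
```

The snapshot can later be restored by creating a new PVC whose `dataSource` references the VolumeSnapshot.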
Backup and Disaster Recovery
Implement comprehensive backup strategies:
- OADP (OpenShift API for Data Protection): Application backup and restore
- etcd Backup: Control plane backup procedures
- Volume Snapshots: Point-in-time storage snapshots
- Cross-Region Replication: Disaster recovery across regions
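For the etcd item above, OpenShift ships a backup script on each control plane node; a typical invocation looks like the following, where the node name is illustrative:

```shell
# Run the cluster backup script on one control plane node;
# the snapshot and static-pod resources are written to the given directory
oc debug node/master-0 -- chroot /host /usr/local/bin/cluster-backup.sh /home/core/assets/backup
```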
Storage Monitoring and Performance
Storage Metrics and Monitoring
Monitor storage performance and capacity:
Key Storage Metrics
- Capacity Utilization: Available vs. used storage capacity
- IOPS Performance: Input/output operations per second
- Latency Metrics: Storage response times
- Throughput: Data transfer rates
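Several of these metrics can be queried from the built-in Prometheus. The kubelet volume metrics below are standard; the exact expressions are a sketch:

```
# Capacity utilization: fraction of each PVC's space in use
kubelet_volume_stats_used_bytes / kubelet_volume_stats_capacity_bytes

# PVCs with less than 10% space remaining (candidates for expansion)
kubelet_volume_stats_available_bytes / kubelet_volume_stats_capacity_bytes < 0.10
```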
Monitoring Tools
- Prometheus Monitoring: Built-in metrics collection
- Grafana Dashboards: Visual storage performance dashboards
- ODF Monitoring: Specialized monitoring for OpenShift Data Foundation
- Storage Alerts: Automated alerting for storage issues
Performance Optimization
Optimize storage performance for different workloads:
- Storage Class Parameters: Tune storage class settings for performance
- Node Affinity: Place storage-intensive workloads on appropriate nodes
- Resource Limits: Configure appropriate CPU and memory limits
- Network Optimization: Optimize network configuration for storage traffic
Storage Best Practices for OpenShift 4.18/4.19
Design Principles
- Redundancy: Implement storage redundancy across failure domains
- Performance: Choose appropriate storage types for workload requirements
- Scalability: Plan for storage growth and expansion
- Security: Implement encryption at rest and in transit
Operational Best Practices
- Capacity Planning: Monitor and plan for storage capacity growth
- Backup Strategy: Implement regular backup and disaster recovery procedures
- Performance Monitoring: Continuously monitor storage performance metrics
- Security Updates: Keep storage components updated with security patches
Next Steps
Ready to configure advanced networking with Nmstate? Continue to Module 6: Network Configuration using Nmstate.