Module 6: Advanced Network Configuration for OpenShift 4.18/4.19
Networking Overview for OpenShift 4.18/4.19
OpenShift Networking Architecture
OpenShift Container Platform 4.18/4.19 provides advanced networking capabilities through:
- OVN-Kubernetes: Default Container Network Interface (CNI) plugin
- Nmstate Operator: Declarative network configuration management
- SR-IOV Network Operator: High-performance networking
- Multus CNI: Multiple network interfaces per pod
Network Components and Features
- Software-Defined Networking: Overlay networks with OVN-Kubernetes
- Network Policies: Microsegmentation and traffic control
- Service Mesh: Advanced traffic management
- Load Balancing: Built-in load balancing and ingress capabilities
- IPsec Encryption: Pod-to-pod encryption
Nmstate Operator Configuration
What is the Nmstate Operator?
The Nmstate Operator provides declarative network configuration management for OpenShift nodes, enabling:
- Declarative Configuration: YAML-based network interface configuration
- Node Network Management: Centralized network configuration across cluster nodes
- Advanced Networking: Support for bonds, VLANs, bridges, and complex topologies
- State Validation: Automatic validation and rollback of network configurations
Installing the Nmstate Operator
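The operator is typically installed from OperatorHub into the openshift-nmstate namespace. The following is a minimal CLI sketch; the channel and catalog source names are assumptions and may differ on your cluster, so verify them against OperatorHub before applying.

# Install the Kubernetes NMState Operator via CLI (channel/catalog names are illustrative)
oc apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-nmstate
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-nmstate
  namespace: openshift-nmstate
spec:
  targetNamespaces:
  - openshift-nmstate
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kubernetes-nmstate-operator
  namespace: openshift-nmstate
spec:
  channel: stable
  name: kubernetes-nmstate-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
# Once the operator is running, create an NMState instance to deploy the handlers
oc apply -f - <<EOF
apiVersion: nmstate.io/v1
kind: NMState
metadata:
  name: nmstate
EOF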
Advanced Network Interface Configuration
Configuring Network Bonds
# Example: Configure network bonding with LACP
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: bond-policy
spec:
  desiredState:
    interfaces:
    - name: bond0
      type: bond
      state: up
      link-aggregation:
        mode: 802.3ad
        port:
        - enp1s0
        - enp2s0
        options:
          miimon: "100"
      ipv4:
        enabled: true
        dhcp: true
VLAN Configuration
# Example: Configure VLAN interfaces
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: vlan-policy
spec:
  desiredState:
    interfaces:
    - name: enp1s0.100
      type: vlan
      state: up
      vlan:
        base-iface: enp1s0
        id: 100
      ipv4:
        enabled: true
        address:
        - ip: 192.168.100.10
          prefix-length: 24
Bridge Configuration
# Example: Configure network bridge
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: bridge-policy
spec:
  desiredState:
    interfaces:
    - name: br0
      type: linux-bridge
      state: up
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: enp1s0
      ipv4:
        enabled: true
        dhcp: true
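After applying any of these policies, confirm that each node enacted the desired state. The commands below assume the Nmstate Operator is installed; enactment objects are named after the node and policy.

# Check overall policy status
oc get nncp
# Check per-node enactments and any failure messages
oc get nnce
oc describe nnce <node-name>.<policy-name>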
SR-IOV Network Configuration
What is SR-IOV?
Single Root I/O Virtualization (SR-IOV) enables high-performance networking by allowing direct hardware access to network interfaces.
SR-IOV Benefits
- High Performance: Direct hardware access with minimal CPU overhead
- Low Latency: Reduced network latency for performance-critical applications
- Hardware Acceleration: Offload network processing to specialized hardware
- Isolation: Hardware-level network isolation between workloads
Installing SR-IOV Network Operator
Step 1: Install the Operator
# Install SR-IOV Network Operator via CLI
oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sriov-network-operator-subscription
  namespace: openshift-sriov-network-operator
spec:
  channel: stable
  name: sriov-network-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
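The Subscription above assumes its target namespace and an OperatorGroup already exist. If they do not, create them first; a minimal sketch:

# Create the namespace and OperatorGroup required by the Subscription
oc apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-sriov-network-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: sriov-network-operators
  namespace: openshift-sriov-network-operator
spec:
  targetNamespaces:
  - openshift-sriov-network-operator
EOF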
Step 2: Configure SR-IOV Network Node Policy
# Example SR-IOV Network Node Policy
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-intel-nic
  namespace: openshift-sriov-network-operator
spec:
  resourceName: intel_nics
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  priority: 99
  numVfs: 8
  nicSelector:
    vendor: "8086"
    deviceID: "158b"
    pfNames: ["ens1f0"]
  deviceType: netdevice
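The node policy only provisions virtual functions and advertises them as the intel_nics resource. To hand a VF to a pod, you typically also define an SriovNetwork, which generates a NetworkAttachmentDefinition in the target namespace. The workload namespace and IPAM values below are illustrative assumptions:

# Example SriovNetwork exposing the intel_nics resource to a workload namespace
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: intel-sriov-network
  namespace: openshift-sriov-network-operator
spec:
  resourceName: intel_nics
  networkNamespace: my-app-namespace
  ipam: |
    {
      "type": "host-local",
      "subnet": "10.10.10.0/24",
      "gateway": "10.10.10.1"
    }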
Multus CNI Configuration
What is Multus CNI?
Multus CNI enables pods to have multiple network interfaces, supporting complex networking requirements.
Creating Network Attachment Definitions
Macvlan Network Attachment
# Example Macvlan Network Attachment Definition
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "enp1s0",
      "mode": "bridge",
      "ipam": {
        "type": "static",
        "addresses": [
          {
            "address": "192.168.1.100/24",
            "gateway": "192.168.1.1"
          }
        ]
      }
    }
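A pod requests this additional interface through the k8s.v1.cni.cncf.io/networks annotation. A minimal sketch, assuming the NetworkAttachmentDefinition lives in the same namespace as the pod (the pod name and image are placeholders):

# Example pod that attaches macvlan-conf as a secondary interface
apiVersion: v1
kind: Pod
metadata:
  name: multus-example
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "infinity"]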
Network Security and Policies
Network Policies in OpenShift 4.18/4.19
Network policies provide microsegmentation and traffic control capabilities.
Default Deny Network Policy
# Deny all ingress traffic by default
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
Allow Specific Traffic
# Allow traffic to web pods from the frontend namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-namespace
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: frontend
    ports:
    - protocol: TCP
      port: 8080
Egress Network Policy
# Restrict outbound traffic: allow only kube-system and DNS
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
  - to: []
    ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
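To confirm which policies are in effect for a namespace, list and inspect them:

# List network policies in the current project
oc get networkpolicy
# Show the selectors and rules of a specific policy
oc describe networkpolicy allow-from-namespace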
IPsec Encryption Configuration
Enable IPsec encryption for pod-to-pod communication:
# Enable IPsec encryption for east-west (pod-to-pod) traffic
oc patch networks.operator.openshift.io cluster --type=merge \
  -p='{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipsecConfig":{"mode":"Full"}}}}}'
# Verify IPsec configuration
oc get network.operator cluster -o yaml
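When IPsec is enabled, OVN-Kubernetes deploys additional IPsec pods on each node. One hedged way to confirm they are running (exact pod names vary by release):

# Look for the IPsec pods created by OVN-Kubernetes
oc get pods -n openshift-ovn-kubernetes | grep -i ipsec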
Load Balancer and Ingress Configuration
OpenShift Router and Ingress
OpenShift 4.18/4.19 provides advanced ingress capabilities through the Ingress Operator.
Default Ingress Controller Configuration
# Configure default ingress controller
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  domain: apps.cluster.example.com
  endpointPublishingStrategy:
    type: LoadBalancerService
  replicas: 3
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/worker: ""
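Note that the default IngressController already exists after installation, so changes are usually applied with oc patch or oc edit rather than by re-creating the object. For example, to scale the router:

# Scale the default ingress controller to three replicas
oc patch ingresscontroller default -n openshift-ingress-operator \
  --type=merge -p '{"spec":{"replicas":3}}'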
Custom Ingress Controller
# Create custom ingress controller for specific workloads
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: custom-ingress
  namespace: openshift-ingress-operator
spec:
  domain: custom.example.com
  routeSelector:
    matchLabels:
      type: custom
  nodePlacement:
    nodeSelector:
      matchLabels:
        ingress: custom
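Because this controller uses a routeSelector, only routes carrying the matching label are admitted by it. Label an existing route accordingly (the route and namespace names are placeholders):

# Expose a route through the custom ingress controller by adding the selector label
oc label route my-route type=custom -n my-app-namespace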
External Load Balancer Configuration
For bare metal deployments, configure external load balancers:
HAProxy Configuration Example
# Example HAProxy configuration for OpenShift
global
    log stdout local0
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000
# API Load Balancer
frontend api-frontend
    bind *:6443
    mode tcp
    default_backend api-backend

backend api-backend
    mode tcp
    balance roundrobin
    server master-0 192.168.1.10:6443 check
    server master-1 192.168.1.11:6443 check
    server master-2 192.168.1.12:6443 check

# Application Ingress Load Balancer (HTTP)
frontend apps-http-frontend
    bind *:80
    mode tcp
    default_backend apps-http-backend

backend apps-http-backend
    mode tcp
    balance roundrobin
    server worker-0 192.168.1.20:80 check
    server worker-1 192.168.1.21:80 check
    server worker-2 192.168.1.22:80 check

# Application Ingress Load Balancer (HTTPS)
frontend apps-https-frontend
    bind *:443
    mode tcp
    default_backend apps-https-backend

backend apps-https-backend
    mode tcp
    balance roundrobin
    server worker-0 192.168.1.20:443 check
    server worker-1 192.168.1.21:443 check
    server worker-2 192.168.1.22:443 check
DNS Configuration for OpenShift 4.18/4.19
DNS Requirements
Proper DNS configuration is critical for OpenShift cluster operation:
Required DNS Records
- API Endpoints:
  - api.<cluster-name>.<domain> → Load balancer VIP
  - api-int.<cluster-name>.<domain> → Internal API access
- Application Ingress:
  - *.apps.<cluster-name>.<domain> → Ingress load balancer VIP
- Node Records:
  - <hostname>.<cluster-name>.<domain> → Node IP addresses
- etcd Records (optional; not required in recent OpenShift 4.x releases):
  - etcd-<index>.<cluster-name>.<domain> → Control plane node IPs
DNS Configuration Example
# Example DNS zone configuration
$ORIGIN example.com.
$TTL 300
; Cluster API endpoints
api.cluster IN A 192.168.1.100
api-int.cluster IN A 192.168.1.100
; Application ingress wildcard
*.apps.cluster IN A 192.168.1.101
; Node records
master-0.cluster IN A 192.168.1.10
master-1.cluster IN A 192.168.1.11
master-2.cluster IN A 192.168.1.12
worker-0.cluster IN A 192.168.1.20
worker-1.cluster IN A 192.168.1.21
worker-2.cluster IN A 192.168.1.22
; etcd records
etcd-0.cluster IN A 192.168.1.10
etcd-1.cluster IN A 192.168.1.11
etcd-2.cluster IN A 192.168.1.12
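Once the zone is loaded, you can spot-check the records from a client or bastion host:

# Verify API and wildcard application records resolve to the expected VIPs
dig +short api.cluster.example.com
dig +short test.apps.cluster.example.com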
CoreDNS Configuration
OpenShift uses CoreDNS for internal cluster DNS resolution:
# Custom CoreDNS configuration
apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default
spec:
  servers:
  - name: example-server
    zones:
    - example.com
    forwardPlugin:
      upstreams:
      - 192.168.1.1
      - 192.168.1.2
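As with the ingress controller, the DNS operator object named default already exists and is edited in place. Afterwards, confirm the configuration and that the CoreDNS pods are healthy:

# Review the effective DNS operator configuration
oc get dns.operator/default -o yaml
# Check the CoreDNS (dns-default) pods
oc get pods -n openshift-dns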
Network Troubleshooting and Monitoring
Network Diagnostics Tools
OpenShift 4.18/4.19 provides various tools for network troubleshooting:
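Common starting points include the cluster network operator, the OVN-Kubernetes pods, and node-level debugging. A few illustrative commands:

# Check the overall health of the cluster network operator
oc get clusteroperator network
oc describe network.operator cluster
# Inspect the OVN-Kubernetes pods
oc get pods -n openshift-ovn-kubernetes
# Open a debug shell on a node to inspect interfaces and routes
oc debug node/<node-name> -- chroot /host ip addr
# Collect networking data for support cases
oc adm must-gather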
Network Monitoring and Metrics
Monitor network performance and health:
- Prometheus Metrics: Network performance metrics
- Flow Monitoring: Network traffic analysis
- Policy Violations: Network policy enforcement monitoring
- Ingress Metrics: Application traffic monitoring
Best Practices for OpenShift Networking
Network Design Principles
- Segmentation: Implement proper network segmentation for security
- Redundancy: Design for high availability and fault tolerance
- Performance: Optimize network configuration for workload requirements
- Security: Implement defense-in-depth networking security
Next Steps
Ready to explore optional features like OpenShift Virtualization and AI? Continue to Module 7: Optional Features.