Kubernetes : Use External Storage
2023/10/20
Configure Persistent Storage in Kubernetes Cluster.
This example is based on the following environment.

+----------------------+          +----------------------+
|  [ mgr.srv.world ]   |          |  [ dlp.srv.world ]   |
|     Manager Node     |          |    Control Plane     |
+-----------+----------+          +-----------+----------+
        eth0|10.0.0.25                    eth0|10.0.0.30
            |                                 |
------------+---------------------------------+-----------
            |                                 |
        eth0|10.0.0.51                    eth0|10.0.0.52
+-----------+----------+          +-----------+----------+
| [ node01.srv.world ] |          | [ node02.srv.world ] |
|    Worker Node#1     |          |    Worker Node#2     |
+----------------------+          +----------------------+
For example, configure the cluster so that Pods can use an NFS share on the NFS server [nfs.srv.world (10.0.0.35)] as external storage.
[1] Configure NFS Server, refer to here.
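The linked page covers the NFS server setup in detail. As a minimal sketch only, assuming an RHEL-family OS and the share path [/home/nfsshare] used in the PV definition below, the export on [nfs.srv.world] and the NFS client package on the Worker Nodes could look like this (the export options and the [10.0.0.0/24] client range are assumptions, adjust to your environment):

# on nfs.srv.world : export the share directory (minimal example)
[root@nfs ~]# mkdir -p /home/nfsshare
[root@nfs ~]# echo "/home/nfsshare 10.0.0.0/24(rw,no_root_squash)" >> /etc/exports
[root@nfs ~]# systemctl enable --now nfs-server
[root@nfs ~]# exportfs -ra

# on all Worker Nodes : NFS client tools are required so that kubelet can mount the share
[root@node01 ~]# dnf -y install nfs-utils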
[2] Create a PV (Persistent Volume) object and a PVC (Persistent Volume Claim) object.
[root@mgr ~]# vi nfs-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  # any PV name
  name: nfs-pv
spec:
  capacity:
    # storage size
    storage: 10Gi
  accessModes:
    # Access Modes:
    # - ReadWriteMany (RW from multi nodes)
    # - ReadWriteOnce (RW from a node)
    # - ReadOnlyMany (R from multi nodes)
    - ReadWriteMany
  persistentVolumeReclaimPolicy:
    # retain even if pods terminate
    Retain
  nfs:
    # NFS server definition
    path: /home/nfsshare
    server: 10.0.0.35
    readOnly: false

[root@mgr ~]# kubectl create -f nfs-pv.yml
persistentvolume "nfs-pv" created
[root@mgr ~]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs-pv   10Gi       RWX            Retain           Available                                   5s
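Note that with [persistentVolumeReclaimPolicy: Retain], the PV keeps its data and moves to [Released] (not back to [Available]) when its claim is deleted later. One commonly used way to make such a PV bindable again, shown here only as an illustrative sketch, is to clear the stale claim reference:

# clear the stale claimRef so a Released PV can be bound by a new PVC
[root@mgr ~]# kubectl patch pv nfs-pv -p '{"spec":{"claimRef": null}}'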
[root@mgr ~]# vi nfs-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # any PVC name
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

[root@mgr ~]# kubectl create -f nfs-pvc.yml
persistentvolumeclaim "nfs-pvc" created
[root@mgr ~]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc   Bound    nfs-pv   10Gi       RWX                           4s
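The PVC is bound to [nfs-pv] because the requested access mode and capacity are satisfied by that PV. If a PVC ever stays in [Pending], its events usually show the reason, for example:

# show binding details and recent events for the claim
[root@mgr ~]# kubectl describe pvc nfs-pvc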
[3] Create Pods that use the PVC above.
[root@mgr ~]# vi nginx-nfs.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  # any Deployment name
  name: nginx-nfs
  labels:
    name: nginx-nfs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-nfs
  template:
    metadata:
      labels:
        app: nginx-nfs
    spec:
      containers:
      - name: nginx-nfs
        image: nginx
        ports:
        - name: web
          containerPort: 80
        volumeMounts:
        - name: nfs-share
          # mount point in container
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nfs-share
        persistentVolumeClaim:
          # PVC name you created
          claimName: nfs-pvc
[root@mgr ~]# kubectl apply -f nginx-nfs.yml
deployment.apps/nginx-nfs created
[root@mgr ~]# kubectl get pods -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP                NODE               NOMINATED NODE   READINESS GATES
nginx-nfs-6975cff5d6-22vzd   1/1     Running   0          75s   192.168.241.135   node02.srv.world   <none>           <none>
nginx-nfs-6975cff5d6-jwzrn   1/1     Running   0          75s   192.168.40.194    node01.srv.world   <none>           <none>
nginx-nfs-6975cff5d6-zn56r   1/1     Running   0          75s   192.168.241.136   node02.srv.world   <none>           <none>
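All three Pods mount the same NFS share. As an optional check, the following should list [10.0.0.35:/home/nfsshare] on the nginx document root (use a Pod name from your own [kubectl get pods] output):

# verify the NFS share is mounted at the nginx document root inside a container
[root@mgr ~]# kubectl exec nginx-nfs-6975cff5d6-22vzd -- df -h /usr/share/nginx/html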
[root@mgr ~]# kubectl expose deployment nginx-nfs --type="NodePort" --port 80
service/nginx-nfs exposed
[root@mgr ~]# kubectl port-forward service/nginx-nfs --address 127.0.0.1 80:80 &
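Because the Service type is [NodePort], it is also reachable on each node's IP at an automatically assigned port (by default in the 30000-32767 range). If you want to access it that way instead of the port-forward above, look the port up first:

# confirm the NodePort assigned to the Service
[root@mgr ~]# kubectl get service nginx-nfs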
# create a test file under the NFS share
[root@mgr ~]# kubectl exec nginx-nfs-6975cff5d6-22vzd -- sh -c "echo 'NFS Persistent Storage Test' > /usr/share/nginx/html/index.html"
# verify access
[root@mgr ~]# curl localhost
Handling connection for 80
NFS Persistent Storage Test
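Since the content lives on the NFS share, it persists independently of the Pods. As an optional check (assuming root access on [nfs.srv.world]), the file written from the Pod should be visible directly on the share:

# on the NFS server : the file created from the Pod exists on the exported directory
[root@nfs ~]# cat /home/nfsshare/index.html
NFS Persistent Storage Test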