When you deploy a pod in Kubernetes that depends on a persistent volume backed by block storage (as in this post, for example), the volume is attached to a specific node. If that node fails or is stopped, the pods that ran on it cannot be recreated on another node according to their replication policies, because the other nodes do not have the disk attached.
Oracle Cloud Infrastructure (OCI) File Systems provide shared storage that you can easily expose to your pods for those use cases where shared persistent data is needed.
Of course, you could still mount the disk on every node yourself, but that is not a good approach because there is a better way to achieve it.
So, let’s get started!
Go to the OCI dashboard, create a new File System and a new Export called /myexport. Click on the mount target link and take note of the File System IP address.
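If you prefer to script this step, a rough equivalent with the OCI CLI could look like the following (the OCIDs, availability domain, and display names are placeholders you must supply; treat the exact flags as a hedged sketch):

# Create the file system (placeholders: compartment OCID and availability domain)
oci fs file-system create --compartment-id <compartment-ocid> --availability-domain <ad-name> --display-name k8spod2nfs-fs

# Create a mount target in one of your subnets; note its export set OCID and private IP
oci fs mount-target create --compartment-id <compartment-ocid> --availability-domain <ad-name> --subnet-id <subnet-ocid> --display-name k8spod2nfs-mt

# Expose the file system through the export /myexport
oci fs export create --export-set-id <export-set-ocid> --file-system-id <filesystem-ocid> --path /myexport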
Download and deploy the following yaml (a sketch of its contents appears after this list); it creates:
- a persistentVolume
- a persistentVolumeClaim
- a deployment with 3 replicas of a container image with nginx in it
- a service of type LoadBalancer with a public IP and a round-robin policy
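In case the original yaml is not at hand, a minimal sketch of what k8spod2nfs.yaml contains could look like this (the mount target IP 10.0.0.10 and the 50Gi capacity are assumptions; replace the IP with the File System IP you noted earlier):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8spod2nfs-pv
spec:
  capacity:
    storage: 50Gi          # assumption; NFS does not enforce this size
  accessModes:
    - ReadWriteMany        # shared read-write across nodes
  nfs:
    server: 10.0.0.10      # your File System mount target IP (assumption)
    path: /myexport
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: k8spod2nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""     # bind to the pre-created PV, not a dynamic class
  resources:
    requests:
      storage: 50Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8spod2nfs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: k8spod2nfs
  template:
    metadata:
      labels:
        app: k8spod2nfs
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nfs-volume
              mountPath: /usr/share/nginx/html   # nginx serves the shared files
      volumes:
        - name: nfs-volume
          persistentVolumeClaim:
            claimName: k8spod2nfs-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: k8spod2nfs
spec:
  type: LoadBalancer       # OCI provisions a public load balancer (round-robin by default)
  selector:
    app: k8spod2nfs
  ports:
    - port: 80
      targetPort: 80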
kubectl apply -f k8spod2nfs.yaml
Get a list of the pods and “ssh” into one of them:
kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
k8spod2nfs-xxx   1/1     Running   0          99s
k8spod2nfs-yyy   1/1     Running   0          99s
k8spod2nfs-zzz   1/1     Running   0          99s
kubectl exec -it k8spod2nfs-xxx -- bash
Go to the /usr/share/nginx/html directory and create or edit a file called index.html:
cd /usr/share/nginx/html/
echo hola > index.html
You can also “ssh” into another pod and verify that you see the same file.
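For example, using the second pod from the listing above:

kubectl exec -it k8spod2nfs-yyy -- cat /usr/share/nginx/html/index.html
hola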
Now get the list of services and grab the public IP of the k8spod2nfs service
kubectl get services
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
k8spod2nfs   LoadBalancer   10.96.128.9   x.y.z.t       80:31014/TCP   10m
kubernetes   ClusterIP      10.96.0.1     <none>        443/TCP        7d4h
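If you want just the external IP (for scripting), a jsonpath query does the trick:

kubectl get service k8spod2nfs -o jsonpath='{.status.loadBalancer.ingress[0].ip}'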
Go to http://yourserviceip in your browser; you should see the contents of index.html.
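Or check it from the command line with curl (yourserviceip being the EXTERNAL-IP above):

curl http://yourserviceip
hola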
Change the content of the index.html file (>> appends a new line rather than overwriting):
echo adios >> index.html
Go to http://yourserviceip in your browser again; the data has changed.
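With curl you can see exactly why: >> appended a second line instead of overwriting the first one:

curl http://yourserviceip
hola
adios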
Delete all pods of the k8spod2nfs deployment and wait until at least one of them is recreated:
kubectl delete pod -l app=k8spod2nfs
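You can watch the replacement pods come up with:

kubectl get pods -l app=k8spod2nfs -w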
Go to http://yourserviceip in your browser again; the data is still there!
Delete the deployment and create it again:
kubectl delete -f k8spod2nfs.yaml
kubectl apply -f k8spod2nfs.yaml
Wait until at least one of the pods is ready and go to http://yourserviceip in your browser again; the data is still there!
As you can see, the data is shared across all pods and is persistent! (unless you delete it or destroy the OCI File System)
Hope it helps! 🙂