In this episode we create two independent OKE clusters, one in Frankfurt and the other in Phoenix, and then we create a File System in Frankfurt (a kind of managed NFS server) that will act as the repository for a shared persistent volume reachable by all the pods of a deployment running in both clusters.
Remote Peering
Oracle Cloud Infrastructure networking provides "Remote Peering", which allows connecting networks (Virtual Cloud Networks, VCNs) located in two different cloud regions.
Peering the Two VCNs
Let’s create one VCN in Frankfurt and another in a second region, Phoenix in my case.
IMPORTANT: the VCN CIDRs must not overlap!
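A quick way to double-check that before creating anything. This is a minimal sketch using the 10.0.0.0/16 (Frankfurt) and 192.168.0.0/16 (Phoenix) ranges that appear later in this post; swap in your own CIDRs:

```shell
# Check that the Frankfurt and Phoenix VCN CIDRs do not overlap.
# 10.0.0.0/16 (fra) and 192.168.0.0/16 (phx) are the ranges used in this post.
python3 -c '
import ipaddress
fra = ipaddress.ip_network("10.0.0.0/16")
phx = ipaddress.ip_network("192.168.0.0/16")
print("OVERLAP" if fra.overlaps(phx) else "OK")
'
```

If it prints OVERLAP, pick a different range before going any further.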


Now create a DRG in Frankfurt, then create a Remote Peering Connection (RPC):

Do the same in Phoenix and grab the OCID of the new RPC; we’ll need it in the next step:

Come back to the RPC in Frankfurt, click [Establish Connection], select the region and paste the OCID of the remote RPC:

After a while you should see the status PEERED in both RPCs:


Now, attach the VCN to the DRG on both sides:
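For reference, the console steps above (DRG, RPC, connection, attachment) can also be scripted with the OCI CLI. This is only a sketch with placeholder OCIDs and display names of my choosing; verify the subcommands against your CLI version:

```shell
# Sketch only: placeholder OCIDs. Run the fra commands against eu-frankfurt-1
# and repeat the equivalent ones with --region us-phoenix-1 for the phx side.

# 1) Create a DRG and an RPC on it (repeat in the other region).
oci network drg create --compartment-id "$COMPARTMENT_OCID" --display-name fra-drg
oci network remote-peering-connection create \
  --compartment-id "$COMPARTMENT_OCID" --drg-id "$FRA_DRG_OCID" --display-name fra-rpc

# 2) From the Frankfurt RPC, establish the connection to the Phoenix RPC by OCID.
oci network remote-peering-connection connect \
  --remote-peering-connection-id "$FRA_RPC_OCID" \
  --peer-id "$PHX_RPC_OCID" --peer-region-name us-phoenix-1

# 3) Attach each VCN to its DRG (repeat in the other region).
oci network drg-attachment create --drg-id "$FRA_DRG_OCID" --vcn-id "$FRA_VCN_OCID"
```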

So far, so good! The two VCNs are peered; now let’s make the networks able to reach each other. How? By routing! We are going to create the routes with the OCI CLI (because at the time of this writing I wasn’t able to create them with the GUI). First, grab the following info from both regions:
- compartment-OCID
- vcn-OCID
- drg-OCID
Now let’s create a route from fra to phx:
oci network route-table create --compartment-id [yourcompartmentocid] --vcn-id [yourvcnocid] --route-rules '[{"cidrBlock":"192.168.0.0/16","networkEntityId":"[yourdrgocid]"}]'
And now from phx to fra:
oci network route-table create --compartment-id [yourcompartmentocid] --vcn-id [yourvcnocid] --route-rules '[{"cidrBlock":"10.0.0.0/16","networkEntityId":"[yourdrgocid]"}]' --region us-phoenix-1
Please note the CIDR block parameter, which in this case is the CIDR of the whole remote VCN, because we want to route to all the subnets on the other side. The routes created look like this:

Now we must modify the route tables created in each region and add a rule so that the nodes in the private subnet can reach the internet via a NAT gateway; otherwise k8s can’t reach the Docker container repo I’m using. (Create one NAT gateway on each side if you haven’t already done it.)
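That update can be sketched via the CLI too (placeholder OCIDs; note that `route-table update` replaces the whole rule list, so include both the DRG rule and the NAT rule):

```shell
# Frankfurt side sketch: keep the DRG rule for the remote VCN and add a
# default route through the NAT gateway for the private worker nodes.
oci network route-table update --rt-id "$FRA_ROUTE_TABLE_OCID" --route-rules '[
  {"cidrBlock":"192.168.0.0/16","networkEntityId":"'"$FRA_DRG_OCID"'"},
  {"cidrBlock":"0.0.0.0/0","networkEntityId":"'"$FRA_NAT_GW_OCID"'"}
]'
```

Repeat on the Phoenix side with `--region us-phoenix-1`, swapping in 10.0.0.0/16 and the Phoenix DRG and NAT gateway OCIDs.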

Now we must assign the route tables created in each region to the private subnets of each VCN:
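The same assignment can be done from the CLI; a sketch with placeholder OCIDs:

```shell
# Point the private subnet at the route table created above (repeat per region).
oci network subnet update --subnet-id "$PRIVATE_SUBNET_OCID" \
  --route-table-id "$FRA_ROUTE_TABLE_OCID"
```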

Now create a K8S cluster in each region (use the custom option, because you must select the VCNs you created previously).
Now follow this post to create the shared file system in Frankfurt.
One more thing: configure the security lists to allow NFS traffic. The NFS ports are:
UDP: 111,1039,1047,1048,2049
TCP: 111,1039,1047,1048,2049
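The ingress rules for those ports can be generated rather than typed by hand. A small sketch that builds the JSON you would pass to `oci network security-list update --ingress-security-rules`; the source CIDR here is an assumption (use the peer VCN’s CIDR for each side):

```shell
# Build ingress rules for the NFS ports (TCP and UDP) from the peer VCN CIDR.
# The generated JSON is shaped for: oci network security-list update
PEER_CIDR="10.0.0.0/16"   # assumption: Frankfurt VCN CIDR; adjust per side
RULES=$(python3 - "$PEER_CIDR" <<'PY'
import json, sys
cidr = sys.argv[1]
rules = []
for port in (111, 1039, 1047, 1048, 2049):
    for proto in ("6", "17"):  # 6 = TCP, 17 = UDP (IANA protocol numbers)
        opts = "tcpOptions" if proto == "6" else "udpOptions"
        rules.append({"source": cidr, "protocol": proto,
                      opts: {"destinationPortRange": {"min": port, "max": port}}})
print(json.dumps(rules))
PY
)
echo "$RULES" | python3 -m json.tool > /dev/null && echo "rules JSON OK"
```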

Soo faaar sooo goood! We have the VCN networks peered, DRGs created, VCNs attached to the DRGs, routes created, NFS traffic allowed, storage ready and the k8s clusters created!
Finally, deploy this to both k8s clusters. NOTE: modify the yaml with the specific IP and export path of your own File System:

kubectl apply -f k8spod2nfs.yaml
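For orientation, here is a minimal sketch of what such a manifest usually contains. This is NOT the exact k8spod2nfs.yaml from the repo; the names, the 10.0.1.100 server IP and the /export path are placeholders you must replace with your File System’s mount target IP and export:

```shell
# Sketch of an NFS-backed deployment manifest (placeholder IP/export/names).
cat > k8spod2nfs-sketch.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteMany"]
  nfs:
    server: 10.0.1.100   # placeholder: your mount target private IP
    path: /export        # placeholder: your file system export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8spod2nfs
spec:
  replicas: 2
  selector:
    matchLabels: {app: k8spod2nfs}
  template:
    metadata:
      labels: {app: k8spod2nfs}
    spec:
      containers:
      - name: nginx
        image: nginx
        ports: [{containerPort: 80}]
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html   # nginx serves the shared files
      volumes:
      - name: html
        persistentVolumeClaim: {claimName: nfs-pvc}
---
apiVersion: v1
kind: Service
metadata:
  name: k8spod2nfs
spec:
  type: LoadBalancer
  selector: {app: k8spod2nfs}
  ports: [{port: 80, targetPort: 80}]
EOF
grep -q "server: 10.0.1.100" k8spod2nfs-sketch.yaml && echo "sketch written"
```

Because both clusters mount the same NFS export over the peered VCNs, the two deployments end up serving the same /usr/share/nginx/html content.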
Now exec into one of the pods in the Phoenix cluster and verify that you can see the shared content. Then modify the index.html file and dump its contents. Finally, get the public IP of the service created:
MacBook-Pro:k8spod2nfs javiermugueta$ kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
k8spod2nfs-6c6665479f-b6q8j   1/1     Running   0          1m
k8spod2nfs-6c6665479f-md5s5   1/1     Running   0          1m
MacBook-Pro:k8spod2nfs javiermugueta$ kubectl exec -it k8spod2nfs-6c6665479f-b6q8j bash
root@k8spod2nfs-6c6665479f-b6q8j:/# cd /usr/share/nginx/html/
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# ls
file.txt  index.html  index.html.1
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# echo hi >> index.html
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# cat index.html
hola
adios
hi
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# exit
MacBook-Pro:k8spod2nfs javiermugueta$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
k8spod2nfs   LoadBalancer   10.96.133.154   129.146.208.7   80:31728/TCP   8m
kubernetes   ClusterIP      10.96.0.1       <none>          443/TCP        48m
Open a browser and enter the public IP of the service:

Now get the public IP of the service created in Frankfurt, open a browser and see what happens:

It shows the same content, awesome!!
And last, just in case you don’t trust it, change the content of index.html again from a pod on the Frankfurt side:
MacBook-Pro:k8spod2nfs javiermugueta$ kubectl exec -it k8spod2nfs-6f48c6464f-2447d bash
root@k8spod2nfs-6f48c6464f-2447d:/# cd /usr/share/nginx/html/
root@k8spod2nfs-6f48c6464f-2447d:/usr/share/nginx/html# echo bye >> index.html

It seems it’s working fine, haha!!
That’s all folks! 🙂