Load Balancing, High Availability and Fail-Over of a Micro-Service Deployed in Two Separate Kubernetes Clusters: One Running in Oracle Kubernetes Engine (OKE) and the Other in Google Kubernetes Engine (GKE)

Oracle Cloud Edge Services

Oracle Cloud Infrastructure provides Edge Services, a group of services related to DNS, Health Checks, Traffic Management and WAF (Web Application Firewall).

In this episode we use DNS Zone Management, Traffic Management Steering Policies and Health Checks to load-balance and fail over a micro-service running in two different Kubernetes clusters, in two different regions and on two distinct cloud providers, giving a robust, load-balanced, active-active topology with disaster recovery built in.

Deploying the micro-service

Deploy the following to two different k8s clusters, such as OKE in two distinct regions, or OKE and GKE. As OKE and GKE are pretty much identical, we can use kubectl and the Kubernetes Dashboard in both of them, as we prefer:

k8s deployment in OKE visualised with Kubernetes kubectl
k8s deployment in OKE visualised with Kubernetes dashboard
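The manifests themselves are shown only in the screenshots above; a minimal sketch of what such a greeting Deployment and Service could look like (the image name, labels and port are assumptions, not the original values):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greet
spec:
  replicas: 2
  selector:
    matchLabels:
      app: greet
  template:
    metadata:
      labels:
        app: greet
    spec:
      containers:
      - name: greet
        image: yourrepo/greet:latest   # assumption: your own greeting image
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: greet
spec:
  type: LoadBalancer   # exposes a public IP in both OKE and GKE
  selector:
    app: greet
  ports:
  - port: 80
    targetPort: 3000
```

Apply the same manifest with kubectl against each cluster's context.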

It is a very simple service that greets you and says where it is running.

Greetings from OKE
Greetings from GKE
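The app's source is not listed here, but the endpoint's behaviour can be sketched as follows (illustrative Python; the SITE environment variable, set per cluster, is an assumption, the real app may derive its location differently):

```python
import os

def greet():
    # SITE would be injected per cluster (e.g. "OKE" or "GKE") -- an assumption
    return f"Greetings from {os.environ.get('SITE', 'unknown')}"
```

Each cluster answers with its own name, which is what makes the fail-over visible in the tests below.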

Configuring DNS

For this part of the setup we need a registered FQDN; we are using bigdatasport.org, a name I registered myself.

Let’s create domain entries in OCI. Create a DNS zone in OCI as follows:

Now, let’s grab the DNS servers, go to our registrar and change the DNS configuration so that it points to the Oracle DNS servers:

Verify the change:

Configuring Health Checks

Let’s create a Health Check that we’ll use later in the traffic management policy. Health checks are performed from vantage points external to OCI, running in Azure, Google or AWS; select your preferred choice.
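Conceptually, each vantage point probes the endpoint at a fixed interval, and the endpoint is flagged unhealthy after a run of failed probes. A toy model of that evaluation (illustrative Python, not OCI code; the consecutive-failure threshold semantics are an assumption, check your Health Check settings):

```python
def classify(probe_results, unhealthy_threshold=3):
    """probe_results: sequence of booleans, True = probe succeeded."""
    streak = 0
    for ok in probe_results:
        # reset the failure streak on any success
        streak = 0 if ok else streak + 1
        if streak >= unhealthy_threshold:
            return "unhealthy"
    return "healthy"
```

Scattered failures are tolerated; only a sustained outage flips the endpoint to unhealthy.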

Configuring Traffic Management Steering Policies

Let’s create a traffic management policy as follows:
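The net effect of the policy, with the health check attached, is that DNS only answers with healthy endpoints: with both clusters up, traffic is spread across OKE and GKE; if one fails its health check, it is simply dropped from the answer. A toy model of that behaviour (illustrative Python, not OCI code):

```python
def steer(endpoints):
    """endpoints: list of (name, healthy) pairs; returns the names DNS answers with."""
    # only healthy endpoints make it into the DNS answer
    return [name for name, healthy in endpoints if healthy]
```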

Testing it all

OK, all the tasks are done, so let’s test it!

Delete the deployment in OKE:

Go to the Traffic policy and verify that the OKE endpoint is unhealthy:

Go to your browser and request http://bigdatasport.org/greet; as you can see, the service is now served from GKE:

Redeploy in OKE again:

As you can see, the OKE service is running well again:

Now let’s delete the deployment in GKE:

Now the greeting is retrieved from OKE again:

And that’s all folks, hope it helps! 🙂

Oracle Kubernetes (OKE): Deploying a Custom Node.js Web Application Integrated with Identity Cloud Service for Unique Single Sign On (SSO) User Experience

In this post we are deploying a custom Node.js web application in Oracle Kubernetes Engine (OKE).

What we want to show is how to configure the custom web application in order to have a unified Single Sign-On (SSO) experience.

First part

Follow this tutorial, which explains how to enable SSO for the web app running locally.

Second part

Now we make a few small changes to deploy it on Kubernetes.

Create a Dockerfile in the nodejs folder of the cloned project with the following:
FROM oraclelinux:7-slim
# Install Node.js 11.x from NodeSource
RUN curl --silent --location https://rpm.nodesource.com/setup_11.x | bash -
RUN yum -y install nodejs npm --skip-broken
# Copy the app and install its dependencies
WORKDIR /app
ADD . /app
RUN npm install
EXPOSE 3000
CMD ["npm","start"]
Create a k8s service file (service.yaml) as follows:
apiVersion: v1
kind: Service
metadata:
  name: idcsnodeapp
spec:
  type: LoadBalancer
  selector:
    app: idcsnodeapp
  ports:
  - name: client
    protocol: TCP
    port: 3000
Deploy to k8s:
kubectl apply -f service.yaml
Grab the URL of the new external load-balancer service created in k8s and modify the file auth.js with the appropriate values for your cloud environment:
var ids = {
  oracle: {
    "ClientId": "client id of the IdCS app",
    "ClientSecret": "client secret of the IdCS app",
    "ClientTenant": "tenant id (idcs-xxxxxxxxxxxx)",
    "IDCSHost": "https://tenantid.identity.oraclecloud.com",
    "AudienceServiceUrl": "https://tenantid.identity.oraclecloud.com",
    "TokenIssuer": "https://identity.oraclecloud.com/",
    "scope": "urn:opc:idm:t.user.me openid",
    "logoutSufix": "/oauth2/v1/userlogout",
    "redirectURL": "http://k8sloadbalancerip:3000/callback"
  }
};
Build the container and push it to a repo you have write access to, for example:
docker build -t javiermugueta/idcsnodeapp .
docker push javiermugueta/idcsnodeapp
Modify the IdCS application with the public IP of the k8s load-balancer service
Create a k8s deployment file as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: idcsnodeapp
  labels:
    app: idcsnodeapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: idcsnodeapp
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: idcsnodeapp
    spec:
      containers:
      - image: javiermugueta/idcsnodeapp
        name: idcsnodeapp
        ports:
        - containerPort: 3000
          name: idcsnodeapp

Deploy to k8s
kubectl apply -f deployment.yaml
Test the app and verify SSO is working:


Hope it helps! 🙂


How to ssh to OKE (k8s) Private Node (worker compute node) via Jump Box (Bastion Server)

In OKE you typically create, for redundancy and high availability reasons, a k8s cluster spanning 5 or more subnets:

  • 2 are public; this is where the public load balancer is deployed, for example one in AD1 and the other in AD3
  • 3 or more are private; this is where the worker compute nodes are deployed, for example one subnet in AD1, another in AD2, another in AD3, and so on

If you need to reach one or more compute worker nodes for some reason, you can create a bastion server (jump box) with a public IP and then do the following:

First, open a tunnel from local port 2222 to port 22 of the worker node through the jump box:

ssh -i privateKey -N -L localhost:2222:k8scomputenode:22 opc@jumpboxpublicip

Then, in a second terminal, ssh through the tunnel:

ssh -i privateKey -p 2222 opc@localhost
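The two commands can also be collapsed into a single hop with a ProxyJump entry in ~/.ssh/config (the host alias, node name and key path below are examples):

```
Host k8snode
    HostName k8scomputenode
    User opc
    IdentityFile ~/.ssh/privateKey
    ProxyJump opc@jumpboxpublicip
```

With that in place, `ssh k8snode` connects straight through the bastion.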

Hope it helps! 🙂



Connecting to OCI DB System with SQLDeveloper via Bastion Box

Recipe for creating a secure connection between sqlDeveloper on our local machine and an Oracle Cloud Infrastructure DB System created in a private subnet of a Virtual Cloud Network that is not open to the internet.


  • Create a new DB System and grab the private IP of the database system node


  • Create a compute VM with public IP exposed
  • Open an ssh tunnel this way:
ssh -i privatekeyfile -N -L localhost:1521:dbnodeprivateip:1521 opc@jumpboxpublicip
  • Grab the database connection details


  • Create a connection in sqlDeveloper


  • Test the connection
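Before creating the connection in sqlDeveloper, you can check that the local end of the tunnel is actually listening (illustrative Python; host and port match the tunnel command above):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if something is listening at host:port (e.g. the ssh tunnel)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

With the tunnel up, `port_open("localhost", 1521)` should return True.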


Hope it helps! 🙂



Creating Route Rule for Oracle OCI VCN Remote Peering : InvalidParameter – routeRules[0].networkEntityId may not be null


When creating a route rule for VCN remote peering between 2 Virtual Cloud Networks in different regions in Oracle OCI using the web console, the UI does not provide a way to select the DRG:



Create the route rule with the CLI as follows:

oci network route-table create --compartment-id xxx --vcn-id yyy --route-rules '[{"cidrBlock":"","networkEntityId":"zzz"}]'

xxx is the OCID of the compartment in which you want to create the route rule
yyy is the OCID of the VCN for which you are creating the peering route rule
zzz is the OCID of the DRG

oci network route-table create --compartment-id ocid1.compartment.oc1..aaaaaaaa3sz43qrfhsjmbibsrc6e7c2ftlt53gfnzifvlow2yoz7hk3ni2jq --vcn-id ocid1.vcn.oc1.eu-frankfurt-1.aaaaaaaaukr3nzw44idcp2o75xzyt5nm6y2bvm5gtdg422p47av3knraggcq  --route-rules '[{"cidrBlock":"","networkEntityId":"ocid1.drg.oc1.eu-frankfurt-1.aaaaaaaafv4gkcdwyywxuzuh4izwehvttpvuvmvmaxlpdgg2berpsjkl5ivq"}]'

Hope it helps 🙂

BucketNotEmpty – Bucket named ‘xxxx’ is not empty. Delete all objects first


Oracle OCI Object Storage buckets can’t be deleted from the OCI dashboard unless they are empty… and no “empty bucket” menu option exists (at least at the time of this post).

Anyway, you can do it using the CLI… Follow these steps:

If you don’t have OCI CLI installed follow this post

oci os object bulk-delete -ns <identitydomainname> -bn <bucketname>
oci os bucket delete --bucket-name <bucketname>

Hope it helps! 😉


Deploying an Oracle Database with Persistence Enabled in Oracle Kubernetes Engine in Ten Minutes or Less

In a previous post I explained how to create the same thing using an image published in the Docker registry under my user. Well… that post no longer works because I deleted the image.

The method shown here is better because the deployment file pulls the official image published here.

Therefore you only have to create a user, accept the licence agreement (provided you agree to it) and follow the steps explained here.

Hope it helps! 😉