Web site deployed on 2 Kubernetes clusters in Frankfurt and Ashburn connected through an internal remote peering connection to a WebLogic+OracleDB backend deployed in Frankfurt

This is a 3-tier application: a web front end with lots of static content served by Apache, connected to a Java backend that exposes the services and application logic and is deployed on a WebLogic cluster, which in turn connects to an Oracle Autonomous Database for data storage and other business logic. The WebLogic cluster is deployed on Kubernetes using the WebLogic Kubernetes Operator.

The application layers are deployed in private subnets of a virtual cloud network. A public load balancer provides access to the apache layer.

In this post, we distribute the front web layer across two cloud regions, in separate Kubernetes clusters. In addition, we connect both web front ends to a WebLogic cluster running in one region and connected to a database. The connection from the remote web front end to the WebLogic cluster is made through a remote peering connection.

In a typical Kubernetes deployment, the worker nodes are in a private subnet and the load balancers are in a public subnet. This way, when we create a LoadBalancer service in Kubernetes, it is exposed as a public cloud load balancer in the cloud infrastructure.
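As a sketch of that default behavior (the names and ports here are illustrative, not taken from the actual deployment), a plain LoadBalancer service for the Apache pods would look like this; with no extra annotations, the cloud provider provisions a public load balancer for it:

```yaml
# Illustrative example: a plain LoadBalancer service for the Apache front end.
# Without any provider-specific annotations, this results in a *public*
# cloud load balancer being created.
apiVersion: v1
kind: Service
metadata:
  name: apache-frontend        # illustrative name
  namespace: web               # illustrative namespace
spec:
  type: LoadBalancer
  selector:
    app: apache                # must match the labels on the Apache pods
  ports:
    - port: 80
      targetPort: 8080
```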

But in our architecture, we want to connect the web front ends through a peering that links the cloud regions internally, so we need the services to be exposed in private subnets.

So, the interesting thing here is that the Kubernetes clusters are completely internal and hence not reachable from the internet. For that reason we have put a public load balancer on top of the architecture in each region, so that the web sites can be reached from the internet.

In order to reach the WebLogic cluster from all the front web deployments, we expose the WebLogic cluster service as type LoadBalancer and, because the Kubernetes cluster's load balancer network is private, a private load balancer will be created in the cloud infrastructure.

To achieve this, we edit the Kubernetes service (or create the deployment) with a couple of annotations, as follows:

The first annotation tells the engine that the cloud load balancer is going to be internal; the second is the OCID of the subnet in which the Kubernetes cluster will create the cloud load balancer.

    metadata:
      annotations:
        service.beta.kubernetes.io/oci-load-balancer-internal: "true"
        service.beta.kubernetes.io/oci-load-balancer-subnet1: ocid1.subnet.oc1.eu-frankfurt-1.aaaaaaaa3i...uhizfa
        weblogic.sha256: fe2607c...c2625
      creationTimestamp: "2020-03-29T16:33:09Z"
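One way to apply these annotations to an existing service is with kubectl (the service and namespace names below are illustrative, not from the actual deployment):

```shell
# Illustrative: annotate the existing WebLogic service so OCI provisions
# a private load balancer in the given subnet.
kubectl -n weblogic annotate service weblogic-cluster-service \
  service.beta.kubernetes.io/oci-load-balancer-internal="true" \
  service.beta.kubernetes.io/oci-load-balancer-subnet1="ocid1.subnet.oc1.eu-frankfurt-1.aaaaaaaa3i...uhizfa" \
  --overwrite

# Check the EXTERNAL-IP column: it should show a private address
# from that subnet rather than a public IP.
kubectl -n weblogic get service weblogic-cluster-service
```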

Topology in the Frankfurt region

As the Apache front end and the WebLogic cluster are deployed in different namespaces of the same Kubernetes cluster, the connection between them goes through the WebLogic service's ClusterIP, internal to the Kubernetes cluster.
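Concretely, the Apache pods can reach the WebLogic service across namespaces through its in-cluster DNS name. This is a sketch only: the service name, namespace, port, and context path below are illustrative, and it assumes Apache proxies requests with mod_proxy:

```apache
# Illustrative Apache mod_proxy fragment: the Apache pods in one namespace
# reach the WebLogic cluster service in the "weblogic" namespace through
# the ClusterIP service's in-cluster DNS name.
<Location /app>
    ProxyPass        http://weblogic-cluster-service.weblogic.svc.cluster.local:8001/app
    ProxyPassReverse http://weblogic-cluster-service.weblogic.svc.cluster.local:8001/app
</Location>
```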

Topology in Ashburn region

As both regions are peered and traffic between them is routed, the Apache pods in Ashburn connect to the private cloud load balancer network of the Frankfurt cluster.
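In Ashburn there is no in-cluster service to target, so the Apache pods would instead point at the private load balancer address created for the WebLogic service in Frankfurt. Again a sketch under the same mod_proxy assumption; the IP, port, and path are placeholders:

```apache
# Illustrative: Ashburn Apache pods proxy to the *private* load balancer
# created for the WebLogic service in Frankfurt; traffic flows over the
# remote peering connection. 10.0.2.15 is a placeholder address.
<Location /app>
    ProxyPass        http://10.0.2.15:8001/app
    ProxyPassReverse http://10.0.2.15:8001/app
</Location>
```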

So far, so good. We have connected the two web front ends in different regions to the same backend in one of the regions.

Hope it helps and stay home! 🙂
