WebLogic Kubernetes Operator: Deploying a Java App in a WebLogic Domain on Oracle Kubernetes Engine (OKE) in 30 Minutes


WebLogic Kubernetes Operator provides a way of running WLS domains in a k8s cluster.

In this post we walk through the steps of the tutorial you can find in the documentation here. So let’s get started!

What you need:

  • a k8s cluster
  • kubectl
  • maven
  • git
  • docker
  • 60 minutes
Clone the operator repo and pull the required images:

git clone https://github.com/oracle/weblogic-kubernetes-operator

docker login

docker pull oracle/weblogic-kubernetes-operator:2.2.0

docker pull traefik:1.7.6

For the next step, if you don’t have an account, go to https://container-registry.oracle.com and register yourself:

docker login container-registry.oracle.com 

docker pull container-registry.oracle.com/middleware/weblogic:12.2.1.3

K8s uses role-based access control (RBAC); create a cluster role binding so that Helm can install the charts:

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: helm-user-cluster-admin-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF

Traefik is a router that we will use as the ingress controller. Install it with the sample values file from the cloned repo:

helm install stable/traefik \
   --name traefik-operator \
   --namespace traefik \
   --values kubernetes/samples/charts/traefik/values.yaml  \
   --set "kubernetes.namespaces={traefik}" \
   --wait
If you prefer to customise the values, create your own values.yaml and install with that instead:

cat <<EOF > values.yaml
serviceType: NodePort
service:
  nodePorts:
    http: "30305"
    https: "30443"
dashboard:
  enabled: true
  domain: traefik.example.com
rbac:
  enabled: true
ssl:
  enabled: true
  #enforced: true
  #upstream: true
  #insecureSkipVerify: false
  tlsMinVersion: VersionTLS12
EOF

helm install stable/traefik \
   --name traefik-operator \
   --namespace traefik \
   --values values.yaml \
   --set "kubernetes.namespaces={traefik}" \
   --wait

Create a namespace and a service account for the operator:

kubectl create namespace sample-weblogic-operator-ns

kubectl create serviceaccount -n sample-weblogic-operator-ns sample-weblogic-operator-sa
cd weblogic-kubernetes-operator/

helm install kubernetes/charts/weblogic-operator \
   --name sample-weblogic-operator \
   --namespace sample-weblogic-operator-ns \
   --set image=oracle/weblogic-kubernetes-operator:2.2.0 \
   --set serviceAccount=sample-weblogic-operator-sa \
   --set "domainNamespaces={}" \
   --wait
kubectl create namespace sample-domain1-ns

helm upgrade \
   --reuse-values \
   --set "domainNamespaces={sample-domain1-ns}" \
   --wait \
   sample-weblogic-operator \
   kubernetes/charts/weblogic-operator
 
helm upgrade \
   --reuse-values \
   --set "kubernetes.namespaces={traefik,sample-domain1-ns}" \
   --wait \
   traefik-operator \
   stable/traefik

Creating the WLS domain image. First, create the Kubernetes secret with the domain credentials:

kubernetes/samples/scripts/create-weblogic-domain-credentials/create-weblogic-credentials.sh \
   -u weblogic -p welcome1 -n sample-domain1-ns -d sample-domain1
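The script creates a secret named [domainUID]-weblogic-credentials; a quick sanity check (not part of the official steps):

kubectl get secret sample-domain1-weblogic-credentials -n sample-domain1-ns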

Tag the WebLogic docker image and push it to a registry:

docker images

docker tag container-registry.oracle.com/middleware/weblogic:12.2.1.3 javiermugueta/weblogic:12.2.1.3

docker push javiermugueta/weblogic:12.2.1.3

NOTE: Remember to make this image private in the registry!!! As a recommended option, please follow the steps here to push to the private registry offered by Oracle.

Now let’s make a copy of the yaml inputs file and set the appropriate values:

cp kubernetes/samples/scripts/create-weblogic-domain/domain-home-in-image/create-domain-inputs.yaml .

mv create-domain-inputs.yaml mycreate-domain-inputs.yaml

vi mycreate-domain-inputs.yaml

(change the values in lines #16, #57, #65, #70, #104 and #107 appropriately). Here is the one I used, in case it helps.
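For orientation, a sketch of the kind of fields I changed (field names as they appear in the 2.2.0 sample inputs file; the values below are mine, adapt them to your environment):

# excerpt of mycreate-domain-inputs.yaml
domainUID: sample-domain1
weblogicCredentialsSecretName: sample-domain1-weblogic-credentials
exposeAdminNodePort: true
namespace: sample-domain1-ns
domainHomeImageBase: container-registry.oracle.com/middleware/weblogic:12.2.1.3
image: javiermugueta/weblogic:12.2.1.3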

And now let’s create the domain with the image:

cd kubernetes/samples/scripts/create-weblogic-domain/domain-home-in-image

./create-domain.sh -i ~/Downloads/weblogic-kubernetes-operator/mycreate-domain-inputs.yaml -o ~/Downloads/weblogic-kubernetes-operator/output -u weblogic -p welcome1 -e

Verify that everything is working:

kubectl get po -n sample-domain1-ns

kubectl get svc -n sample-domain1-ns

Change the type of the cluster and adminserver services to LoadBalancer:

kubectl edit svc/sample-domain1-cluster-cluster-1 -n sample-domain1-ns

kubectl edit svc/sample-domain1-admin-server-external -n sample-domain1-ns
(kubectl edit opens the service manifest in vi; change spec.type to LoadBalancer and save.)
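If you prefer a non-interactive alternative, the same change can be made with kubectl patch (a sketch using the service names above):

kubectl patch svc sample-domain1-cluster-cluster-1 -n sample-domain1-ns -p '{"spec":{"type":"LoadBalancer"}}'

kubectl patch svc sample-domain1-admin-server-external -n sample-domain1-ns -p '{"spec":{"type":"LoadBalancer"}}'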

Verify and write down the public IPs of the AdminServer external service and the cluster service:

kubectl get svc -n sample-domain1-ns

Create a simple java app and package it:

mvn archetype:generate -DgroupId=javiermugueta.blog -DartifactId=java-web-project -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false

mvn package

Open a browser, log in to the WLS AdminServer console and deploy your app (use the public IP of the AdminServer service):

Open a new browser tab and test the app (use the public IP of the WLS cluster service):

That’s all folks, hope it helps!! 🙂

Shared Disk Seen by Pods Deployed in two Independent OKE Clusters across two Cloud Regions | Remote Network Peering


In this episode we create 2 independent OKE clusters, one in Frankfurt and the other in Phoenix, and then a File System in Frankfurt (a kind of NFS server) that acts as the repository for a shared persistent volume reachable by all the pods of a deployment deployed to both clusters.

Remote Peering

Oracle Cloud Infrastructure networking provides “Remote Peering”, which allows connecting networks (Virtual Cloud Networks, VCNs) in two different cloud regions.

Peering the 2 VCNs

Let’s create one VCN in Frankfurt and another in a different region, Phoenix in my case.

IMPORTANT: the VCN CIDRs must not overlap!

Now create a DRG in Frankfurt, then create a Remote Peering Connection (RPC):

Do the same in Phoenix and grab the OCID of the new RPC created; we’ll need it in the next step:

Come back to the RPC in Frankfurt, click [Establish Connection], select the region and paste the OCID of the remote RPC:

After a while you should see the status PEERED in both RPCs:

Now, attach the VCN to the DRG on both sides:

So far, so good! The 2 VCNs are peered; now let’s manage how the networks can reach each other. How? By routing them! We are going to create the routes with the OCI CLI (because at the time of writing I wasn’t able to create them with the GUI). Before that, grab the following info from both regions:

  • compartment-OCID
  • vcn-OCID
  • drg-OCID

Now let’s create a route from fra to phx:

oci network route-table create --compartment-id [yourcompartmentocid] --vcn-id [yourvcnocid] --route-rules '[{"cidrBlock":"192.168.0.0/16","networkEntityId":"[yourdrgocid]"}]'

And now from phx to fra:

oci network route-table create --compartment-id [yourcompartmentocid] --vcn-id [yourvcnocid] --route-rules '[{"cidrBlock":"10.0.0.0/16","networkEntityId":"[yourdrgocid]"}]' --region us-phoenix-1

Please note the CIDR block parameter, which in this case is the CIDR of the whole remote VCN, because we want to route to all the subnets on each side. The routes created look like this:

Now we must modify the routes created in each region and add a rule so that the nodes in the private subnet can reach the internet via a NAT gateway; otherwise k8s can’t reach the docker container repo I’m using (create one NAT gateway on each side in case you haven’t already):
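For reference, a hedged sketch of how such an update could look with the CLI (the route-table and NAT gateway OCIDs are placeholders, and --route-rules replaces the whole list, so the DRG rule must be included again):

oci network route-table update --rt-id [yourroutetableocid] --route-rules '[{"cidrBlock":"192.168.0.0/16","networkEntityId":"[yourdrgocid]"},{"cidrBlock":"0.0.0.0/0","networkEntityId":"[yournatgatewayocid]"}]'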

Now assign the route tables created in each region to each of the private subnets on each VCN:

Now create a K8S cluster on each region (use the custom option because you must select the VCN’s you have created previously).

Now follow this post to create the shared file system in Frankfurt.

One more thing: configure the security lists to allow NFS traffic. The NFS ports are:

UDP: 111,1039,1047,1048,2049
TCP: 111,1039,1047,1048,2049
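If you want to script this part too, a sketch of a single TCP ingress rule with the CLI (the security-list OCID is a placeholder; add one JSON object per port/protocol, and note that --ingress-security-rules replaces the existing list):

oci network security-list update --security-list-id [yoursecuritylistocid] --ingress-security-rules '[{"protocol":"6","source":"10.0.0.0/16","tcpOptions":{"destinationPortRange":{"min":2049,"max":2049}}}]'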

Soo faaar sooo goood! We have the VCN networks peered, DRGs created, VCNs attached to the DRGs, routes created, NFS traffic allowed, storage ready and the k8s clusters created!

Finally, deploy this to both k8s clusters. NOTE: modify the yaml with the specific IP and export values of your own File System:

kubectl apply -f k8spod2nfs.yaml
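In case you don’t have the yaml at hand, a minimal sketch of what k8spod2nfs.yaml does (the NFS server IP and export path are placeholders; the real sample is the one this post links to):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8spod2nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.0.2.7        # private IP of the File System mount target (placeholder)
    path: /k8spod2nfs       # export path (placeholder)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: k8spod2nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  volumeName: k8spod2nfs-pv
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8spod2nfs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: k8spod2nfs
  template:
    metadata:
      labels:
        app: k8spod2nfs
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html   # the shared NFS content nginx serves
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: k8spod2nfs-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: k8spod2nfs
spec:
  type: LoadBalancer
  selector:
    app: k8spod2nfs
  ports:
  - port: 80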

Now open a shell in one of the pods in the Phoenix cluster and verify you can see the shared content. Then modify the index.html file and dump the content of the file. Finally, get the public IP of the service created.

MacBook-Pro:k8spod2nfs javiermugueta$ kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
k8spod2nfs-6c6665479f-b6q8j   1/1     Running   0          1m
k8spod2nfs-6c6665479f-md5s5   1/1     Running   0          1m
MacBook-Pro:k8spod2nfs javiermugueta$ kubectl exec -it k8spod2nfs-6c6665479f-b6q8j bash
root@k8spod2nfs-6c6665479f-b6q8j:/# cd /usr/share/nginx/html/
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# ls
file.txt  index.html  index.html.1
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# echo hi >> index.html
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# cat index.html
hola
adios
hi
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# exit
MacBook-Pro:k8spod2nfs javiermugueta$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
k8spod2nfs   LoadBalancer   10.96.133.154   129.146.208.7   80:31728/TCP   8m
kubernetes   ClusterIP      10.96.0.1       <none>          443/TCP        48m

Open a browser and put the public IP of the service:

Now get the public IP of the service created in Frankfurt, open a browser and see what happens:

It shows the same content, awesome!!

And last, just in case you don’t trust it, change the content of index.html again from a pod on the Frankfurt side:

MacBook-Pro:k8spod2nfs javiermugueta$ kubectl exec -it k8spod2nfs-6f48c6464f-2447d bash
root@k8spod2nfs-6f48c6464f-2447d:/# cd /usr/share/nginx/html/
root@k8spod2nfs-6f48c6464f-2447d:/usr/share/nginx/html# echo bye >> index.html

It seems it’s working fine, haha!!

That’s all folks! 🙂

Scheduling with Developer Cloud Service (DevCS) the Start/Stop Sequence of an Oracle Analytics Cloud (OAC) Instance


Oracle Developer Cloud Service (DevCS) is a CI/CD environment provided by Oracle for cloud customers. It is almost free: you only pay for the storage and the compute utilised when jobs are run.

Oracle Analytics Cloud (OAC) is a powerful and proactive reporting tool.

Builds

Builds are the well-known tasks typical of Jenkins:

Start OAC Instance Job

Create a new build job, go to Configure → Steps and, from the [Add Step] dropdown listbox, select “PSMCli” and “Unix Shell”.

In the PSMCli step provide user, password, identity-domain id (the id, not the name), region and output format.

In the Unix Shell step put:

psm AUTOANALYTICSINST start-service -s [instancename]

Now click on the “Settings” icon (the gears), go to the Triggers tab and configure the scheduling utilising cron format:

Stop OAC Instance Job

Follow the same steps as above, except for the command syntax, which is:

psm AUTOANALYTICSINST stop-service -s [instancename]
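Both start-service and stop-service are asynchronous. If you want to check the state of the instance afterwards, something like this should work (a sketch; run psm AUTOANALYTICSINST help to confirm the exact syntax):

psm AUTOANALYTICSINST view-service -s [instancename]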

And if you need to execute jobs that depend on each other, you can utilise pipelines!

CLUE

Create a build step, type one of the following and see the output (example):

psm help
psm [servicetype] help

Hope it helps! 🙂

See also this post

Load Balancing, High Availability and Fail-Over of a Micro-Service Deployed in two Separated Kubernetes Clusters: one running in Oracle Kubernetes Engine (OKE) and the other in Google Kubernetes Engine (GKE)


Oracle Cloud Edge Services

Oracle Cloud Infrastructure provides Edge Services, a group of services related to DNS, Health Checks, Traffic Management and WAF (Web Application Firewall).

In this episode we are utilising DNS Zone Management, Traffic Management Steering Policies and Health Checks for load balancing and fail-over of a micro-service running in two different Kubernetes clusters, in two different regions and on distinct cloud providers, giving a robust, load-balanced, active-active and disaster-recovery topology.

Deploying the micro-service

Deploy the following to two different k8s clusters, such as OKE in two distinct regions, or OKE and GKE. As OKE and GKE are pretty much identical, we can use kubectl and the Kubernetes Dashboard in both of them as we prefer:

k8s deployment in OKE visualised with kubectl
k8s deployment in OKE visualised with Kubernetes dashboard

It is a very simple service that greets you and says where it is running.

Greetings from OKE
Greetings from GKE
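As we work against two clusters at once, it helps to keep both in the kubeconfig and switch contexts between deploys; the context names and the greeting.yaml manifest below are placeholders:

kubectl config get-contexts

kubectl config use-context oke-frankfurt

kubectl apply -f greeting.yaml

kubectl config use-context gke-cluster

kubectl apply -f greeting.yaml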

Configuring DNS

For this part of the setup we need a registered FQDN; we are using bigdatasport.org, a name registered by myself.

Let’s create domain entries in OCI. Create a DNS zone in OCI as follows:

Now, let’s grab the DNS servers, go to our registrar and change the DNS configuration so that it points to the Oracle DNS servers:

Verify the change:
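For instance, query the NS records from a terminal:

dig +short NS bigdatasport.org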

Configuring Health Checks

Let’s create a Health Check that we’ll use later in the traffic management. Health checks are performed from vantage points external to OCI, executed in Azure, Google or AWS; select your preferred choice.

Configuring Traffic Management Steering Policies

Let’s create a traffic management policy as follows:

Testing it all

Ok, we have all the tasks already done, let’s test it!

Delete the deployment in OKE:
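For instance (the deployment name is a placeholder for whatever you called yours):

kubectl delete deployment greeting-service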

Go to the Traffic policy and verify that the OKE endpoint is unhealthy:

Go to your browser and request http://bigdatasport.org/greet; as you can see, the service is now served from GKE:
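Instead of the browser, you can also poll the service from a terminal and watch the fail-over happen:

while true; do curl -s http://bigdatasport.org/greet; echo; sleep 5; done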

Redeploy in OKE again:

As you can see, the OKE service is running well again:

Now let’s delete the deployment in GKE:

Now the greeting is retrieved again from OKE:

And that’s all folks, hope it helps! 🙂

Integrating DevCS Notifications with Slack Using WebHooks


One interesting thing regarding CI and DevOps is the ability to be notified when things happen without having to log in to a web app every hour to see what happened.

Oracle Developer Cloud Service (DevCS, a CI/DevOps tool from Oracle Cloud) can be configured to send notifications to several channels; one of them is Slack:

Let’s have a look at how to configure it.

Slack Side Configuration

Ask your Slack administrators to allow you to install the “Incoming Webhooks” app.

Once you are allowed, install and configure “Incoming Webhooks”, selecting the Slack channel you want the notifications to be sent to, the name, icon, attachments and the like. Finally, grab the Webhook URL for later.
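You can sanity-check the webhook URL from a terminal before touching DevCS (the URL below is a placeholder for the one you just grabbed):

curl -X POST -H 'Content-type: application/json' --data '{"text":"Hello from a webhook test"}' https://hooks.slack.com/services/XXXX/YYYY/ZZZZ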

DevCS Side Configuration

Go to DevCS, select a project and go to Administration → Webhooks.

Create a new webhook. Put the URL created on the Slack side in the URL field, subscribe to the kinds of notifications you are interested in and click [Save]:

Click the [Test] button and verify a test message reaches the Slack channel:

Now launch a build, create/edit an announcement or do whatever task generates notifications, and verify the notification reaches the channel:

That’s all, hope it helps! 🙂

Implementing Deploy Steps in DevCS Builds


The classic approach for deploying artefacts to Oracle Cloud Java and Application Container instances was utilising “Deployments”:


There is a new option: now you can make deploys in build steps.

To do that, you must first create an “Environment”; watch this video to see how:


After that, go to one of your builds and add an “Oracle Deployment” step:


Now put the correct values, the same way you did in the “classic” deployment; for example, this is a setup for an ACCS deployment:


And that’s all, you are done!

Last but not least, remember that you can benefit from the PIPELINES feature and create visual workflows of steps of any kind (compile, test, deploy, …)


Hope it helps! 🙂


Moving a Git Repo from DevCS “classic” to DevCS OCI (in fact you can import whatever git repository you have access to)


I recommend moving to the DevCS OCI “flavour”; it has many advantages. Fortunately, it is easy to import the repos.

First, click on [+Create Repository]


Second, provide the information coming from the old repo and click [Create]; in a matter of seconds it is imported and you are done!


Hope it helps! 🙂