
Set Up an Ingress Controller in an OKE Cluster


Step 1: Set up the NGINX ingress controller

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml

Step 2: Expose the ingress controller as a service of type LoadBalancer (with a public IP)

kubectl apply -f https://raw.githubusercontent.com/javiermugueta/rawcontent/master/cloud-generic.yaml

Step 3: Run this command repeatedly until EXTERNAL-IP is no longer <pending>; note the IP for later testing

kubectl get svc -n ingress-nginx
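
Alternatively, watch the service until the field is populated:

kubectl get svc -n ingress-nginx --watch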

Step 4: Create a self-signed certificate for SSL termination and store it as a secret

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"


kubectl create secret tls tls-secret --key tls.key --cert tls.crt

Step 5: Create the ingress rule for the app

kubectl create -f https://raw.githubusercontent.com/javiermugueta/rawcontent/master/hello-world-ingress.yaml
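
For reference, an ingress rule that terminates TLS with the secret from Step 4 looks roughly like this (a sketch; names and backend service are illustrative):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - secretName: tls-secret
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: hello-world
          servicePort: 80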

Step 6: A typical hello-world deployment with 3 replicas

kubectl create -f https://raw.githubusercontent.com/javiermugueta/rawcontent/master/ingress.yaml

Step 7: Test the URL and see what happens…
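
For example, with curl, using -k because the certificate from Step 4 is self-signed (the path depends on the ingress rules):

curl -k https://[yourexternalip]/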

That’s all folks! Hope it helps 🙂

WebLogic Kubernetes Operator: Deploying a Java App in a WebLogic Domain on Oracle Kubernetes Engine (OKE) in 30 Minutes


WebLogic Kubernetes Operator provides a way of running WLS domains in a k8s cluster.

In this post we walk through the steps of the tutorial you can find in the documentation here. So let’s get started!

What you need:

  • a k8s cluster
  • kubectl
  • maven
  • git
  • docker
  • 60 minutes
First, clone the operator repository:

git clone https://github.com/oracle/weblogic-kubernetes-operator

Log in to Docker Hub and pull the operator and Traefik images:

docker login

docker pull oracle/weblogic-kubernetes-operator:2.2.0

docker pull traefik:1.7.6

For the next step, if you don’t have a user, go to https://container-registry.oracle.com, register yourself and accept the license terms for the WebLogic Server image:

docker login container-registry.oracle.com 

docker pull container-registry.oracle.com/middleware/weblogic:12.2.1.3

K8s uses role-based access control (RBAC); grant the default service account in kube-system the cluster-admin role so that Helm can install charts:

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: helm-user-cluster-admin-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF

Traefik is an ingress controller (edge router) that will route external traffic into the cluster; install it with the sample values from the repo:

helm install stable/traefik \
   --name traefik-operator \
   --namespace traefik \
   --values kubernetes/samples/charts/traefik/values.yaml  \
   --set "kubernetes.namespaces={traefik}" \
   --wait
Alternatively, create your own values.yaml and install with it instead of using the sample values above:

cat <<EOF > values.yaml
serviceType: NodePort
service:
  nodePorts:
    http: "30305"
    https: "30443"
dashboard:
  enabled: true
  domain: traefik.example.com
rbac:
  enabled: true
ssl:
  enabled: true
  #enforced: true
  #upstream: true
  #insecureSkipVerify: false
  tlsMinVersion: VersionTLS12
EOF
helm install stable/traefik \
   --name traefik-operator \
   --namespace traefik \
   --values values.yaml \
   --set "kubernetes.namespaces={traefik}" \
   --wait

Create a namespace and service account for the operator:

kubectl create namespace sample-weblogic-operator-ns

kubectl create serviceaccount -n sample-weblogic-operator-ns sample-weblogic-operator-sa

Install the operator chart from the cloned repo:

cd weblogic-kubernetes-operator/

helm install kubernetes/charts/weblogic-operator \
   --name sample-weblogic-operator \
   --namespace sample-weblogic-operator-ns \
   --set image=oracle/weblogic-kubernetes-operator:2.2.0 \
   --set serviceAccount=sample-weblogic-operator-sa \
   --set "domainNamespaces={}" \
   --wait
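
Check that the operator pod is up:

kubectl get pods -n sample-weblogic-operator-ns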
Create a namespace for the domain and tell the operator to manage it:

kubectl create namespace sample-domain1-ns

helm upgrade \
   --reuse-values \
   --set "domainNamespaces={sample-domain1-ns}" \
   --wait \
   sample-weblogic-operator \
   kubernetes/charts/weblogic-operator
 
Also tell Traefik to route to the new namespace:

helm upgrade \
   --reuse-values \
   --set "kubernetes.namespaces={traefik,sample-domain1-ns}" \
   --wait \
   traefik-operator \
   stable/traefik

Create a Kubernetes secret with the WebLogic domain credentials:

kubernetes/samples/scripts/create-weblogic-domain-credentials/create-weblogic-credentials.sh \
   -u weblogic -p welcome1 -n sample-domain1-ns -d sample-domain1
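
You can verify the secret was created (the name follows the <domainUID>-weblogic-credentials convention):

kubectl get secret sample-domain1-weblogic-credentials -n sample-domain1-ns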

Tag the WebLogic docker image and push it to a registry your cluster can pull from:

docker images

docker tag container-registry.oracle.com/middleware/weblogic:12.2.1.3 javiermugueta/weblogic:12.2.1.3

docker push javiermugueta/weblogic:12.2.1.3

NOTE: Remember to make this image private in the registry!!! As a recommended option, follow the steps here to push to the private registry offered by Oracle (OCIR) instead.

Now let’s make a copy of the yaml properties file and set the appropriate values:

cp kubernetes/samples/scripts/create-weblogic-domain/domain-home-in-image/create-domain-inputs.yaml .

mv create-domain-inputs.yaml mycreate-domain-inputs.yaml

vi mycreate-domain-inputs.yaml

(change the values on lines #16, #57, #65, #70, #104 and #107 as appropriate). Here is the one I used, in case it helps.
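
For reference, the kind of values I changed looks roughly like this (a sketch; the property names come from the sample inputs file and your values will differ):

domainUID: sample-domain1
namespace: sample-domain1-ns
weblogicCredentialsSecretName: sample-domain1-weblogic-credentials
exposeAdminNodePort: true
domainHomeImageBase: container-registry.oracle.com/middleware/weblogic:12.2.1.3
image: javiermugueta/weblogic:12.2.1.3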

And now let’s create the domain with the image:

cd kubernetes/samples/scripts/create-weblogic-domain/domain-home-in-image

./create-domain.sh -i ~/Downloads/weblogic-kubernetes-operator/mycreate-domain-inputs.yaml -o ~/Downloads/weblogic-kubernetes-operator/output -u weblogic -p welcome1 -e

Verify that everything is working:

kubectl get po -n sample-domain1-ns

kubectl get svc -n sample-domain1-ns

Change the type of the cluster and adminserver services to LoadBalancer:

kubectl edit svc/sample-domain1-cluster-cluster-1 -n sample-domain1-ns

kubectl edit svc/sample-domain1-admin-server-external -n sample-domain1-ns
(kubectl edit opens the service manifest in vi; change the type field to LoadBalancer, then save and quit with :wq)
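
If you prefer a non-interactive route, the same change can be applied with kubectl patch (equivalent to the vi edit above):

kubectl patch svc sample-domain1-cluster-cluster-1 -n sample-domain1-ns -p '{"spec":{"type":"LoadBalancer"}}'

kubectl patch svc sample-domain1-admin-server-external -n sample-domain1-ns -p '{"spec":{"type":"LoadBalancer"}}'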

Verify and write down the public IPs of the AdminServer external service and the cluster:

kubectl get svc -n sample-domain1-ns

Create a simple java app and package it:

mvn archetype:generate -DgroupId=javiermugueta.blog -DartifactId=java-web-project -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false

mvn package

Open a browser, log in to the WLS AdminServer console and deploy your app (use the public IP of the AdminServer service):

Open a new browser tab and test the app (use the public IP of the WLS cluster service):

That’s all folks, hope it helps!! 🙂

Shared Disk Seen by Pods Deployed in two Independent OKE Clusters across two Cloud Regions | Remote Network Peering


In this episode we create 2 independent OKE clusters, one in Frankfurt and the other in Phoenix, and then a File System in Frankfurt (a kind of NFS server) that acts as the repository for a shared persistent volume reachable by all the pods of a deployment deployed to both clusters.

Remote Peering

Oracle Cloud Infrastructure networking provides “Remote Peering”, which allows connecting Virtual Cloud Networks (VCNs) in two different cloud regions.

Peering the 2 VCNs

Let’s create one VCN in Frankfurt and another in a second region, Phoenix in my case.

IMPORTANT: The VCN CIDRs must not overlap!

Now create a DRG in Frankfurt, then create a Remote Peering Connection (RPC):

Do the same in Phoenix and grab the OCID of the new RPC; we’ll need it in the next step:

Come back to the RPC in Frankfurt, click [Establish Connection], select the region and paste the OCID of the remote RPC:

After a while you should see the status PEERED on both RPCs:

Now, attach the VCN to the DRG on both sides:

So far, so good! The 2 VCNs are peered; now let’s manage how the networks reach each other. How? By routing them! We are going to create the routes with the OCI CLI (because at the time of writing I wasn’t able to create them with the GUI). Before doing so, grab the following info from both regions:

  • compartment-OCID
  • vcn-OCID
  • drg-OCID

Now let’s create a route from fra to phx:

oci network route-table create --compartment-id [yourcompartmentocid] --vcn-id [yourvcnocid] --route-rules '[{"cidrBlock":"192.168.0.0/16","networkEntityId":"[yourdrgocid]"}]'

And now from phx to fra:

oci network route-table create --compartment-id [yourcompartmentocid] --vcn-id [yourvcnocid] --route-rules '[{"cidrBlock":"10.0.0.0/16","networkEntityId":"[yourdrgocid]"}]' --region us-phoenix-1

Please note the CIDR block parameter (192.168.0.0/16 for the route towards Phoenix, 10.0.0.0/16 towards Frankfurt), which in this case is the CIDR of the whole remote VCN, because we want to route to all the subnets on each side. The routes created look like this:

Now we must modify the routes created in each region and add a rule so that the nodes in the private subnets can reach the internet via a NAT gateway; otherwise k8s can’t reach the docker container repo I’m using (create one NAT gateway on each side in case you haven’t already done it):

Now assign the route tables created in each region to the private subnets of each VCN:

Now create a k8s cluster in each region (use the custom option, because you must select the VCNs you created previously).

Now follow this post to create the shared file system in Frankfurt.

One more thing: configure the security lists to allow NFS traffic. The NFS ports are:

UDP: 111,1039,1047,1048,2049
TCP: 111,1039,1047,1048,2049
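
If you prefer the CLI, the security list can be updated with a JSON array of rules; note that --ingress-security-rules replaces the whole list, so in practice include every rule you need, not just the single TCP rule in this sketch:

oci network security-list update --security-list-id [yourseclistocid] --ingress-security-rules '[{"protocol":"6","source":"10.0.0.0/16","tcpOptions":{"destinationPortRange":{"min":2049,"max":2049}}}]'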

Soo faaar sooo goood! We have the VCN networks peered, DRGs created, VCNs attached to the DRGs, routes created, NFS traffic allowed, storage ready and k8s clusters created!

Finally, deploy this to both k8s clusters. NOTE: modify the yaml with the specific IP and export values of your own File System:

kubectl apply -f k8spod2nfs.yaml
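
For reference, the fields to change are the NFS server IP and the export path in the PersistentVolume definition; a minimal sketch with illustrative values:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8spod2nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.0.2.5
    path: /k8spod2nfs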

Now open a shell into one of the pods in the Phoenix cluster and verify you can see the shared content. Then modify the index.html file and dump the content of the file. Finally, get the public IP of the service created.

MacBook-Pro:k8spod2nfs javiermugueta$ kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
k8spod2nfs-6c6665479f-b6q8j   1/1     Running   0          1m
k8spod2nfs-6c6665479f-md5s5   1/1     Running   0          1m
MacBook-Pro:k8spod2nfs javiermugueta$ kubectl exec -it k8spod2nfs-6c6665479f-b6q8j bash
root@k8spod2nfs-6c6665479f-b6q8j:/# cd /usr/share/nginx/html/
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# ls
file.txt  index.html  index.html.1
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# echo hi >> index.html
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# cat index.html
hola
adios
hi
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# exit
MacBook-Pro:k8spod2nfs javiermugueta$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
k8spod2nfs   LoadBalancer   10.96.133.154   129.146.208.7   80:31728/TCP   8m
kubernetes   ClusterIP      10.96.0.1       <none>          443/TCP        48m

Open a browser and put the public IP of the service:

Now get the public IP of the service created in Frankfurt, open a browser and see what happens:

It shows same content, awesome!!

And last, just in case you don’t trust it, change the content of index.html again from a pod on the Frankfurt side:

MacBook-Pro:k8spod2nfs javiermugueta$ kubectl exec -it k8spod2nfs-6f48c6464f-2447d bash
root@k8spod2nfs-6f48c6464f-2447d:/# cd /usr/share/nginx/html/
root@k8spod2nfs-6f48c6464f-2447d:/usr/share/nginx/html# echo bye >> index.html

It seems it’s working fine, haha!!

That’s all folks! 🙂

Load Balancing, High Availability and Fail-Over of a Micro-Service Deployed in two Separated Kubernetes Clusters: one running in Oracle Kubernetes Engine (OKE) and the other in Google Kubernetes Engine (GKE)


Oracle Cloud Edge Services

Oracle Cloud Infrastructure provides Edge Services, a group of services related to DNS, Health Checks, Traffic Management and WAF (Web Application Firewall).

In this episode we are utilising DNS Zone Management, Traffic Management Steering Policies and Health Checks for load balancing and fail-over of a micro-service running in two different Kubernetes clusters, in two different regions and on distinct cloud providers, giving a robust solution that achieves a powerful load-balanced, active-active and disaster-recovery topology.

Deploying the micro-service

Deploy the following to two different k8s clusters, such as OKE in two distinct regions or OKE and GKE. As OKE and GKE are pretty much identical, we can use kubectl and the Kubernetes Dashboard in both of them as we prefer:

k8s deployment in OKE visualised with kubectl
k8s deployment in OKE visualised with Kubernetes dashboard

It is a very simple service that greets you and says where it is running.

Greetings from OKE
Greetings from GKE

Configuring DNS

For this part of the setup we need an FQDN registered; we are using bigdatasport.org, a name I registered.

Let’s create domain entries in OCI. Create a DNS zone in OCI as follows:

Now let’s grab the DNS servers, go to our registrar, and change the name-server configuration so that it points to the Oracle DNS servers:

Verify the change:

Configuring Health Checks

Let’s create a Health Check that we’ll use later in the traffic management policy. Health checks are performed from vantage points external to OCI, hosted in Azure, Google or AWS; select your preferred choice.

Configuring Traffic Management Steering Policies

Let’s create a traffic management policy as follows:

Testing it all

Ok, we have all the tasks already done, let’s test it!

Delete the deployment in OKE:

Go to the Traffic policy and verify that the OKE endpoint is unhealthy:

Go to your browser and request http://bigdatasport.org/greet, as you can see the service is retrieved from GKE:

Redeploy in OKE again:

As you can see, the OKE service is running well again:

Now let’s delete the deployment in GKE:

Now the greeting is retrieved again from OKE:

And that’s all folks, hope it helps! 🙂

Oracle Kubernetes (OKE): Deploying a Custom Node.js Web Application Integrated with Identity Cloud Service for Unique Single Sign On (SSO) User Experience


In this post we are deploying a custom Node.js web application in Oracle Kubernetes Engine (OKE).

What we want to show is how to configure the custom web application in order to have a unique Single Sign-On experience.

First part

Follow the tutorial here, which explains how to enable SSO for the web app running locally.

Second part

Now we are making small changes to deploy it on Kubernetes.

Create a Dockerfile in the nodejs folder of the cloned project with the following content:
FROM oraclelinux:7-slim
WORKDIR /app
ADD . /app
RUN curl --silent --location https://rpm.nodesource.com/setup_11.x | bash -
RUN yum -y install nodejs npm --skip-broken
EXPOSE 3000
CMD ["npm","start"]
Create a k8s service file (service.yaml) as follows:
apiVersion: v1
kind: Service
metadata:
  name: idcsnodeapp
spec:
  type: LoadBalancer
  selector:
    app: idcsnodeapp
  ports:
  - name: client
    protocol: TCP
    port: 3000
Deploy to k8s:
kubectl apply -f service.yaml
Grab the URL of the new external load balancer service created in k8s and modify the file auth.js with the appropriate values from your cloud environment:
var ids = {
  oracle: {
    "ClientId": "client id of the IdCS app",
    "ClientSecret": "client secret of the IdCS app",
    "ClientTenant": "tenant id (idcs-xxxxxxxxxxxx)",
    "IDCSHost": "https://tenantid.identity.oraclecloud.com",
    "AudienceServiceUrl": "https://tenantid.identity.oraclecloud.com",
    "TokenIssuer": "https://identity.oraclecloud.com/",
    "scope": "urn:opc:idm:t.user.me openid",
    "logoutSufix": "/oauth2/v1/userlogout",
    "redirectURL": "http://k8sloadbalancerip:3000/callback",
    "LogLevel": "warn",
    "ConsoleLog": "True"
  }
};
Build the container and push to a repo you have write access to, such as:
docker build -t javiermugueta/idcsnodeapp .
docker push javiermugueta/idcsnodeapp
Modify the IdCS application with the public IP of the k8s load-balancer service
Create a k8s deployment file (deployment.yaml) as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: idcsnodeapp
  labels:
    app: idcsnodeapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: idcsnodeapp
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: idcsnodeapp
    spec:
      containers:
      - image: javiermugueta/idcsnodeapp
        name: idcsnodeapp
        ports:
        - containerPort: 3000
          name: idcsnodeapp


Deploy to k8s
kubectl apply -f deployment.yaml
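Check that the pod is running and note the EXTERNAL-IP of the service:

kubectl get pods

kubectl get svc idcsnodeapp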
Test the app and verify SSO is working:


Hope it helps! 🙂

 

How to ssh to OKE (k8s) Private Node (worker compute node) via Jump Box (Bastion Server)


In OKE you typically create, for redundancy and high availability reasons, a k8s cluster across 5 or more subnets:

  • 2 are public; this is where the public load balancers are deployed, for example one in AD1 and the other in AD3
  • 3 or more are private; this is where the worker compute nodes are deployed, for example one subnet in AD1, another in AD2, another in AD3, and looping…

If you need to reach one or more worker compute nodes for some reason, you can create a bastion server (jump box) with a public IP, open a tunnel that forwards local port 2222 to port 22 on the worker node, and then ssh through it:

ssh -i privateKey -N -L localhost:2222:k8scomputenode:22 opc@jumpboxpublicip

ssh -i privateKey -p 2222 opc@localhost
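
With a recent OpenSSH client you can do the same in one command with a jump host, assuming the same key is authorized on both machines:

ssh -i privateKey -J opc@jumpboxpublicip opc@k8scomputenode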

Hope it helps! 🙂

 

 

Connecting to OCI DB System with SQLDeveloper via Bastion Box


A recipe for creating a secure connection between SQLDeveloper on our local machine and an Oracle Cloud Infrastructure DB System created in a private subnet of a Virtual Cloud Network that is not open to the internet.

Steps

  • Create a new DB System and grab the private IP of the database system node


  • Create a compute VM with public IP exposed
  • Open an ssh tunnel this way:
ssh -i privatekeyfile -N -L localhost:1521:dbnodeprivateip:1521 opc@jumpboxpublicip
  • Grab the database connection details


  • Create a connection in sqlDeveloper


  • Test the connection (see the sanity-check sketch below)

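As an optional sanity check before configuring SQLDeveloper, you can test the tunnel with sqlplus if you have it installed (user and service name are illustrative):

sqlplus system@//localhost:1521/[yourdbservicename]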

Hope it helps! 🙂