
Set Up an Ingress Controller in OKE Cluster


Step 1: Set up the NGINX ingress controller

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml

Step 2: Expose the ingress controller as a service of type LoadBalancer (with a public IP)

kubectl apply -f https://raw.githubusercontent.com/javiermugueta/rawcontent/master/cloud-generic.yaml

Step 3: Execute this command several times until EXTERNAL-IP is no longer <pending>, then grab the IP for later testing

kubectl get svc -n ingress-nginx
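Alternatively, leave kubectl watching the service until the address is assigned:

kubectl get svc -n ingress-nginx --watch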

Step 4: Create a self-signed certificate and a TLS secret for SSL termination

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"


kubectl create secret tls tls-secret --key tls.key --cert tls.crt

Step 5: Deploy the following:

kubectl create -f https://raw.githubusercontent.com/javiermugueta/rawcontent/master/hello-world-ingress.yaml

Step 6: A typical hello-world deployment with 3 replicas

kubectl create -f https://raw.githubusercontent.com/javiermugueta/rawcontent/master/ingress.yaml
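The two files referenced above are not reproduced here. As a rough idea only (not the exact content of those files), an Ingress that terminates TLS with the tls-secret from step 4 and routes traffic to a hello-world service could look like this on the Kubernetes versions current when this was written (the service name, port and path are assumptions):

kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - secretName: tls-secret
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: hello-world
          servicePort: 80
EOF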

Step 7: Test the URL and see what happens…
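For example, with the self-signed certificate from step 4 you can hit the load balancer from the command line (replace the address with the EXTERNAL-IP from step 3; -k skips certificate validation):

curl -k https://<EXTERNAL-IP>/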

That’s all folks! Hope it helps 🙂

Deploying a Coherence Cluster in Kubernetes


The Coherence Operator is a Kubernetes operator for deploying Oracle Coherence in k8s. Let's see how to do it.

1 Clean up any previous setup attempts:

helm del --purge sample-coherence
helm del --purge sample-coherence-operator
kubectl delete namespace sample-coherence-ns

2 Execute the following:
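Note that the cleanup in step 1 deletes the namespace, so create it again first (assuming it doesn't already exist):

kubectl create namespace sample-coherence-ns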

kubectl config set-context $(kubectl config current-context) --namespace=sample-coherence-ns

helm repo add coherence https://oracle.github.io/coherence-operator/charts

helm repo update

helm --debug install coherence/coherence-operator --name sample-coherence-operator --set "targetNamespaces={}" --set imagePullSecrets=sample-coherence-secret

helm ls

helm status sample-coherence-operator

3 Create a secret with your credentials to Oracle Container Registry:

kubectl create secret docker-registry oracle-container-registry-secret --docker-server=container-registry.oracle.com --docker-username='youruser' --docker-password='yourpasswd' --docker-email='youremail'

4 Install:

helm --debug install coherence/coherence --name sample-coherence --set imagePullSecrets=oracle-container-registry-secret

5 Proxy to a pod:

export POD_NAME=$(kubectl get pods --namespace sample-coherence-ns -l "app=coherence,release=sample-coherence,component=coherencePod" -o jsonpath="{.items[0].metadata.name}")

kubectl --namespace sample-coherence-ns port-forward $POD_NAME 20000:20000

6 Download and install Coherence Stand Alone

7 Download this and this
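Those two downloads are presumably the HelloCoherence.java client and the example-client-config.xml cache configuration used below. In case the links are unavailable, a minimal client along these lines would do a similar job (a sketch with an arbitrary cache name, not the original file):

cat > HelloCoherence.java <<'EOF'
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class HelloCoherence {
    public static void main(String[] args) {
        // Reaches the cluster through the Extend proxy forwarded to localhost:20000 in step 5,
        // assuming example-client-config.xml points the remote cache scheme at that address
        NamedCache cache = CacheFactory.getCache("hello-example");
        cache.put("key", "7");
        System.out.println("The value of the key is " + cache.get("key"));
        CacheFactory.shutdown();
    }
}
EOF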

8 Build the client:

export COHERENCE_HOME=~/Oracle/Middleware/Oracle_Home/coherence
javac -cp .:${COHERENCE_HOME}/lib/coherence.jar HelloCoherence.java

9 Test it:

java -cp .:${COHERENCE_HOME}/lib/coherence.jar -Dcoherence.cacheconfig=$PWD/example-client-config.xml HelloCoherence

2019-07-11 01:21:33.575/0.538 Oracle Coh...

…

Oracle Coherence Version 12.2.1.3.0 Build 68243

2019-07-11 01:21:34.430/1.392 Oracle 

The value of the key is 7

That’s all, hope it helps 🙂

Shared Disk Seen by Pods Deployed in two Independent OKE Clusters across two Cloud Regions | Remote Network Peering


In this episode we create 2 independent OKE clusters, one in Frankfurt and the other in Phoenix, and then a File System in Frankfurt (a kind of NFS server) that acts as the backing store for a shared persistent volume reachable by all the pods of a deployment deployed to both clusters.

Remote Peering

Oracle Cloud Infrastructure networking provides "Remote Peering", which allows connecting networks (Virtual Cloud Networks, VCNs) in two different cloud regions.

Peering the 2 VCNs

Let's create one VCN in Frankfurt and another in a different region, Phoenix in my case.

IMPORTANT: The VCN CIDRs must not overlap!

Now create a DRG in Frankfurt, then create a Remote Peering Connection (RPC):

Do the same in Phoenix and grab the OCID of the new RPC created; we'll need it in the next step:

Come back to the RPC in Frankfurt, click [Establish Connection], select the region and paste the OCID of the remote RPC:

After a while you should see the status PEERED in both RPCs:

Now, attach the VCN to the DRG on both sides:

So far, so good! The 2 VCNs are peered; now let's make the networks reachable from each other. How? By routing them! We are going to create the routes with the OCI CLI (because at the moment of this writing I wasn't able to create them with the GUI). To do that, first gather the following info from both regions:

  • compartment-OCID
  • vcn-OCID
  • drg-OCID

Now let’s create a route from fra to phx:

oci network route-table create --compartment-id [yourcompartmentocid] --vcn-id [yourvcnocid] --route-rules '[{"cidrBlock":"192.168.0.0/16","networkEntityId":"[yourdrgocid]"}]'

And now from phx to fra:

oci network route-table create --compartment-id [yourcompartmentocid] --vcn-id [yourvcnocid] --route-rules '[{"cidrBlock":"10.0.0.0/16","networkEntityId":"[yourdrgocid]"}]' --region us-phoenix-1

Please note the CIDR block parameter, which in this case is the CIDR of the whole remote VCN, because we want to route to all the subnets on each side. The routes created look like this:

Now we must modify the routes created in each region and add a rule so that the nodes in the private subnet can reach the internet through a NAT gateway; otherwise k8s can't reach the docker container repo I'm using (create one NAT gateway on each side if you haven't already done it):
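With the CLI, that means updating the route table with the complete rule list, something along these lines for the Frankfurt side (the route-table and NAT gateway OCIDs are placeholders; note that --route-rules replaces the existing rules, so the peering rule has to be repeated):

oci network route-table update --rt-id [yourroutetableocid] --route-rules '[{"cidrBlock":"192.168.0.0/16","networkEntityId":"[yourdrgocid]"},{"cidrBlock":"0.0.0.0/0","networkEntityId":"[yournatgatewayocid]"}]'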

Now assign the route table created in each region to the private subnets of the corresponding VCN:

Now create a k8s cluster in each region (use the custom option because you must select the VCNs you created previously).

Now follow this post to create the shared file system in Frankfurt.

One more thing: configure the security lists to allow NFS traffic (an example CLI rule follows the port list). The NFS ports are:

UDP: 111,1039,1047,1048,2049
TCP: 111,1039,1047,1048,2049
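As an illustration, rules can also be added with the CLI; this hypothetical example opens TCP 2049 from the Phoenix VCN on the Frankfurt side (--ingress-security-rules replaces the whole rule list, so include the existing rules and the remaining NFS ports as well):

oci network security-list update --security-list-id [yoursecuritylistocid] --ingress-security-rules '[{"protocol":"6","source":"192.168.0.0/16","tcpOptions":{"destinationPortRange":{"min":2049,"max":2049}}}]'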

Soo faaar sooo goood! We have the VCNs peered, DRGs created, VCNs attached to the DRGs, routes created, NFS traffic allowed, storage ready and k8s clusters created!

Finally, deploy this to both k8s clusters. NOTE: modify the yaml with the IP and export path of your own File System (see the excerpt below):

kubectl apply -f k8spod2nfs.yaml
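The fields to adapt are in the PersistentVolume section of the manifest, roughly like this (the server is the mount target IP of your File System, the path is your export name):

spec:
  ...
  nfs:
    server: <your-mount-target-ip>
    path: "/myexport"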

Now ssh to one of the pods in the Phoenix cluster and verify you can see the shared content. Then modify the index.html file and dump the content of the file. Finally get the public IP of the service created.

MacBook-Pro:k8spod2nfs javiermugueta$ kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
k8spod2nfs-6c6665479f-b6q8j   1/1     Running   0          1m
k8spod2nfs-6c6665479f-md5s5   1/1     Running   0          1m
MacBook-Pro:k8spod2nfs javiermugueta$ kubectl exec -it k8spod2nfs-6c6665479f-b6q8j bash
root@k8spod2nfs-6c6665479f-b6q8j:/# cd /usr/share/nginx/html/
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# ls
file.txt  index.html  index.html.1
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# echo hi >> index.html
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# cat index.html
hola
adios
hi
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# exit
MacBook-Pro:k8spod2nfs javiermugueta$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
k8spod2nfs   LoadBalancer   10.96.133.154   129.146.208.7   80:31728/TCP   8m
kubernetes   ClusterIP      10.96.0.1       <none>          443/TCP        48m

Open a browser and put the public IP of the service:

Now get the public IP of the service created in Frankfurt, open a browser and see what happens:

It shows the same content, awesome!!

And last, just in case you still don't trust it, change the content of index.html again from a pod on the Frankfurt side:

MacBook-Pro:k8spod2nfs javiermugueta$ kubectl exec -it k8spod2nfs-6f48c6464f-2447d bash
root@k8spod2nfs-6f48c6464f-2447d:/# cd /usr/share/nginx/html/
root@k8spod2nfs-6f48c6464f-2447d:/usr/share/nginx/html# echo bye >> index.html

It seems it is working fine, haha!!

That’s all folks! 🙂

Shared Disk for your Pods: PersistentVolumes for Oracle Kubernetes Engine (OKE) Implemented as NFS File Storage in Oracle Cloud Infrastructure (OCI)


When you deploy a pod in k8s that depends on a persistent volume backed by block storage (for example this post), the volume created is mounted on a specific node. If that node fails or is stopped, the pods running on it cannot be recreated on another node according to their replication policies, because the other nodes do not have the disk mounted.

Oracle Cloud Infrastructure (OCI) File Systems are shared storage that you can easily expose/attach to your pods for those use cases where shared persistent data is needed.

Of course, you could still mount the disk on each node, but that is not a good approach when there is a better way to achieve it.

So, let’s get started

Go to OCI dashboard, create a new File System and a new Export called /myexport. Click on the mount target link and take note of the File System IP address.

 


Download and deploy the following yaml (a sketch of its shape follows the list); it creates:

  • a persistentVolume
  • a persistentVolumeClaim
  • a deployment with 3 replicas of a container image with nginx in it
  • a service with public IP and LoadBalancer with round-robin policy
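For orientation, the claim and the way each replica mounts it have roughly this shape (a sketch with assumed names, not the literal content of the file):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: k8spod2nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 50Gi
---
# deployment excerpt: every nginx replica mounts the claim at the web root it serves
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: shared-html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: shared-html
        persistentVolumeClaim:
          claimName: k8spod2nfs-pvc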

kubectl apply -f k8spod2nfs.yaml

Get a list of the pods and “ssh” to one of them

kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
k8spod2nfs-xxx   1/1     Running   0          99s
k8spod2nfs-yyy   1/1     Running   0          99s
k8spod2nfs-zzz   1/1     Running   0          99s
kubectl exec -it k8spod2nfs-xxx bash

Go to /usr/share/nginx/html directory and create or edit a file called index.html

cd /usr/share/nginx/html/

echo hola > index.html

You can also "ssh" into another pod and verify that you see the same file

Now get the list of services and grab the public IP of the k8spod2nfs service

kubectl get services
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
k8spod2nfs   LoadBalancer   10.96.128.9   x.y.z.t       80:31014/TCP   10m
kubernetes   ClusterIP      10.96.0.1     <none>        443/TCP        7d4h

Go to http://yourserviceip in your browser:


Change the content of the index.html file:

echo adios >> index.html

Go to http://yourserviceip in your browser again, the data changes


Delete all the pods of the k8spod2nfs deployment and wait until at least one of them is recreated:

kubectl delete pod -l app=k8spod2nfs

Go to http://yourserviceip in your browser again, the data is still in there!

Delete the deployment and create it again:

kubectl delete -f k8spod2nfs.yaml

kubectl apply -f k8spod2nfs.yaml

Wait until at least one of the pods is ready and go to http://yourserviceip in your browser again, the data is still in there!

As you can see, the data is shared across all pods and is persistent! (unless you delete it or destroy the OCI File System)

Hope it helps! 🙂

 

 

 

BucketNotEmpty – Bucket named ‘xxxx’ is not empty. Delete all objects first



Oracle OCI Object Storage buckets can't be deleted from the OCI dashboard unless they are empty… and no "empty bucket" menu option exists at all (at least at the time of this post).

Anyway, you can do it using the CLI… Follow these steps:

If you don't have the OCI CLI installed, follow this post

oci os object bulk-delete -ns <identitydomainname> -bn <bucketname>
oci os bucket delete --bucket-name <bucketname>
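If the bulk delete pauses asking for confirmation, the --force flag should skip the prompt (assuming a reasonably recent CLI version):

oci os object bulk-delete -ns <identitydomainname> -bn <bucketname> --force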

Hope it helps! 😉

 

Oracle Cloud Infrastructure Plugin for Grafana



Install Grafana locally as shown here

Install OCI CLI, follow this post as an example

Install OCI plugin

grafana-cli plugins install oci-datasource
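You can optionally check that the plugin is registered before restarting:

grafana-cli plugins ls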

brew services restart grafana

Create a dashboard

Select Region, Compartment, etc…


That's all!

Enjoy 😉