
“Kool” Kubernetes Client Tools


When working with k8s you typically have several clusters and a bunch of namespaces per cluster. The following tools can help you manage all of that with ease.

kubectx + kubens

kubectx allows you to change the context between different k8s clusters

kubens allows you to change between different namespaces in the current cluster context

brew install kubectx

More info here
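
For example (a quick sketch; the context and namespace names are placeholders for whatever exists in your kubeconfig):

# switch kubectl to another cluster context
kubectx my-dev-cluster

# switch back to the previous context
kubectx -

# change the namespace used by the current context
kubens payments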

Kube-ps1

Allows you to see the current context and namespace in the prompt.

brew install kube-ps1

More info here
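
To enable it, source the script and add the prompt function to your shell configuration, something like this (a sketch for bash/zsh; the path assumes a Homebrew install, check the output of brew info kube-ps1 for the exact location on your machine):

source "$(brew --prefix)/opt/kube-ps1/share/kube-ps1.sh"
PS1='$(kube_ps1)'$PS1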

Examples

kubectx, kubens and kube-ps1 at a glance

kubeaudit

Great tool for auditing the security settings on your k8s clusters

brew install kubeaudit
kubeaudit allowpe

krew

Krew is a package manager for kubectl plugins (kubectl has an extensibility framework, and this tool helps manage the extensions).
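
For example, assuming krew itself is already installed (see its docs), you can search for and install plugins like this (ctx and ns are the kubectx/kubens tools mentioned above, packaged as plugins):

kubectl krew search
kubectl krew install ctx
kubectl krew install ns
kubectl krew list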

Hope it helps! 🙂

Set Up an Ingress Controller in OKE Cluster


Step 1: Set up the NGINX ingress controller

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml

Step 2: Expose the ingress controller as a service of type LoadBalancer (with a public IP)

kubectl apply -f https://raw.githubusercontent.com/javiermugueta/rawcontent/master/cloud-generic.yaml

Step 3: Execute this command several times until the EXTERNAL-IP no longer shows <pending>; grab the IP for later testing

kubectl get svc -n ingress-nginx

Step 4: Create a certificate for SSL termination and store it in a secret

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"


kubectl create secret tls tls-secret --key tls.key --cert tls.crt
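
For reference, a minimal sketch of an Ingress manifest that terminates SSL with this secret (this is not the exact content of the yaml applied below; the backend service name hello-world and its port are placeholders, and older clusters use the extensions/v1beta1 API instead of networking.k8s.io/v1):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - secretName: tls-secret
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world
            port:
              number: 80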

Step 5: Apply the hello world ingress manifest

kubectl create -f https://raw.githubusercontent.com/javiermugueta/rawcontent/master/hello-world-ingress.yaml

Step 6: A typical hello world deployment with a pod in it and 3 replicas

kubectl create -f https://raw.githubusercontent.com/javiermugueta/rawcontent/master/ingress.yaml

Step 7: Test the URL and see what happens…

That’s all folks! Hope it helps 🙂

Shared Disk Seen by Pods Deployed in two Independent OKE Clusters across two Cloud Regions | Remote Network Peering


In this episode we create two independent OKE clusters, one in Frankfurt and the other in Phoenix, and then create a File System in Frankfurt (a kind of NFS server) that acts as the repository for a shared persistent volume reachable by all the pods of a deployment running in both clusters.

Remote Peering

Oracle Cloud Infrastructure networking provides “Remote Peering”, which allows connecting networks (Virtual Cloud Networks, VCNs) located in two different cloud regions.

Peering the 2 VCN’s

Let’s create one VCN in Frankfurt and another one in a different region, Phoenix in my case.

IMPORTANT: The VCN CIDRs must not overlap!

Now create a DRG in Frankfurt, then create a Remote Peering Connection (RPC):

Do the same in Phoenix and grab the OCID of the newly created RPC; we’ll need it in the next step:

Come back to the RPC in Frankfurt, click [Establish Connection], select the region and paste the OCID of the remote RPC:

After a while you should see the status PEERED in both RPCs:

Now, attach the VCN to the DRG in both sides:

So far, so good! The two VCNs are peered; now we have to make the networks reachable from each other. How? By routing them! We are going to create the routes with the OCI CLI (because at the time of writing I wasn’t able to create them with the GUI). Before doing so, grab the following info from both regions:

  • compartment-OCID
  • vcn-OCID
  • drg-OCID

Now let’s create a route from fra to phx:

oci network route-table create --compartment-id [yourcompartmentocid] --vcn-id [yourvcnocid] --route-rules '[{"cidrBlock":"192.168.0.0/16","networkEntityId":"[yourdrgocid]"}]'

And now from phx to fra:

oci network route-table create --compartment-id [yourcompartmentocid] --vcn-id [yourvcnocid] --route-rules '[{"cidrBlock":"10.0.0.0/16","networkEntityId":"[yourdrgocid]"}]' --region us-phoenix-1

Please note the CIDR block parameter, which in this case is the CIDR of the whole remote VCN, because we want to route to all the subnets on the other side. The routes created look like this:

Now we must modify the route tables created in each region and add a rule so that the nodes in the private subnets can reach the internet via a NAT gateway; otherwise k8s can’t reach the Docker container registry I’m using (create one NAT gateway on each side in case you haven’t already done it):
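
As a hedged sketch (the OCIDs are placeholders; check the oci network route-table update syntax for your CLI version), the update on the Frankfurt side would look something like this, keeping the DRG rule and adding a default rule pointing to the NAT gateway:

oci network route-table update --rt-id [yourroutetableocid] --force --route-rules '[{"cidrBlock":"192.168.0.0/16","networkEntityId":"[yourdrgocid]"},{"cidrBlock":"0.0.0.0/0","networkEntityId":"[yournatgatewayocid]"}]'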

Now assign the route tables created in each region to each of the private subnets of the corresponding VCN:

Now create a K8S cluster in each region (use the custom create option, because you must select the VCNs you created previously).

Now follow this post to create the shared file system in Frankfurt.

One more thing: configure the security lists to allow NFS traffic. The NFS ports are:

UDP: 111,1039,1047,1048,2049
TCP: 111,1039,1047,1048,2049
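
As a hedged sketch of how such a rule could be added with the OCI CLI for one of the TCP ports (the OCID and source CIDR are placeholders; repeat for the other ports, use udpOptions with protocol "17" for the UDP ones, and note that this call replaces the existing ingress rules, so include the ones you already have):

oci network security-list update --security-list-id [yoursecuritylistocid] --force --ingress-security-rules '[{"protocol":"6","source":"192.168.0.0/16","tcpOptions":{"destinationPortRange":{"min":2049,"max":2049}}}]'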

Soo faaar sooo goood! We have the VCNs peered, the DRGs created, the VCNs attached to the DRGs, the routes created, NFS traffic allowed, the storage ready and the k8s clusters created!

Finally, deploy this to both k8s clusters. NOTE: modify the yaml with the specific IP and export path of your own File System:

kubectl apply -f k8spod2nfs.yaml
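
The k8spod2nfs.yaml file itself is not reproduced here; as a hedged sketch, the NFS-specific part of such a manifest typically looks like this (the server IP and export path are placeholders for your mount target and export):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8spod2nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.0.1.10      # private IP of the FSS mount target in Frankfurt
    path: /k8spod2nfs      # export path of the File System
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: k8spod2nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi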

Now open a shell in one of the pods in the Phoenix cluster and verify that you can see the shared content. Then modify the index.html file and dump its content. Finally, get the public IP of the service created:

MacBook-Pro:k8spod2nfs javiermugueta$ kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
k8spod2nfs-6c6665479f-b6q8j   1/1     Running   0          1m
k8spod2nfs-6c6665479f-md5s5   1/1     Running   0          1m
MacBook-Pro:k8spod2nfs javiermugueta$ kubectl exec -it k8spod2nfs-6c6665479f-b6q8j bash
root@k8spod2nfs-6c6665479f-b6q8j:/# cd /usr/share/nginx/html/
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# ls
file.txt  index.html  index.html.1
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# echo hi >> index.html
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# cat index.html
hola
adios
hi
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# exit
MacBook-Pro:k8spod2nfs javiermugueta$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
k8spod2nfs   LoadBalancer   10.96.133.154   129.146.208.7   80:31728/TCP   8m
kubernetes   ClusterIP      10.96.0.1       <none>          443/TCP        48m

Open a browser and put the public IP of the service:

Now get the public IP of the service created in Frankfurt, open a browser and see what happens:

It shows the same content, awesome!!

And last, just in case you don’t trust it, change the content of index.html again from a pod on the Frankfurt side:

MacBook-Pro:k8spod2nfs javiermugueta$ kubectl exec -it k8spod2nfs-6f48c6464f-2447d bash
root@k8spod2nfs-6f48c6464f-2447d:/# cd /usr/share/nginx/html/
root@k8spod2nfs-6f48c6464f-2447d:/usr/share/nginx/html# echo bye >> index.html

It seems it’s working fine, haha!!

That’s all folks! 🙂

How to ssh to OKE (k8s) Private Node (worker compute node) via Jump Box (Bastion Server)


In OKE you typically create, for redundancy and high availability reasons, a k8s cluster spread across 5 or more subnets:

  • 2 are public; that’s where the public load balancer is deployed, for example one subnet in AD1 and the other in AD3
  • 3 or more are private; that’s where the worker compute nodes are deployed, for example one subnet in AD1, another in AD2, another in AD3 and looping…

If you need to reach one or more worker compute nodes for some reason, you can create a bastion server (jump box) with a public IP and then do the following (the first command opens a tunnel from local port 2222 to port 22 of the worker node through the jump box; the second connects through that tunnel):

ssh -i privateKey -N -L localhost:2222:k8scomputenode:22 opc@jumpboxpublicip

ssh -i privateKey -p 2222 opc@localhost
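
Since the first command keeps the tunnel open, you need two terminals. An equivalent single-command setup uses ProxyJump in ~/.ssh/config (a sketch; host names, IPs and key path are placeholders):

Host oke-worker
  HostName k8scomputenode
  User opc
  IdentityFile privateKey
  ProxyJump jumpbox

Host jumpbox
  HostName jumpboxpublicip
  User opc
  IdentityFile privateKey

Then simply: ssh oke-worker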

Hope it helps! 🙂

 

 

Creating a Fast&Simple Container for Sending Messages to a Topic in Oracle Event Hub Cloud Service (aka OEHCS, which is a Kafka cluster) and Deploying it to Kubernetes Cluster


The container uses 4 environment variables; you can find a container already built for you here.

SOURCE CODE OF THE PRODUCER

var sleep = require('system-sleep');

// Connection and message settings come from the 4 environment variables
const oehcs_connect_url = process.env.OEHCS_CONNECTURL;
const topic_name = process.env.TOPIC_NAME;
const num_partitions = parseInt(process.env.NUM_PARTITIONS, 10);
const message = process.env.MESSAGE;

var kafka = require('kafka-node'),
    HighLevelProducer = kafka.HighLevelProducer,
    client = new kafka.KafkaClient({kafkaHost: oehcs_connect_url}),
    producer = new HighLevelProducer(client);

// Send the same message to each partition in a round-robin loop,
// sleeping briefly after every full pass over the partitions
var i = 0;
while (i >= 0) {
  var payloads = [{topic: topic_name, messages: message, partition: i}];
  producer.send(payloads, function (err, data) {
    console.log(data);
  });
  i = i + 1;
  if (i > num_partitions - 1) {
    i = 0;
    sleep(1);
  }
}

THE DOCKERFILE

FROM oraclelinux:7-slim
WORKDIR /app
ADD . /app
RUN curl --silent --location https://rpm.nodesource.com/setup_10.x | bash -
RUN yum -y install nodejs npm
# install the node dependencies used by the producer
RUN npm install kafka-node system-sleep
CMD ["node","producer-direct.js"]

THE YAML FOR K8S DEPLOYMENT

apiVersion: apps/v1
kind: Deployment
metadata:
  name: oehcsnodeproducer-direct
  labels:
    app: oehcsnodeproducer-direct
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oehcsnodeproducer-direct
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: oehcsnodeproducer-direct
    spec:
      containers:
      - image: javiermugueta/oehcsnodeproducer-direct
        name: oehcsnodeproducer-direct
        env:
        - name: OEHCS_CONNECTURL
          value: "<ip1>:6667,<ip2>:6667,..."
        - name: TOPIC_NAME
          value: "R1"
        - name: NUM_PARTITIONS
          value: "10"
        - name: MESSAGE
          value: "{'put here what you want'}"

TEST IT AND SEE WHAT HAPPENS

Create the deployment and, after 10 minutes, take a look at the message production rate:

kubectl apply -f my.yaml

More or less 400/second…

Scale the deployment and look at the new production rate:

kubectl scale deployment oehcsnodeproducer-direct --replicas=2

Around 8000 messages/second!

Now add 9 partitions to the topic and check the rate again. With 2 pods running and 10 partitions we are producing around 10K messages per second! As you can see, partitioning improves performance!

Let’s double the number of pods and see the new rate:

kubectl scale deployment oehcsnodeproducer-direct --replicas=4

And now 18K/second!
That’s all folks!
Enjoy 😉

Microservices and SODA: JSON Data Stored in an Oracle Database Accessed through REST or Java


Simple Oracle Document Access

SODA (Simple Oracle Document Access) is a non-SQL style of storing/retrieving JSON data in an Oracle database.


It’s so easy to work with SODA! Let’s get started.

First, enable ORDS schema in your database.

Second, deploy ORDS in K8s (you can also deploy ORDS standalone on your laptop or in a container if you don’t have one; here you can find an example).

Now let’s try it:

Create a collection:

A collection is a table that transparently stores JSON data for you. Execute the following instruction:

curl -i -X PUT http://130.61.67.88:8080/ords/hr/soda/latest/shopcartJSON

HTTP/1.1 201 Created

Date: Sat, 22 Dec 2018 19:15:26 GMT

X-Frame-Options: SAMEORIGIN

Cache-Control: private,must-revalidate,max-age=0

Location: http://130.61.67.88:8080/ords/hr/soda/latest/shopcartJSON/

Content-Length: 0

Insert JSON in the database:

Download this to a file called po.json and execute the following (note the ID of the new doc created):

curl -X POST --data-binary @po.json -H "Content-Type: application/json" "http://130.61.67.88:8080/ords/hr/soda/latest/shopcartJSON/"

{"items":[{"id":"1C0C85A9B7084DAD88AC9155DC998578","etag":"D17DBB877A3B846EDB32789589F8CDB791634359D615376D938D167AAA3E1F46","lastModified":"2018-12-22T19:21:02.243934Z","created":"2018-12-22T19:21:02.243934Z"}],"hasMore":false,"count":1}

Just in case you haven’t noticed the magic of this, take a look at the following picture:

[screenshot: sodajson.png]

Retrieve document by ID:

The JSON data is stored in a table. Now we can retrieve the document by the ID that was previously given as follows:

curl -X GET http://130.61.67.88:8080/ords/hr/soda/latest/shopcartJSON/1C0C85A9B7084DAD88AC9155DC998578

{"PONumber": 0,

"Reference": "ABANDA-20140803",

"Requestor": "Amit Banda",

...

"UPCCode": 85391181828},

"Quantity": 9.0}]

Search for documents:

So far, so good. But wait, there is more! Let’s search by the content of a JSON attribute, in this case documents whose PO number is 0:

curl -X POST --data-binary '{"PONumber":"0"}' -H "Content-Type: application/json" http://130.61.67.88:8080/ords/hr/soda/latest/shopcartJSON?action=query

{"items":[{"id":"1C0C85A9B7084DAD88AC9155DC998578","etag":"D17DBB877A3B846EDB32789589F8CDB791634359D615376D938D167AAA3E1F46","lastModified":"2018-12-22T19:21:02.243934Z","created":"2018-12-22T19:21:02.243934Z","value":{"PONumber": 0,...]}}],"hasMore":false,"count":1}

Bulk insert:

Now, let’s insert a bunch of JSON documents. First download this content, save it as POlist.json and execute the following:

curl -X POST --data-binary @POlist.json -H "Content-Type: application/json" http://130.61.67.88:8080/ords/hr/soda/latest/shopcartJSON?action=insert

{"items":[{"id":"0A6DF1E1A3A745D98DC3E3DD648732B2","...22T19:45:42.157916"}],"hasMore":false,"count":70,"itemsInserted":70}

As you can observe, we have inserted several objects in one shot.

Update a document:

Let’s update the JSON document with PONumber=0 that we searched for before (using the ID we took note of):

curl -i -X PUT --data-binary '{"PONumber" : "99999"}' -H "Content-Type: application/json" http://130.61.67.88:8080/ords/hr/soda/latest/shopcartJSON/1C0C85A9B7084DAD88AC9155DC998578

HTTP/1.1 200 OK

Date: Sat, 22 Dec 2018 19:55:10 GMT

X-Frame-Options: SAMEORIGIN

Cache-Control: no-cache,must-revalidate,no-store,max-age=0

ETag: BC9497081F1E314EE19DD777F2ED7E8425F51C21E7EC1968CEA8110A684C3F05

Last-Modified: Sat, 22 Dec 2018 19:55:10 GMT

Location: http://130.61.67.88:8080/ords/hr/soda/latest/shopcartJSON/1C0C85A9B7084DAD88AC9155DC998578

Content-Length: 0

 

List records, delete records and so forth:
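
As a hedged sketch following the same pattern as above (same endpoint and collection; the document ID is the one returned at insert time):

# list the documents stored in the collection
curl -X GET "http://130.61.67.88:8080/ords/hr/soda/latest/shopcartJSON/"

# delete a document by its ID
curl -i -X DELETE "http://130.61.67.88:8080/ords/hr/soda/latest/shopcartJSON/1C0C85A9B7084DAD88AC9155DC998578"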

And so on… Take a look at the SODA REST, Java and whitepaper documentation.

Discover more features

Take a look at the documentation on how to create more complex queries and many other features.
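
For instance, QBE (query-by-example) filters support comparison operators; a hedged sketch (the attribute and threshold value are illustrative):

curl -X POST --data-binary '{"PONumber":{"$gt":100}}' -H "Content-Type: application/json" "http://130.61.67.88:8080/ords/hr/soda/latest/shopcartJSON?action=query"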

Last, but not least, notice that the JSON object can be different in each row; there is no constraint that forces a specific structure.

Microservices and SODA

As you probably have noticed, SODA is nice for CRUD operations with JSON data (such as the list of products in the shopping cart) in the database. In addition, you can utilize a relational model for the rest of your app. In brief, SODA is good for building microservices for the modern applications of the digital era.

And finally, by storing your data in an Oracle database, you benefit from its inherent robustness, reliability, resilience, high availability, fault tolerance, distribution (sharding), in-memory capabilities, backup/recovery and more.

[diagram: ordsarch]

That’s all folks!

Enjoy 😉

 

 

Deploy Sample Application to Kubernetes Cluster (Oracle Cloud Container Clusters)


bandwidth close up computer connection
Photo by panumas nikhomkhai on Pexels.com

We’re going to deploy to K8S a simple sample HTML5/JS app made with the JET Toolkit.

  1. Grab a K8S cluster from here
  2. Follow setup instructions from here and here and here and/or check this post
  3. And finally execute this:
kubectl create deployment jetapp --image=docker.io/javiermugueta/myjetapp
kubectl expose deployment jetapp --type=LoadBalancer --port=8000
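
Once the load balancer gets a public IP, grab it and open http://<external-ip>:8000 in a browser:

kubectl get svc jetapp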

Enjoy 😉