
Set Up an Ingress Controller in an OKE Cluster


Step 1: Set up the NGINX ingress controller

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml

Step 2: Expose the ingress controller as a service of type LoadBalancer (with a public IP)

kubectl apply -f https://raw.githubusercontent.com/javiermugueta/rawcontent/master/cloud-generic.yaml

Step 3: Run this command repeatedly until the EXTERNAL-IP no longer shows <pending>, and grab the IP for testing later

kubectl get svc -n ingress-nginx
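If you prefer not to re-run the command by hand, here is a minimal polling sketch, assuming the service created above is named ingress-nginx in the ingress-nginx namespace:

# Poll until the load balancer address has been assigned
until [ -n "$(kubectl get svc ingress-nginx -n ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null)" ]; do
  echo "EXTERNAL-IP still pending..."
  sleep 10
done
kubectl get svc -n ingress-nginx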

Step 4: Create a self-signed certificate and a secret for SSL termination

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"


kubectl create secret tls tls-secret --key tls.key --cert tls.crt
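For reference, this is roughly how such a tls-secret is referenced from an Ingress resource for SSL termination. The snippet is purely illustrative (the service name is hypothetical); the actual manifests are applied in the next steps:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress-example
spec:
  tls:
  - secretName: tls-secret
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: hello-world-svc   # hypothetical service name
          servicePort: 80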

Step 5

kubectl create -f https://raw.githubusercontent.com/javiermugueta/rawcontent/master/hello-world-ingress.yaml

Step 6: A typical hello-world deployment with 3 replicas of its pod

kubectl create -f https://raw.githubusercontent.com/javiermugueta/rawcontent/master/ingress.yaml

Step 7: Test the URL (the external IP from Step 3) and see what happens…
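For example, from the command line (the -k flag skips certificate validation because the certificate is self-signed):

curl -k https://<EXTERNAL-IP>/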

That’s all folks! Hope it helps 🙂

Deploying a Coherence Cluster in Kubernetes


Coherence Operator is a Kubernetes operator for deploying Oracle Coherence in k8s. Let’s see how to do it.

1 Clean up any previous setup attempts:

helm del --purge sample-coherence
helm del --purge sample-coherence-operator
kubectl delete namespace sample-coherence-ns

2 Execute the following to set the working namespace and install the Coherence Operator:
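Since the namespace was removed in the cleanup step, it has to exist before switching the context to it; a minimal first command, assuming the sample namespace name used throughout:

kubectl create namespace sample-coherence-ns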

kubectl config set-context $(kubectl config current-context) --namespace=sample-coherence-ns

helm repo add coherence https://oracle.github.io/coherence-operator/charts

helm repo update

helm --debug install coherence/coherence-operator --name sample-coherence-operator --set "targetNamespaces={}" --set imagePullSecrets=sample-coherence-secret

helm ls

helm status sample-coherence-operator

3 Create a secret with your Oracle Container Registry credentials:

kubectl create secret docker-registry oracle-container-registry-secret --docker-server=container-registry.oracle.com --docker-username='youruser' --docker-password='yourpasswd' --docker-email='youremail'

4 Install the Coherence cluster:

helm --debug install coherence/coherence --name sample-coherence --set imagePullSecrets=oracle-container-registry-secret
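Before port-forwarding, it is worth checking that the Coherence pods are up and running, for instance:

kubectl get pods -n sample-coherence-ns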

5 Port-forward to a Coherence pod:

export POD_NAME=$(kubectl get pods --namespace sample-coherence-ns -l "app=coherence,release=sample-coherence,component=coherencePod" -o jsonpath="{.items[0].metadata.name}")

kubectl --namespace sample-coherence-ns port-forward $POD_NAME 20000:20000

6 Download and install Coherence Standalone

7 Download the HelloCoherence.java client and the example-client-config.xml cache configuration file

8 Build the client:

export COHERENCE_HOME=~/Oracle/Middleware/Oracle_Home/coherence
javac -cp .:${COHERENCE_HOME}/lib/coherence.jar HelloCoherence.java

9 Test it:

java -cp .:${COHERENCE_HOME}/lib/coherence.jar -Dcoherence.cacheconfig=$PWD/example-client-config.xml HelloCoherence

2019-07-11 01:21:33.575/0.538 Oracle Coh...

…

Oracle Coherence Version 12.2.1.3.0 Build 68243

2019-07-11 01:21:34.430/1.392 Oracle 

The value of the key is 7

That’s all, hope it helps 🙂

WebLogic Kubernetes Operator: Deploying a Java App in a WebLogic Domain on Oracle Kubernetes Engine (OKE) in 30 Minutes


WebLogic Kubernetes Operator provides a way of running WLS domains in a k8s cluster.

In this post we walk through the steps of the tutorial that you can find in the documentation here. So let’s get started!

What you need:

  • a k8s cluster
  • kubectl
  • maven
  • git
  • docker
  • 60 minutes
First, clone the operator repository and pull the required images:

git clone https://github.com/oracle/weblogic-kubernetes-operator

docker login

docker pull oracle/weblogic-kubernetes-operator:2.2.0

docker pull traefik:1.7.6

For the next step, if you don’t have a user, go to https://container-registry.oracle.com and register.

docker login container-registry.oracle.com 

docker pull container-registry.oracle.com/middleware/weblogic:12.2.1.3

K8s uses role-based access control (RBAC); grant cluster-admin rights to the default service account in kube-system (used by Helm):

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: helm-user-cluster-admin-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF
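To verify that the binding was created:

kubectl get clusterrolebinding helm-user-cluster-admin-role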

Traefik is a router that acts as the load balancer for the domain; install it with Helm:

helm install stable/traefik \
   --name traefik-operator \
   --namespace traefik \
   --values kubernetes/samples/charts/traefik/values.yaml  \
   --set "kubernetes.namespaces={traefik}" \
   --wait
Alternatively, you can create the values.yaml yourself and install with it:

cat <<EOF > values.yaml
serviceType: NodePort
service:
  nodePorts:
    http: "30305"
    https: "30443"
dashboard:
  enabled: true
  domain: traefik.example.com
rbac:
  enabled: true
ssl:
  enabled: true
  #enforced: true
  #upstream: true
  #insecureSkipVerify: false
  tlsMinVersion: VersionTLS12
EOF

helm install stable/traefik --name traefik-operator --namespace traefik --values values.yaml --set "kubernetes.namespaces={traefik}" --wait

Create a namespace and a service account for the operator:

kubectl create namespace sample-weblogic-operator-ns

kubectl create serviceaccount -n sample-weblogic-operator-ns sample-weblogic-operator-sa

cd weblogic-kubernetes-operator/

helm install kubernetes/charts/weblogic-operator \
   --name sample-weblogic-operator \
   --namespace sample-weblogic-operator-ns \
   --set image=oracle/weblogic-kubernetes-operator:2.2.0 \
   --set serviceAccount=sample-weblogic-operator-sa \
   --set "domainNamespaces={}" \
   --wait
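A quick check that the operator pod is up before going on:

kubectl get pods -n sample-weblogic-operator-ns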
Create the namespace for the domain and register it with the operator and Traefik:

kubectl create namespace sample-domain1-ns

helm upgrade \
   --reuse-values \
   --set "domainNamespaces={sample-domain1-ns}" \
   --wait \
   sample-weblogic-operator \
   kubernetes/charts/weblogic-operator
 
helm upgrade \
   --reuse-values \
   --set "kubernetes.namespaces={traefik,sample-domain1-ns}" \
   --wait \
   traefik-operator \
   stable/traefik

Creating the WLS domain credentials and image:

kubernetes/samples/scripts/create-weblogic-domain-credentials/create-weblogic-credentials.sh \
   -u weblogic -p welcome1 -n sample-domain1-ns -d sample-domain1
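To check that the credentials secret was created in the domain namespace:

kubectl get secrets -n sample-domain1-ns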

Tag the docker image created and push to a registry:

docker images

docker tag container-registry.oracle.com/middleware/weblogic:12.2.1.3 javiermugueta/weblogic:12.2.1.3

docker push javiermugueta/weblogic:12.2.1.3

NOTE: Remember to make this image private in the registry!!! As a recommended option, follow the steps here to push to the private registry offered by Oracle.

Now let’s make a copy of the YAML file containing the properties to change, and set the appropriate values:

cp kubernetes/samples/scripts/create-weblogic-domain/domain-home-in-image/create-domain-inputs.yaml .

mv create-domain-inputs.yaml mycreate-domain-inputs.yaml

vi mycreate-domain-inputs.yaml

(change the values in lines #16, #57, #65, #70, #104 and #107 appropriately). Here is the one I used, just in case it helps

And now let’s create the domain with the image:

cd kubernetes/samples/scripts/create-weblogic-domain/domain-home-in-image

./create-domain.sh -i ~/Downloads/weblogic-kubernetes-operator/mycreate-domain-inputs.yaml -o ~/Downloads/weblogic-kubernetes-operator/output -u weblogic -p welcome1 -e

Verify that everything is working:

kubectl get po -n sample-domain1-ns

kubectl get svc -n sample-domain1-ns

Change the type of the cluster and adminserver services to LoadBalancer:

kubectl edit svc/sample-domain1-cluster-cluster-1 -n sample-domain1-ns

kubectl edit svc/sample-domain1-admin-server-external -n sample-domain1-ns
(kubectl edit opens the service definition in vi; change the type field to LoadBalancer using vi commands and save)
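If you prefer not to edit the services interactively, the same change can be applied with kubectl patch; a sketch using the service names shown above:

kubectl patch svc sample-domain1-cluster-cluster-1 -n sample-domain1-ns -p '{"spec":{"type":"LoadBalancer"}}'
kubectl patch svc sample-domain1-admin-server-external -n sample-domain1-ns -p '{"spec":{"type":"LoadBalancer"}}'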

Verify and write down the public IPs of the AdminServer external service and the cluster:

kubectl get svc -n sample-domain1-ns

Create a simple java app and package it:

mvn archetype:generate -DgroupId=javiermugueta.blog -DartifactId=java-web-project -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false

mvn package

Open a browser, log in to the WLS AdminServer console and deploy your app (use the public IP of the AdminServer service):

Open a new browser tab and test the app (use the public IP of the WLS cluster service):

That’s all folks, hope it helps!! 🙂

Deploying an Oracle Database with Persistence Enabled in Oracle Kubernetes Engine in Ten Minutes or Less


In a previous post I explained how to create the same thing using an image published in the Docker registry under my user. Well… that post no longer works because I deleted the image for various reasons.

The method described here is better because the deployment file pulls the official image published here.

Therefore you only have to create a user, accept the license agreement (provided you agree with it) and follow the steps explained here.

Hope it helps! 😉

 

Configuring Grafana for Oracle Kubernetes Engine



INSTALL GRAFANA LOCALLY

brew install grafana

Open http://localhost:3000 in a browser (after starting Grafana, e.g. brew services start grafana)

INSTALL K8S PLUGIN FOR GRAFANA

grafana-cli plugins install grafana-kubernetes-app

brew services restart grafana

INSTALL PROMETHEUS ON OKE

helm install --name my-prometheus stable/prometheus

export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")

kubectl --namespace default port-forward $POD_NAME 9090
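A quick way to check that the port-forward works (Prometheus exposes a simple health endpoint):

curl -s http://localhost:9090/-/healthy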

CONFIGURE PROMETHEUS DATASOURCE

(screenshot: Prometheus data source configuration)

CONFIGURE K8S DATASOURCE

(screenshot: Kubernetes data source configuration)

USE IT

(slideshow of Grafana dashboard screenshots)

That’s all!

Enjoy 😉

Containerizing kubectl for working with OKE (Oracle Kubernetes Engine)


kubectl is one of the command line interfaces for managing k8s. In this post, we are containerizing kubectl for easy utilization across different environments.

Because kubectl in OKE relies on the OCI CLI, we are setting up the kubectl tool by adding it to the ocloudshell container we created in this post.

STEP 1

Configure OCI CLI as mentioned in the referenced post

STEP 2

Download kubectl and put it in the ocloudshell directory:

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl

STEP 3

Configure kubectl for OKE following the instructions you can find on the main page of each K8s cluster created in the OCI dashboard UI:

(screenshot: kubeconfig setup instructions in the OCI console)

STEP 4

Copy the .kube directory created in your home directory to the ocloudshell directory
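For example, assuming the ocloudshell directory is the current working directory:

cp -R ~/.kube .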

STEP 5

Create Dockerfile

# Base image: Oracle Server JRE 8
FROM store/oracle/serverjre:8
ENV LC_ALL=en_US.utf8 \
LANG=en_US.utf8
# Variables kept from the original ocloudshell image
ARG OPC_CLI_VERSION=18.1.2
ENV OPC_CLI_PKG=opc-cli-$OPC_CLI_VERSION.zip
WORKDIR /ocloudshell/
# Install the OCI CLI from the Oracle Linux yum repositories
RUN curl -o /etc/yum.repos.d/public-yum-ol7.repo http://yum.oracle.com/public-yum-ol7.repo \
&& yum-config-manager --enable ol7_developer_EPEL \
&& yum-config-manager --enable ol7_developer \
&& yum -y install unzip python-oci-cli \
&& rm -rf /var/cache/yum/*
WORKDIR /root
# Copy the OCI CLI configuration and API key, with restricted permissions
ADD .oci/ .oci/
RUN chmod 400 .oci/config
RUN chmod 400 .oci/oci_api_key.pem
# Copy the kubeconfig generated for the OKE cluster
ADD .kube/ .kube/
# Copy the kubectl binary and make it executable
ADD kubectl /usr/local/bin/kubectl
RUN chmod +x /usr/local/bin/kubectl
CMD ["/bin/bash"]

STEP 6

Create and push the container to a private repo*:

docker build -t yourrepo/ocloudshell .

docker push yourrepo/ocloudshell

(*) Remember, don’t push to a public repo or you will put your environment at serious risk!!

TEST IT

So far, so good. Let’s test the container by executing a command, for instance:

docker run -it javiermugueta/ocloudshell kubectl get po
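If you plan to use it often, a shell alias makes the containerized kubectl feel like a local binary (a sketch, assuming your own image name):

# for interactive use; assumes the image name you pushed earlier
alias kubectl='docker run -it --rm yourrepo/ocloudshell kubectl'
kubectl get nodes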

That’s all folks!

Enjoy 🙂

Creating a Java Microservice with Helidon/Microservice Archetype Deployed in Kubernetes



With Helidon you can create Java microservices easily. In this blog, we are creating and exposing a REST service that gets a JSON document stored in an Oracle database and returns it to the requester. For retrieving the JSON document from the database we are using ORDS and SODA, but you can use JDBC as well; we’ll show that shortly in another post.

First, let’s create the project with the available archetype:

mvn archetype:generate -DinteractiveMode=false -DarchetypeGroupId=io.helidon.archetypes -DarchetypeArtifactId=helidon-quickstart-se -DarchetypeVersion=0.10.5 -DgroupId=io.helidon.examples -DartifactId=quickstart-se -Dpackage=io.helidon.examples.quickstart.se

Modify the GreetService class to call a backend service and the MainTest class to remove the tests; you can find how it looks here on GitHub.

Package the app:

mvn clean package

Now, start the app

java -jar target/quickstart-se.jar

And test the app locally; the app starts listening on 0.0.0.0:8080:

http://localhost:8080/greet
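Or from the command line:

curl http://localhost:8080/greet
# the quickstart typically answers with a small JSON greeting message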

Now, let’s containerize the app with the Dockerfile included in the project:

docker build -t quickstart-se target

Run the container and test again:

docker run --rm -p 8080:8080 quickstart-se:latest

http://localhost:8080/greet

Now let’s deploy to Kubernetes:

First, we tag the container and then push it to the registry:

docker tag quickstart-se javiermugueta/quickstart-se

docker push javiermugueta/quickstart-se

Now let’s modify the deployment YAML to create a LoadBalancer service and to pull the image previously pushed to the registry:

...
  labels:
    app: ${project.artifactId}
spec:
  type: LoadBalancer
  selector:
...
    spec:
      containers:
      - name: ${project.artifactId}
        image: javiermugueta/quickstart-se
        imagePullPolicy: IfNotPresent
        ports:
...

 

Let’s deploy the app to k8s:

kubectl create -f target/app.yaml

kubectl get svc

NAME            TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)          AGE
quickstart-se   LoadBalancer   10.96.56.43   1xy.x1.x5.x24   8080:32243/TCP   2m

Take note of the public IP and test again:
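For example, from the command line, substituting the EXTERNAL-IP shown by kubectl get svc:

curl http://<EXTERNAL-IP>:8080/greet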
That’s all folks!!
Enjoy ;-)