Set Up an Ingress Controller in OKE Cluster

Step 1: Install the NGINX ingress controller

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml

Step 2: Expose the ingress controller as a service of type LoadBalancer (with a public IP)

kubectl apply -f https://raw.githubusercontent.com/javiermugueta/rawcontent/master/cloud-generic.yaml

Step 3: Run this command repeatedly until EXTERNAL-IP shows a value instead of <pending>; grab the IP for later testing

kubectl get svc -n ingress-nginx
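The wait in Step 3 can also be scripted as a small poll loop. This is a minimal sketch: `poll` is a stub standing in for the real query (something like `kubectl get svc ingress-nginx -n ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`), and the IP it returns is a made-up placeholder.

```shell
#!/bin/sh
# Stub standing in for the real kubectl query; it pretends the load
# balancer gets its public IP on the third poll.
poll() {
  TRIES=$((TRIES + 1))
  if [ "$TRIES" -ge 3 ]; then
    EXTERNAL_IP="129.146.1.2"   # placeholder IP
  fi
}

TRIES=0
EXTERNAL_IP=""
while [ -z "$EXTERNAL_IP" ]; do
  poll
  if [ -z "$EXTERNAL_IP" ]; then
    sleep 1   # don't hammer the API server
  fi
done
echo "External IP: $EXTERNAL_IP"
```

Swap the stub for the real kubectl query and a longer sleep in practice.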

Step 4: Create a certificate for SSL termination

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"

kubectl create secret tls tls-secret --key tls.key --cert tls.crt
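It's worth sanity-checking the generated certificate before relying on it. This sketch repeats the openssl command from Step 4 in a scratch directory and inspects the certificate's subject and expiry:

```shell
#!/bin/sh
# Work in a scratch directory so an existing tls.key/tls.crt pair
# is not clobbered
WORKDIR=$(mktemp -d)
cd "$WORKDIR"

# Same command as Step 4: a self-signed certificate valid for one year
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc" 2>/dev/null

# Inspect the subject and expiry before loading the pair into the secret
SUBJECT=$(openssl x509 -in tls.crt -noout -subject)
echo "$SUBJECT"
openssl x509 -in tls.crt -noout -enddate
```

The subject should show the CN and O values passed with -subj.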

Step 5: Create the ingress rule for the application

kubectl create -f https://raw.githubusercontent.com/javiermugueta/rawcontent/master/hello-world-ingress.yaml

Step 6: Deploy a typical hello-world application (a deployment with three replicas)

kubectl create -f https://raw.githubusercontent.com/javiermugueta/rawcontent/master/ingress.yaml

Step 7: Test the URL (using the external IP from Step 3) and see what happens…

That’s all folks! Hope it helps 🙂

Deploying a Coherence Cluster in Kubernetes

Coherence-Operator is a Kubernetes operator for deploying Oracle Coherence in k8s. Let’s see how to do it.

1 Clean up any previous setup attempts:

helm del --purge sample-coherence
helm del --purge sample-coherence-operator
kubectl delete namespace sample-coherence-ns

2 Execute the following:

kubectl create namespace sample-coherence-ns

kubectl config set-context $(kubectl config current-context) --namespace=sample-coherence-ns

helm repo add coherence https://oracle.github.io/coherence-operator/charts

helm repo update

helm --debug install coherence/coherence-operator --name sample-coherence-operator --set "targetNamespaces={}" --set imagePullSecrets=sample-coherence-secret

helm ls

helm status sample-coherence-operator

3 Create a secret with your credentials to Oracle Container Registry:

kubectl create secret docker-registry oracle-container-registry-secret --docker-server=container-registry.oracle.com --docker-username='youruser' --docker-password='yourpasswd' --docker-email='youremail'

4 Install:

helm --debug install coherence/coherence --name sample-coherence --set imagePullSecrets=oracle-container-registry-secret

5 Proxy to a pod:

export POD_NAME=$(kubectl get pods --namespace sample-coherence-ns -l "app=coherence,release=sample-coherence,component=coherencePod" -o jsonpath="{.items[0].metadata.name}")

kubectl --namespace sample-coherence-ns port-forward $POD_NAME 20000:20000

6 Download and install Coherence Stand Alone

7 Download the HelloCoherence.java sample client and the example-client-config.xml cache configuration (the files used in the next two steps)

8 Build the client:

export COHERENCE_HOME=~/Oracle/Middleware/Oracle_Home/coherence
javac -cp .:${COHERENCE_HOME}/lib/coherence.jar HelloCoherence.java

9 Test it:

java -cp .:${COHERENCE_HOME}/lib/coherence.jar -Dcoherence.cacheconfig=$PWD/example-client-config.xml HelloCoherence

Abridged output:

2019-07-11 01:21:33.575/0.538 Oracle Coh...
Oracle Coherence Version Build 68243
2019-07-11 01:21:34.430/1.392 Oracle ...
The value of the key is 7

That’s all, hope it helps 🙂

BucketNotEmpty – Bucket named ‘xxxx’ is not empty. Delete all objects first


Oracle OCI Object Storage buckets can’t be deleted from the OCI dashboard unless they are empty… and no “empty bucket” option exists in the menu (at least at the time of this post).

Anyway, you can do it using the CLI… Follow these steps:

If you don’t have the OCI CLI installed, follow this post

oci os object bulk-delete -ns <identitydomainname> -bn <bucketname>
oci os bucket delete --bucket-name <bucketname>
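The two commands can be wrapped in a small helper for reuse. This is a dry-run sketch (the function name is mine, and it only prints the commands so you can double-check the bucket name before actually deleting anything); the first argument is your Object Storage namespace:

```shell
#!/bin/sh
# Dry run: print the delete commands instead of executing them, so the
# bucket name can be reviewed first.
empty_and_delete_bucket() {
  NAMESPACE=$1   # Object Storage namespace (shown in tenancy details)
  BUCKET=$2
  echo "oci os object bulk-delete -ns $NAMESPACE -bn $BUCKET"
  echo "oci os bucket delete --bucket-name $BUCKET"
}

empty_and_delete_bucket mynamespace mybucket
```

Once you are sure, remove the echos (or pipe the output to sh) to execute for real.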

Hope it helps! 😉


Microservices Versioning and the “Database per Service Pattern” with Oracle Autonomous Database

The database per service pattern with an Oracle database can be accomplished easily with one instance of Autonomous Database per microservice, a very easy to use and maintain database (relational and JSON).


But it is not only easy to use and maintain because you avoid administration tasks and managing different persistence technologies; in fact, you also get a very easy mechanism for versioning your microservices in parallel with your data: just clone the instance.

You can clone either from the dashboard or with commands that can easily be put in your CD pipeline.


oci db autonomous-database create-from-clone --compartment-id <mycompid> --cpu-core-count 1 --data-storage-size-in-tbs 5 --admin-password <password> --source-id <dbsourceid> --clone-type FULL --db-name <newname>
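In a CD pipeline the clone command could be parameterized per microservice version. Everything in this sketch is an assumption for illustration: the function name, the <service>v<version> naming scheme, and the placeholder values; it prints the CLI call instead of running it.

```shell
#!/bin/sh
# Build (but don't run) the clone command for a given microservice version.
clone_for_version() {
  SERVICE=$1
  VERSION=$2
  SOURCE_OCID=$3
  DB_NAME="${SERVICE}v${VERSION}"   # e.g. ordersv2
  echo "oci db autonomous-database create-from-clone" \
       "--compartment-id <mycompid> --cpu-core-count 1" \
       "--data-storage-size-in-tbs 5 --admin-password <password>" \
       "--source-id $SOURCE_OCID --clone-type FULL --db-name $DB_NAME"
}

clone_for_version orders 2 "<dbsourceid>"
```

Drop the echo (keeping the arguments) to execute the clone for real from the pipeline.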

As an example, in around 4-5 minutes you can clone a 5 TB database ready to be connected to a new version of your microservice.



With Oracle Autonomous Database you avoid the versioning anti-pattern quickly and easily.


And  what about putting an API Platform in between the users and the microservices? Think about it!


That’s all folks!

Enjoy 😉

Related content:

Microservices with Oracle

Microservices and SODA: JSON Data Stored in an Oracle Database Accessed through​ REST or Java

Creating a Java Microservice with Helidon/Microservice Archetype Deployed in Kubernetes

The Increasingly Important Role of API Management in the Adoption of Integrated SaaS Solutions Together with the Onprem and Third-party Systems




Building Producer and Consumer Clients in Go for Oracle Event Hub Cloud Service


Documentation is here, or watch the following video:

When the cluster is created, go to the details page and grab the connection URL, which has the format <broker1_ip>:6667,…,<brokern_ip>:6667
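The connection string is just the broker IPs joined with port 6667. A tiny helper can build it from a list (the IPs below are placeholders):

```shell
#!/bin/sh
# Join broker IPs into the <ip>:6667,<ip>:6667,... string Event Hub expects
brokers_url() {
  PORT=6667
  RESULT=""
  for ip in "$@"; do
    RESULT="${RESULT:+$RESULT,}${ip}:${PORT}"
  done
  echo "$RESULT"
}

brokers_url 10.0.0.1 10.0.0.2 10.0.0.3
# → 10.0.0.1:6667,10.0.0.2:6667,10.0.0.3:6667
```

The result can be passed as the brokers argument to the producer and consumer binaries built below.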



I’ve done the following (Mac):

git clone https://github.com/edenhill/librdkafka.git
cd librdkafka
./configure --prefix /usr/local
sudo make install
(run sudo make uninstall in case you want to remove the lib)

Alternatively, install it with Homebrew:

brew install librdkafka


I’ve downloaded sample consumer and producer code. Create producer.go and consumer.go files and put the sample code in each file.

go build producer.go

go build consumer.go

Test your clients:

./producer <broker1_ip>:6667,...,<brokern_ip>:6667 sample

./consumer <broker1_ip>:6667,...,<brokern_ip>:6667 consumer_example sample



That’s all folks!

🙂 Enjoy


How to Secure User Access to Internet-Faced Cloud Solutions


Suppose you are in charge of the security of a bank (CISO, CSO,…) that wants to control user access to the new ERP in the cloud that you are implementing.

There is a very simple and safe way to control which users access the environment.
Simply configure Single Sign-On on the cloud solution side so that the Service Provider is the ERP and the Identity Provider is your on-premises identity management infrastructure.

How does it work?

When a user requests the URL of the ERP, a login form hosted on the corporate servers appears, requesting credentials. Since this form is deployed on-premises, only users connected to the corporate network (directly or via VPN) can access it.

Oracle Identity Cloud is always provisioned when you buy cloud services and allows you to configure, among other things, the following:

  • federate users between the cloud and LDAP on premises without the need to store the password in the cloud
  • configure the SSO provided by the on-premises access system
  • configure several authentication factors (MFA) for administrators
  • define network perimeters (ranges of IP’s that can access the cloud)
  • define Risk Providers and Adaptive Security, which are mechanisms to evaluate the risk in user access actions
  • define Sign On policies, which are rules that apply differently depending on user roles (the more privileged the user, the stronger the rules applied)
  • out of the box reports with login attempts and application access

In addition to the out-of-the-box IDCS (Identity Cloud Service) features mentioned above, Oracle provides CASB (Cloud Access Security Broker).

Enjoy 😉

Connecting Oracle Integration Cloud Service (aka OIC) to API Platform Cloud Service


Oracle Integration Cloud Service is an integration platform consisting of several tools, such as:

  • Integration: a low code but extensible integration environment with many out of the box connector adapters and many more in the marketplace
  • Process: a Business Process Management and Business Rules environment
  • Visual Builder: a visual low code development environment

API Management

Oracle API Platform Cloud Service is an API life-cycle management solution. It allows you to plan, specify, implement, deploy, entitle, enforce policies on, publish, grant access to, and observe (analytics) your APIs.

Putting things together

OIC has a setup that allows connecting it to an API Platform instance easily. Let’s have a look:

  1. Create an ICS and an API Platform service instance. Grab the FQDN URL of the API Platform instance portal
  2. Go to your ICS service instance portal and, in the [Integrations] menu, go to [Configuration] and then [API Platform]
  3. Enter the information of the existing API Platform instance you created before and click on [Save]


Config is ready, and now what else?

Go to the list of integrations in ICS, select one that is finished and ready to activate, and click the [Activate] button or menu.


As you can see, the connection between ICS and API Platform is working, and we are prompted to choose what to do. We can activate the integration (only in ICS) or activate it in ICS and publish it to API Platform.

Let’s click on [Activate and Publish…]


You have two options: create a new API or add to an existing one. Leave the Deploy and Publish options unchecked. Once you have finished entering the values, click [Create]. The integration is activated and a new option appears in the hamburger menu:


Now, go to the API Platform portal instance and notice the new API created:



We haven’t deployed the API to a gateway, so it is not yet ready for consumption through API Platform; that is something we’ll explain in a coming post. For the moment, keep in mind that the integration implemented in ICS has been published to API Platform and is ready for you to start applying several API management best practices, such as security, traffic, interface, or routing policies.


That’s all folks!

Enjoy 😉