Oracle SaaS Stories | Exporting Data to Cloud Object Storage with BI Cloud Extract Tool

Business Intelligence Cloud Extract (BICC) is a tool that you can use to export data from Oracle SaaS. The tool is available under the /biacm service URL:

At the time of this writing, the supported external storage repositories are Oracle Cloud Object Storage "Classic" and UCM (Universal Content Management).

First Step: Configure cloud storage

Second Step: Create a connection in the tool

Click the “+” button:

Provide cloud storage information:

Host: <identitydomain>.<region>

Service Name: storage-<identitydomain>

Container: Click the reload button and select a container from the listbox

Third Step: Create a schedule

Click the “+” button and put appropriate values:

Select one or more data stores:

Provide storage connection and the rest of the information and click [Save]

Wait for the job to execute:

Go to the object storage console and see what happened:

The zip files contain the data; the JSON files contain the details of the job run.
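You can also list and download the extract output from the command line with curl. Here is a minimal sketch, assuming the standard Storage Classic REST URL pattern and placeholder values for the identity domain, region and container (adjust everything to your own environment):

```shell
# Assumed placeholder values -- replace with your identity domain, region and container
IDENTITY_DOMAIN="myidentitydomain"
REGION="us2"
CONTAINER="bicc-extracts"

# Storage Classic REST endpoint (assumed standard URL pattern)
STORAGE_URL="https://${IDENTITY_DOMAIN}.${REGION}.storage.oraclecloud.com/v1/Storage-${IDENTITY_DOMAIN}"

echo "Container listing URL: ${STORAGE_URL}/${CONTAINER}"

# Get an auth token first, then list the container (uncomment to run for real):
# TOKEN=$(curl -sI -H "X-Storage-User: Storage-${IDENTITY_DOMAIN}:myuser" \
#              -H "X-Storage-Pass: mypassword" \
#              "https://${IDENTITY_DOMAIN}.${REGION}.storage.oraclecloud.com/auth/v1.0" \
#         | awk -F': ' '/X-Auth-Token/ {print $2}' | tr -d '\r')
# curl -s -H "X-Auth-Token: ${TOKEN}" "${STORAGE_URL}/${CONTAINER}"
```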

That’s all folks, hope it helps! 🙂

Oracle Loyalty Cloud | REST API | Get Service Requests of a Member + Update SR

Photo by Lukáš Kováčik on

The method for getting this info is under the Engagement REST API here.

The call includes a parameter that filters by the Member Name, as follows:

curl -X GET -k -H 'Authorization: Basic whatever' -i 'https://serverdomain/crmRestApi/resources/"membernamehere"&onlyData=true'


curl -X GET -k -H 'Authorization: Basic am9ob........DM3Mzg=' -i '"ad pepelu"&onlyData=true'

   "items" : [ {
     "SrId" : 300000183643204,
     "SrNumber" : "SR105156",
     "Title" : "bbbbb",
     "ProblemDescription" : null,
     "SeverityCdMeaning" : "High",
     "SeverityCd" : "ORA_SVC_SEV1",
     "AssigneeResourceId" : 300000129858698,
     "PrimaryContactPartyId" : 300000183643142,
     "PrimaryContactPartyUniqueName" : "ad pepelu",
     "PrimaryContactPartyName" : "ad pepelu",
     "ExtnsrMgmtFuseCreateLayout_InstallBase_1554923756871Expr" : "false"
   } ],
   "count" : 1,
   "hasMore" : false,
   "limit" : 25,
   "offset" : 0,
   "links" : [ {
     "rel" : "self",
     "name" : "serviceRequests",
     "kind" : "collection"
   } ]
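If you just need one field out of a response like the one above, a quick shell filter can do it. A minimal sed-based sketch, assuming the pretty-printed one-field-per-line output shown above (for anything serious, use a real JSON parser such as jq):

```shell
# Small sample of the response shown above
RESPONSE='
   "SrId" : 300000183643204,
   "SrNumber" : "SR105156",
   "SeverityCdMeaning" : "High"
'

# Pull out the SrNumber value
SR_NUMBER=$(echo "$RESPONSE" | sed -n 's/.*"SrNumber" : "\([^"]*\)".*/\1/p')
echo "$SR_NUMBER"   # SR105156
```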

Using the same API, here is an example of how to update an existing SR:

curl -X PATCH -k -H 'Content-Type: application/' -H 'Authorization: Basic am9ob...3Mzg=' -i 'https://serverdomain/crmRestApi/resources/' --data '{
"ProblemDescription" : "Me duele la cara de ser tan feo"}'
   "SrId" : 300000183643204,
   "SrNumber" : "SR105156",
   "Title" : "bbbbb",
   "ProblemDescription" : "Me duele la cara de ser tan feo",
   "links" : [ {
     "rel" : "self",
     "name" : "resourceMembers",
     "kind" : "collection"
   } ]

Hope it helps! 🙂

WebLogic Kubernetes Operator: Deploying a Java App in a WebLogic Domain on Oracle Kubernetes Engine (OKE) in 30 Minutes

WebLogic Kubernetes Operator provides a way of running WLS domains in a k8s cluster.

In this post we walk through the steps of the tutorial you can find in the documentation here. So let's get started!

What you need:

  • a k8s cluster
  • kubectl
  • maven
  • git
  • docker
  • 60 minutes
git clone

docker login

docker pull oracle/weblogic-kubernetes-operator:2.2.0

docker pull traefik:1.7.6

For the next step, if you don't have a user yet, go to the registry site and register yourself

docker login 

docker pull

K8s uses role-based access control (RBAC); bind the cluster-admin role to the default service account in kube-system (used by Helm's Tiller):

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: helm-user-cluster-admin-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF

Traefik is an ingress router used to publish the services:

helm install stable/traefik \
   --name traefik-operator \
   --namespace traefik \
   --values kubernetes/samples/charts/traefik/values.yaml  \
   --set "kubernetes.namespaces={traefik}" \
   --wait
Or create your own values.yaml and install with it:

cat <<EOF > values.yaml
serviceType: NodePort
service:
  nodePorts:
    http: "30305"
    https: "30443"
dashboard:
  enabled: true
rbac:
  enabled: true
ssl:
  enabled: true
  #enforced: true
  #upstream: true
  #insecureSkipVerify: false
  tlsMinVersion: VersionTLS12
EOF
helm install stable/traefik --name traefik-operator --namespace traefik --values values.yaml  --set "kubernetes.namespaces={traefik}" --wait

Create a namespace and a service account for the operator:

kubectl create namespace sample-weblogic-operator-ns

kubectl create serviceaccount -n sample-weblogic-operator-ns sample-weblogic-operator-sa
cd weblogic-kubernetes-operator/

helm install kubernetes/charts/weblogic-operator \
   --name sample-weblogic-operator \
   --namespace sample-weblogic-operator-ns \
   --set image=oracle/weblogic-kubernetes-operator:2.2.0 \
   --set serviceAccount=sample-weblogic-operator-sa \
   --set "domainNamespaces={}" \
   --wait
kubectl create namespace sample-domain1-ns

helm upgrade \
   --reuse-values \
   --set "domainNamespaces={sample-domain1-ns}" \
   --wait \
   sample-weblogic-operator \
   kubernetes/charts/weblogic-operator
helm upgrade \
   --reuse-values \
   --set "kubernetes.namespaces={traefik,sample-domain1-ns}" \
   --wait \
   traefik-operator \
   stable/traefik

Creating the WLS domain image:

kubernetes/samples/scripts/create-weblogic-domain-credentials/ \
   -u weblogic -p welcome1 -n sample-domain1-ns -d sample-domain1

Tag the docker image created and push it to a registry:

docker images

docker tag javiermugueta/weblogic:

docker push javiermugueta/weblogic:

NOTE: Remember to make this image private in the registry! As a recommended option, follow the steps here to push to the private registry offered by Oracle.

Now let's make a copy of the yaml properties file and set the appropriate values:

cp kubernetes/samples/scripts/create-weblogic-domain/domain-home-in-image/create-domain-inputs.yaml .

mv create-domain-inputs.yaml mycreate-domain-inputs.yaml

vi mycreate-domain-inputs.yaml

(change the values in lines #16, #57, #65, #70, #104 and #107 appropriately). Here is the one I used, just in case it helps
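Instead of editing with vi, you can script the changes. A hedged sketch with sed, using hypothetical example values and a small demo file (the real create-domain-inputs.yaml has many more fields; verify the field names against your copy before running):

```shell
# Demo copy with three of the fields to change (your real file has many more)
cat > demo-inputs.yaml <<EOF
domainUID: domain1
weblogicCredentialsSecretName: domain1-weblogic-credentials
namespace: default
EOF

# Set the values non-interactively (-i.bak keeps a backup and works on GNU and BSD sed)
sed -i.bak \
    -e 's|^domainUID:.*|domainUID: sample-domain1|' \
    -e 's|^weblogicCredentialsSecretName:.*|weblogicCredentialsSecretName: sample-domain1-weblogic-credentials|' \
    -e 's|^namespace:.*|namespace: sample-domain1-ns|' \
    demo-inputs.yaml

grep '^domainUID:' demo-inputs.yaml   # domainUID: sample-domain1
```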

And now let’s create the domain with the image:

cd kubernetes/samples/scripts/create-weblogic-domain/domain-home-in-image

./ -i ~/Downloads/weblogic-kubernetes-operator/mycreate-domain-inputs.yaml -o ~/Downloads/weblogic-kubernetes-operator/output -u weblogic -p welcome1 -e

Verify that everything is working:

kubectl get po -n sample-domain1-ns

kubectl get svc -n sample-domain1-ns

Change the type of the cluster and adminserver services to LoadBalancer:

kubectl edit svc/sample-domain1-cluster-cluster-1 -n sample-domain1-ns

kubectl edit svc/sample-domain1-admin-server-external -n sample-domain1-ns
Use vi commands to set the service type to LoadBalancer
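If you prefer a non-interactive alternative to vi, the same change can be applied with kubectl patch (same service names as above):

```shell
# JSON patch that switches the service type
PATCH='{"spec":{"type":"LoadBalancer"}}'
echo "$PATCH"

# Apply it to both services (uncomment to run against your cluster):
# kubectl patch svc/sample-domain1-cluster-cluster-1 -n sample-domain1-ns -p "$PATCH"
# kubectl patch svc/sample-domain1-admin-server-external -n sample-domain1-ns -p "$PATCH"
```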

Verify and write down the public IP’s of the AdminServer external service and the cluster:

kubectl get svc -n sample-domain1-ns

Create a simple java app and package it:

mvn archetype:generate -DartifactId=java-web-project -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false

mvn package

Open a browser, log in to the WLS AdminServer console and deploy your app (use the public IP of the AdminServer service):

Open a new browser tab and test the app (use the public IP of the WLS cluster service):

That’s all folks, hope it helps!! 🙂

Shared Disk Seen by Pods Deployed in two Independent OKE Clusters across two Cloud Regions | Remote Network Peering

In this episode we create two independent OKE clusters, one in Frankfurt and the other in Phoenix, and then create a File System in Frankfurt (a kind of NFS server) that acts as the repository for a shared persistent volume reachable by all the pods of a deployment running in both clusters.

Remote Peering

Oracle Cloud Infrastructure networking provides "Remote Peering", which allows connecting Virtual Cloud Networks (VCN's) in two different cloud regions.

Peering the 2 VCN’s

Let's create one VCN in Frankfurt and another one in a second region, Phoenix in my case.

IMPORTANT: VCN CIDR’s must not overlap!

Now create a DRG in Frankfurt, then create a Remote Peering Connection (RPC):

Do the same in Phoenix and grab the OCID of the new RPC created; we'll need it in the next step:

Come back to the RPC in Frankfurt, click [Establish Connection], select the region and paste the OCID of the remote RPC:

After a while you should see the status PEERED in both RPC's:

Now, attach the VCN to the DRG in both sides:

So far, so good! The two VCN's are peered; now let's make the networks reach each other. How? By routing them! We are going to create the routes with the OCI CLI (because at the time of this writing I wasn't able to create them with the GUI). Before doing it, grab the following info from both regions:

  • compartment-OCID
  • vcn-OCID
  • drg-OCID

Now let’s create a route from fra to phx:

oci network route-table create --compartment-id [yourcompartmentocid] --vcn-id [yourvcnocid] --route-rules '[{"cidrBlock":"","networkEntityId":"[yourdrgocid]"}]'

And now from phx to fra:

oci network route-table create --compartment-id [yourcompartmentocid] --vcn-id [yourvcnocid] --route-rules '[{"cidrBlock":"","networkEntityId":"[yourdrgocid]"}]' --region us-phoenix-1

Please note the cidrBlock parameter, which in this case is the CIDR of the whole remote VCN, because we want to route to all the subnets on the other side. The routes created look like this:
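For illustration, here is how the route-rules JSON could look with hypothetical, non-overlapping CIDRs (10.0.0.0/16 for the Frankfurt VCN, 192.168.0.0/16 for the Phoenix one) and a placeholder DRG OCID; the cidrBlock on each side is the CIDR of the remote VCN:

```shell
# Hypothetical CIDRs -- use your own VCN CIDRs; they must not overlap
FRA_CIDR="10.0.0.0/16"
PHX_CIDR="192.168.0.0/16"
DRG_OCID="ocid1.drg.oc1..exampleuniqueID"   # placeholder

# Rule for the Frankfurt route table: send Phoenix-bound traffic to the DRG
FRA_RULES="[{\"cidrBlock\":\"${PHX_CIDR}\",\"networkEntityId\":\"${DRG_OCID}\"}]"
echo "$FRA_RULES"

# oci network route-table create --compartment-id [yourcompartmentocid] \
#     --vcn-id [yourvcnocid] --route-rules "$FRA_RULES"
```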

Now we must modify the routes created in each region and add a rule so that the nodes in the private subnet can reach the internet via a NAT gateway; otherwise, k8s can't reach the docker container repo I'm using (create one NAT gateway on each side in case you haven't already done it):

Now assign the route tables created in each region to the private subnets of each VCN:

Now create a K8S cluster in each region (use the custom option, because you must select the VCN's you created previously).

Now follow this post to create the shared file system in Frankfurt.

One more thing: configure the security lists to allow NFS traffic. The NFS ports are:

UDP: 111,1039,1047,1048,2049
TCP: 111,1039,1047,1048,2049
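Once the security lists are in place, you can check reachability of those ports from a node or pod on the Phoenix side. A small sketch using nc (netcat), with a placeholder NFS mount target IP:

```shell
NFS_HOST="10.0.1.10"   # placeholder: the mount target IP of your File System
NFS_PORTS="111 1039 1047 1048 2049"

for p in $NFS_PORTS; do
  echo "would check ${NFS_HOST}:${p}"
  # Uncomment to actually probe each TCP port with a 3-second timeout:
  # nc -z -w 3 "$NFS_HOST" "$p" && echo "port $p open" || echo "port $p CLOSED"
done
```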

Soo faaar sooo goood! We have the VCN networks peered, DRG's created, VCN's attached to the DRG's, routes created, NFS traffic allowed, storage ready and k8s clusters created!

Finally, deploy this to both k8s clusters. NOTE: modify the yaml with the specific IP and export path of your own File System:

kubectl apply -f k8spod2nfs.yaml

Now exec into one of the pods in the Phoenix cluster and verify you can see the shared content. Then modify the index.html file and dump the content of the file. Finally, get the public IP of the service created.

MacBook-Pro:k8spod2nfs javiermugueta$ kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
k8spod2nfs-6c6665479f-b6q8j   1/1     Running   0          1m
k8spod2nfs-6c6665479f-md5s5   1/1     Running   0          1m
MacBook-Pro:k8spod2nfs javiermugueta$ kubectl exec -it k8spod2nfs-6c6665479f-b6q8j bash
root@k8spod2nfs-6c6665479f-b6q8j:/# cd /usr/share/nginx/html/
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# ls
file.txt  index.html  index.html.1
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# echo hi >> index.html
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# cat index.html
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# exit
MacBook-Pro:k8spod2nfs javiermugueta$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
k8spod2nfs   LoadBalancer   80:31728/TCP   8m
kubernetes   ClusterIP                 443/TCP        48m

Open a browser and put the public IP of the service:

Now get the public IP of the service created in Frankfurt, open a browser and see what happens:

It shows the same content, awesome!!

And last, just in case you don't believe it, change the content of index.html again from a pod on the Frankfurt side:

MacBook-Pro:k8spod2nfs javiermugueta$ kubectl exec -it k8spod2nfs-6f48c6464f-2447d bash
root@k8spod2nfs-6f48c6464f-2447d:/# cd /usr/share/nginx/html/
root@k8spod2nfs-6f48c6464f-2447d:/usr/share/nginx/html# echo bye >> index.html

It seems it's working fine, haha!!

That’s all folks! 🙂

Scheduling with Developer Cloud Service (DevCS) the Start/Stop Sequence of an Oracle Analytics Cloud (OAC) Instance

Oracle Developer Cloud Service (DevCS) is a CI/CD environment provided by Oracle for cloud customers. It is almost free: you only pay for the storage and the compute used when jobs run.

Oracle Analytics Cloud (OAC) is a powerful and proactive reporting tool.


Builds are the well known tasks typical in Jenkins:

Start OAC Instance Job

Create a new build job, go to Configure->Steps, from the [Add Step] dropdown listbox select “PSMCli” and “Unix Shell”.

In the PSMCli step provide user, password, identity-domain id (the id, not the name), region and output format.

In the Unix Shell step put:

psm AUTOANALYTICSINST start-service -s [instancename]

Now click on the "Settings" icon (the gears), go to the Triggers tab and configure the scheduling using cron format:
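For example, assuming you want the instance up only during working hours, the trigger expressions could look like this (standard five-field cron; verify the exact format DevCS expects in your version):

```
# start job trigger: 08:00, Monday to Friday
0 8 * * 1-5
# stop job trigger: 19:00, Monday to Friday
0 19 * * 1-5
```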

Stop OAC Instance Job

Follow the same steps as above, except for the command syntax, which is:

 psm AUTOANALYTICSINST stop-service -s [instancename]

And if you need to execute jobs depending ones on the others, you can utilise pipelines!


Create a build step, type one of the following and see the output (example):

psm help
psm [servicetype] help

Hope it helps! 🙂

Load Balancing, High Availability and Fail-Over of a Micro-Service Deployed in two Separated Kubernetes Clusters: one running in Oracle Kubernetes Engine (OKE) and the other in Google Kubernetes Engine (GKE)

Oracle Cloud Edge Services

Oracle Cloud Infrastructure provides Edge Services, a group of services related to DNS, Health Checks, Traffic Management and WAF (Web Application Firewall).

In this episode we are utilising DNS Zone Management, Traffic Management Steering Policies and Health Checks for load balancing and fail-over of a micro-service running in two different Kubernetes clusters, in two different regions and distinct cloud providers, giving a robust solution that accomplishes a very powerful load balanced, active-active and disaster recovery topology.

Deploying the micro-service

Deploy the following to two different k8s clusters, such as OKE in two distinct regions, or OKE and GKE. As OKE and GKE are pretty much identical, we can use kubectl and the Kubernetes Dashboard in both of them, as we prefer:

k8s deployment in OKE visualised with Kubernetes kubectl
k8s deployment in OKE visualised with Kubernetes dashboard
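With both clusters configured in your kubeconfig, deploying to the two of them is just a loop over the contexts. A sketch with hypothetical context names (list your real ones with kubectl config get-contexts):

```shell
# Hypothetical context names -- substitute the ones from your kubeconfig
CONTEXTS="oke-frankfurt gke-uscentral"

for ctx in $CONTEXTS; do
  echo "deploying to context: $ctx"
  # Uncomment to run the actual deployment against each cluster:
  # kubectl --context "$ctx" apply -f deployment.yaml
done
```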

It is a very simple service that greets you and tells you where it is running.

Greetings from OKE
Greetings from GKE

Configuring DNS

For this part of the setup we need an FQDN; we are using a name I registered myself.

Let’s create domain entries in OCI. Create a DNS zone in OCI as follows:

Now, let's grab the DNS servers, go to our registrar and change the DNS configuration so that it points to the Oracle DNS servers:

Verify the change:

Configuring Health Checks

Let's create a Health Check that we'll use later in the traffic management policy. Health checks are performed from outside OCI, from a list of vantage points running in Azure, Google or AWS; select your preferred choice.

Configuring Traffic Management Steering Policies

Let’s create a traffic management policy as follows:

Testing it all

Ok, we have all the tasks already done, let’s test it!

Delete the deployment in OKE:

Go to the Traffic policy and verify that the OKE endpoint is unhealthy:

Go to your browser and request the service; as you can see, it is now served from GKE:

Redeploy in OKE again:

As you can see, the OKE service is running well again:

Now let’s delete the deployment in GKE:

Now the greeting is retrieved from OKE again:

And that’s all folks, hope it helps! 🙂

Integrating DevCS Notifications with Slack Using WebHooks

One interesting thing regarding CI and DevOps is the ability to be notified when things happen, without the need to log in to a web app every hour to see what happened.

Oracle Developer Cloud Service (DevCS, a CI/DevOps tool from Oracle Cloud) can be configured to send notifications to several channels; one of them is Slack:

Let’s have a look how to configure it.

Slack Side Configuration

Ask your Slack administrators to allow you to install the "Incoming Webhooks" app

Once you are allowed, install and configure "Incoming Webhooks", selecting the Slack channel you want the notifications to be sent to, the name, icon, attachments and the like. Finally, grab the Webhook URL for later.
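You can verify the webhook from the command line before wiring DevCS to it. Incoming Webhooks accept a simple JSON payload with a text field (placeholder URL below):

```shell
WEBHOOK_URL="https://hooks.slack.com/services/T0000/B0000/XXXXXXXX"   # placeholder
PAYLOAD='{"text":"Hello from a test message"}'
echo "$PAYLOAD"

# Uncomment to actually post the message to your channel:
# curl -X POST -H 'Content-Type: application/json' --data "$PAYLOAD" "$WEBHOOK_URL"
```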

DevCS Side Configuration

Go to DevCS, select a project and go to Administration/Webhooks

Create a new Webhook. Put the URL created on the Slack side in the URL field, subscribe to the kinds of notifications you are interested in and click [Save]:

Click on [Test] button and verify a test message reaches the slack channel:

Now launch a build, create or edit an announcement, or do any task that generates notifications, and verify the notification reaches the channel:

That’s all, hope it helps! 🙂