From June 2019

Oracle SaaS Stories | Exporting Data to Cloud Object Storage with BI Cloud Extract Tool


Business Intelligence Cloud Extract is a tool that you can use to export data from Oracle SaaS. The tool is available under the /biacm path of the service URL, for example https://xxxx-envn.yy.dc.oraclezzz.com/biacm

At the time of this writing, the supported external storage repos are Oracle Cloud Object Storage “Classic” and UCM (Universal Content Management).

First Step: Configure cloud storage

Second Step: Create a connection in the tool

Click the “+” button:

Provide cloud storage information:

Host: <identitydomain>.<region>.storage.oraclecloud.com

Service Name: storage-<identitydomain>

Container: Click the reload button and select a container from the listbox
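Before saving the connection, you can sanity-check the storage credentials outside the tool. A minimal sketch with curl, assuming a hypothetical identity domain myidentitydomain and region us (Object Storage Classic uses Swift-style token auth):

# request an auth token; the response carries X-Auth-Token and X-Storage-Url headers
curl -i -H "X-Storage-User: Storage-myidentitydomain:myuser@example.com" \
  -H "X-Storage-Pass: mypassword" \
  https://myidentitydomain.us.storage.oraclecloud.com/auth/v1.0

# list the containers in the account with the token obtained above
curl -H "X-Auth-Token: AUTH_tkxxxx" \
  https://myidentitydomain.us.storage.oraclecloud.com/v1/Storage-myidentitydomain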

Third Step: Create a schedule

Click the “+” button and enter the appropriate values:

Select one or more data stores:

Provide the storage connection and the rest of the information, then click [Save]

Wait for the job to execute:

Go to object storage console and see what happened:

The zip files contain the data; the JSON files contain details of the job run.
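If you want to pull an extract down for inspection, something along these lines should work (the token and object names are placeholders):

# download one of the extract zips from the container
curl -H "X-Auth-Token: AUTH_tkxxxx" -o extract.zip \
  https://myidentitydomain.us.storage.oraclecloud.com/v1/Storage-myidentitydomain/mycontainer/extract.zip

# list the data files inside
unzip -l extract.zip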

That’s all folks, hope it helps! 🙂

Oracle Loyalty Cloud | REST API | Get Service Requests of a Member + Update SR



The method for getting this info is under the Engagement REST API here.

The call includes a parameter that filters by the member name, as follows:

curl -X GET -k -H 'Authorization: Basic whatever' -i 'https://serverdomain/crmRestApi/resources/11.13.18.05/serviceRequests?q=LoyMemberName="membernamehere"&onlyData=true'

Example:

curl -X GET -k -H 'Authorization: Basic am9ob........DM3Mzg=' -i 'https://xxxx-xxxx-xx-ext.oracledemos.com/crmRestApi/resources/11.13.18.05/serviceRequests?q=LoyMemberName="ad pepelu"&onlyData=true'

{
   "items" : [ {
     "SrId" : 300000183643204,
     "SrNumber" : "SR105156",
     "Title" : "bbbbb",
     "ProblemDescription" : null,
     "SeverityCdMeaning" : "High",
     "SeverityCd" : "ORA_SVC_SEV1",
     "AssigneeResourceId" : 300000129858698,
     ...
     "PrimaryContactPartyId" : 300000183643142,
     "PrimaryContactPartyUniqueName" : "ad pepelu",
     "PrimaryContactPartyName" : "ad pepelu",
     ...
     "ExtnsrMgmtFuseCreateLayout_InstallBase_1554923756871Expr" : "false"
   } ],
   "count" : 1,
   "hasMore" : false,
   "limit" : 25,
   "offset" : 0,
   ...
     "name" : "serviceRequests",
     "kind" : "collection"
   } ]

Using the same API, here is an example of how to update an existing SR:

curl -X PATCH -k -H 'Content-Type: application/vnd.oracle.adf.resourceitem+json' -H 'Authorization: Basic am9ob...3Mzg=' -i 'https://serverdomain/crmRestApi/resources/11.13.18.05/serviceRequests/SR105156' --data '{
"ProblemDescription" : "My face hurts from being so ugly"}'
...
{
   "SrId" : 300000183643204,
   "SrNumber" : "SR105156",
   "Title" : "bbbbb",
   "ProblemDescription" : "Me duele la cara de ser tan feo",
   ...
   "links" : [ {
     "rel" : "self",
...
.com:443/crmRestApi/resources/11.13.18.05/serviceRequests/SR105156/child/resourceMembers",
     "name" : "resourceMembers",
     "kind" : "collection"
   } ]

Hope it helps! 🙂

WebLogic Kubernetes Operator: Deploying a Java App in a WebLogic Domain on Oracle Kubernetes Engine (OKE) in 30 Minutes


WebLogic Kubernetes Operator provides a way of running WLS domains in a k8s cluster.

In this post we walk through the steps of the tutorial you can find in the documentation here. So let’s get started!

What you need:

  • a k8s cluster
  • kubectl
  • maven
  • git
  • docker
  • 60 minutes
git clone https://github.com/oracle/weblogic-kubernetes-operator

docker login

docker pull oracle/weblogic-kubernetes-operator:2.2.0

docker pull traefik:1.7.6

For the next step, if you don’t have a user, go to https://container-registry.oracle.com and register yourself:

docker login container-registry.oracle.com 

docker pull container-registry.oracle.com/middleware/weblogic:12.2.1.3

K8s uses role-based access control (RBAC), so grant the default service account in kube-system the cluster-admin role for Helm:

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: helm-user-cluster-admin-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF
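A quick check that the binding was created:

kubectl get clusterrolebinding helm-user-cluster-admin-role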

Traefik is a router that will act as the ingress controller:

helm install stable/traefik \
   --name traefik-operator \
   --namespace traefik \
   --values kubernetes/samples/charts/traefik/values.yaml  \
   --set "kubernetes.namespaces={traefik}" \
   --wait
Alternatively, create a custom values.yaml and install with it:

cat <<EOF > values.yaml
 serviceType: NodePort
 service:
   nodePorts:
     http: "30305"
     https: "30443"
 dashboard:
   enabled: true
   domain: traefik.example.com
 rbac:
   enabled: true
 ssl:
   enabled: true
   #enforced: true 
   #upstream: true
   #insecureSkipVerify: false
   tlsMinVersion: VersionTLS12
EOF
helm install stable/traefik --name traefik-operator --namespace traefik --values values.yaml  --set "kubernetes.namespaces={traefik}" --wait
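If the chart installed cleanly, you should see the controller pod running and the NodePorts from values.yaml:

kubectl get pods -n traefik

kubectl get svc -n traefik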

Namespace for the operator:

kubectl create namespace sample-weblogic-operator-ns

kubectl create serviceaccount -n sample-weblogic-operator-ns sample-weblogic-operator-sa
cd weblogic-kubernetes-operator/

helm install kubernetes/charts/weblogic-operator \
   --name sample-weblogic-operator \
   --namespace sample-weblogic-operator-ns \
   --set image=oracle/weblogic-kubernetes-operator:2.2.0 \
   --set serviceAccount=sample-weblogic-operator-sa \
   --set "domainNamespaces={}" \
   --wait
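A quick sanity check that the operator pod is up before going on:

kubectl get pods -n sample-weblogic-operator-ns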
kubectl create namespace sample-domain1-ns

helm upgrade \
   --reuse-values \
   --set "domainNamespaces={sample-domain1-ns}" \
   --wait \
   sample-weblogic-operator \
   kubernetes/charts/weblogic-operator
 
helm upgrade \
   --reuse-values \
   --set "kubernetes.namespaces={traefik,sample-domain1-ns}" \
   --wait \
   traefik-operator \
   stable/traefik

Creating the WLS domain image. First, create the domain credentials (a Kubernetes secret):

kubernetes/samples/scripts/create-weblogic-domain-credentials/create-weblogic-credentials.sh \
   -u weblogic -p welcome1 -n sample-domain1-ns -d sample-domain1
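The script stores the credentials in a Kubernetes secret; with these inputs the name should be sample-domain1-weblogic-credentials (hedging a bit on the exact naming convention), so you can verify it with:

kubectl get secret sample-domain1-weblogic-credentials -n sample-domain1-ns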

Tag the docker image created and push to a registry:

docker images

docker tag container-registry.oracle.com/middleware/weblogic:12.2.1.3 javiermugueta/weblogic:12.2.1.3

docker push javiermugueta/weblogic:12.2.1.3

NOTE: Remember to make this image private in the registry!!! As a recommended option, follow the steps here to push to the private registry offered by Oracle.

Now let’s make a copy of the yaml file with properties to change and put the appropriate values:

cp kubernetes/samples/scripts/create-weblogic-domain/domain-home-in-image/create-domain-inputs.yaml .

mv create-domain-inputs.yaml mycreate-domain-inputs.yaml

vi mycreate-domain-inputs.yaml

(change the values in lines #16, #57, #65, #70, #104 and #107 appropriately). Here is the one I used, just in case it helps

And now let’s create the domain with the image:

cd kubernetes/samples/scripts/create-weblogic-domain/domain-home-in-image

./create-domain.sh -i ~/Downloads/weblogic-kubernetes-operator/mycreate-domain-inputs.yaml -o ~/Downloads/weblogic-kubernetes-operator/output -u weblogic -p welcome1 -e

Verify that everything is working!

kubectl get po -n sample-domain1-ns

kubectl get svc -n sample-domain1-ns

Change the type of the cluster and adminserver services to LoadBalancer:

kubectl edit svc/sample-domain1-cluster-cluster-1 -n sample-domain1-ns

kubectl edit svc/sample-domain1-admin-server-external -n sample-domain1-ns
Use vi commands to change the service type to LoadBalancer.
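If you prefer a non-interactive alternative to kubectl edit, patching spec.type should achieve the same:

kubectl patch svc sample-domain1-cluster-cluster-1 -n sample-domain1-ns -p '{"spec":{"type":"LoadBalancer"}}'

kubectl patch svc sample-domain1-admin-server-external -n sample-domain1-ns -p '{"spec":{"type":"LoadBalancer"}}'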

Verify and write down the public IPs of the AdminServer external service and the cluster:

kubectl get svc -n sample-domain1-ns

Create a simple java app and package it:

mvn archetype:generate -DgroupId=javiermugueta.blog -DartifactId=java-web-project -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false

mvn package

Open a browser, log in to the WLS AdminServer console and deploy your app (use the public IP of the AdminServer service):

Open a new browser tab and test the app (use the public IP of the WLS cluster service):
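You can also test from the command line. The IPs are placeholders, 7001/8001 are the default admin and managed server ports in the sample inputs file, and the context root defaults to the maven artifactId:

# AdminServer console
curl -I http://[adminserver-public-ip]:7001/console

# the app served by the cluster
curl http://[cluster-public-ip]:8001/java-web-project/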

That’s all folks, hope it helps!! 🙂

Shared Disk Seen by Pods Deployed in two Independent OKE Clusters across two Cloud Regions | Remote Network Peering


In this episode we create two independent OKE clusters, one in Frankfurt and the other in Phoenix, and then create a File System in Frankfurt (a kind of NFS server) that acts as the repository for a shared persistent volume reachable by all the pods of a deployment running in both clusters.

Remote Peering

Oracle Cloud Infrastructure networking provides “Remote Peering”, which allows connecting networks (Virtual Cloud Networks, VCNs) in two different cloud regions.

Peering the two VCNs

Let’s create one VCN in Frankfurt and another in a second region, Phoenix in my case.

IMPORTANT: VCN CIDRs must not overlap!

Now create a DRG in Frankfurt, then create a Remote Peering Connection (RPC):

Do the same in Phoenix and grab the OCID of the new RPC created; we’ll need it in the next step:

Come back to the RPC in Frankfurt, click [Establish Connection], select the region and paste the OCID of the remote RPC:

After a while you should see the status PEERED in both RPCs:

Now, attach the VCN to the DRG on both sides:
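For reference, the same DRG/RPC plumbing can be sketched with the OCI CLI (all OCIDs are placeholders; double-check the syntax against your CLI version):

# create a DRG in Frankfurt
oci network drg create --compartment-id [yourcompartmentocid]

# create a remote peering connection on it
oci network remote-peering-connection create --compartment-id [yourcompartmentocid] --drg-id [yourdrgocid]

# connect the Frankfurt RPC to the Phoenix one
oci network remote-peering-connection connect --remote-peering-connection-id [frarpcocid] --peer-id [phxrpcocid] --peer-region-name us-phoenix-1

# attach each VCN to its DRG (run on both sides)
oci network drg-attachment create --drg-id [yourdrgocid] --vcn-id [yourvcnocid]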

So far, so good! The two VCNs are peered; now let’s manage how the networks reach each other. How? By routing them! We are going to create the routes with the OCI CLI (because at the time of this writing I wasn’t able to create them with the GUI). To do this, first grab the following info from both regions (you can look it up with the CLI, as sketched after the list):

  • compartment-OCID
  • vcn-OCID
  • drg-OCID
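For example, to look these up with the CLI:

oci network vcn list --compartment-id [yourcompartmentocid]

oci network drg list --compartment-id [yourcompartmentocid]

(add --region us-phoenix-1 to query the Phoenix side)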

Now let’s create a route from fra to phx:

oci network route-table create --compartment-id [yourcompartmentocid] --vcn-id [yourvcnocid] --route-rules '[{"cidrBlock":"192.168.0.0/16","networkEntityId":"[yourdrgocid]"}]'

And now from phx to fra:

oci network route-table create --compartment-id [yourcompartmentocid] --vcn-id [yourvcnocid] --route-rules '[{"cidrBlock":"10.0.0.0/16","networkEntityId":"[yourdrgocid]"}]' --region us-phoenix-1

Please note the CIDR block parameter: in this case it is the CIDR of the whole remote VCN, because we want to route to all the subnets on each side. The routes created look like this:

Now we must modify the route tables created in each region and add a rule so that the nodes in the private subnet can reach the internet via a NAT gateway; otherwise, k8s can’t reach the docker container repo I’m using (create one NAT gateway on each side in case you have not already done it):
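The NAT rule is just another entry in the route table. An update sketch (note that route-table update replaces the whole rule list, so the peering rule must be included again):

oci network route-table update --rt-id [yourroutetableocid] --route-rules '[{"cidrBlock":"192.168.0.0/16","networkEntityId":"[yourdrgocid]"},{"cidrBlock":"0.0.0.0/0","networkEntityId":"[yournatgatewayocid]"}]'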

Now assign the route tables created in each region to the private subnets of each VCN:

Now create a k8s cluster in each region (use the custom option because you must select the VCNs you created previously).

Now follow this post to create the shared file system in Frankfurt.

One more thing: configure the security lists to allow NFS traffic. The NFS ports are:

UDP: 111,1039,1047,1048,2049
TCP: 111,1039,1047,1048,2049
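As a sketch, opening 2049/TCP from the remote VCN with the CLI looks roughly like this (repeat for the other ports, use protocol "17" for UDP, and note that the update replaces the whole rule list):

oci network security-list update --security-list-id [yourseclistocid] --ingress-security-rules '[{"protocol":"6","source":"192.168.0.0/16","tcpOptions":{"destinationPortRange":{"min":2049,"max":2049}}}]'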

Soo faaar sooo goood! We have the VCN networks peered, DRGs created, VCNs attached to the DRGs, routes created, NFS traffic allowed, storage ready and k8s clusters created!

Finally, deploy this to both k8s clusters. NOTE: modify the yaml with the IP and export path of your own File System:

kubectl apply -f k8spod2nfs.yaml
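For context, the shared-volume part of that yaml boils down to an NFS PersistentVolume (plus a matching claim that the deployment mounts). A minimal sketch with hypothetical names and placeholder server values:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8spod2nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.2.7      # mount target private IP of your File System (placeholder)
    path: "/k8spod2nfs"   # export path (placeholder)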

Now exec into one of the pods in the Phoenix cluster and verify you can see the shared content. Then append to the index.html file and dump the content of the file. Finally, get the public IP of the service created.

MacBook-Pro:k8spod2nfs javiermugueta$ kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
k8spod2nfs-6c6665479f-b6q8j   1/1     Running   0          1m
k8spod2nfs-6c6665479f-md5s5   1/1     Running   0          1m
MacBook-Pro:k8spod2nfs javiermugueta$ kubectl exec -it k8spod2nfs-6c6665479f-b6q8j bash
root@k8spod2nfs-6c6665479f-b6q8j:/# cd /usr/share/nginx/html/
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# ls
file.txt  index.html  index.html.1
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# echo hi >> index.html
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# cat index.html
hola
adios
hi
root@k8spod2nfs-6c6665479f-b6q8j:/usr/share/nginx/html# exit
MacBook-Pro:k8spod2nfs javiermugueta$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
k8spod2nfs   LoadBalancer   10.96.133.154   129.146.208.7   80:31728/TCP   8m
kubernetes   ClusterIP      10.96.0.1       <none>          443/TCP        48m

Open a browser and enter the public IP of the service:

Now get the public IP of the service created in Frankfurt, open a browser and see what happens:

It shows the same content, awesome!!

And last, just in case you don’t trust it, change the content of index.html again from a pod on the Frankfurt side:

MacBook-Pro:k8spod2nfs javiermugueta$ kubectl exec -it k8spod2nfs-6f48c6464f-2447d bash
root@k8spod2nfs-6f48c6464f-2447d:/# cd /usr/share/nginx/html/
root@k8spod2nfs-6f48c6464f-2447d:/usr/share/nginx/html# echo bye >> index.html

It seems it’s working fine, haha!!

That’s all folks! 🙂