From January 2019

Connecting Oracle Integration Cloud Service (aka OIC) to API Platform Cloud Service


Integration

Oracle Integration Cloud Service is an integration platform consisting of several tools:

  • Integration: a low-code but extensible integration environment with many out-of-the-box connector adapters and many more in the marketplace
  • Process: a Business Process Management and Business Rules environment
  • Visual Builder: a visual low-code development environment

API Management

Oracle API Platform Cloud Service is an API life cycle management solution. It allows you to plan, specify, implement, deploy, publish, and grant entitlements to your APIs, enforce policies on them, and observe them with analytics.

Putting things together

OIC has a setup that allows connecting it to an API Platform instance easily. Let’s have a look:

  1. Create an ICS and an API Platform service instance. Grab the FQDN URL of the API Platform instance portal
  2. Go to your ICS service instance portal and, in the [Integrations] menu, go to [Configuration] and then [API Platform]
  3. Enter the information of the existing API Platform instance you created before and click on [Save]


Config is ready, and now what else?

Go to the list of integrations in ICS, select one that is finished and ready to activate, and click the [Activate] button or menu.


As you can see, the connection between ICS and API Platform is working, and we are prompted to choose what to do: we can activate the integration (only in ICS) or activate it in ICS and publish it to API Platform.

Let’s click on [Activate and Publish…]


You have two options: create a new API or add to an existing one. Leave the Deploy and Publish options unchecked. Once you have finished entering the values, click [Create]. The integration is activated and a new option appears in the hamburger menu:


Now, go to the API Platform portal and notice the new API that was created:


We haven’t deployed the API to a gateway, therefore it is not yet ready for consumption through API Platform; that is something we’ll explain in a coming post. For the moment, keep in mind that the integration implemented in ICS has been published to API Platform and is ready for applying several API management best practices, such as security, traffic, interface, or routing policies.


That’s all folks!

Enjoy 😉


Creating a Fast & Simple Container for Sending Messages to a Topic in Oracle Event Hub Cloud Service (aka OEHCS, which is a Kafka cluster) and Deploying it to a Kubernetes Cluster


The container uses four environment variables (OEHCS_CONNECTURL, TOPIC_NAME, NUM_PARTITIONS and MESSAGE); you can find a container already built for you here.
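For a quick local test (assuming Docker on your machine can reach the Kafka brokers), the prebuilt image can be run directly; the broker addresses below are placeholders:

docker run \
  -e OEHCS_CONNECTURL="<ip1>:6667,<ip2>:6667" \
  -e TOPIC_NAME="R1" \
  -e NUM_PARTITIONS="10" \
  -e MESSAGE="{'put here what you want'}" \
  javiermugueta/oehcsnodeproducer-direct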

SOURCE CODE OF THE PRODUCER

// producer-direct.js: sends the same message to every partition of the
// topic in round-robin fashion, forever
var sleep = require('system-sleep');
var kafka = require('kafka-node');

const oehcs_connect_url = process.env.OEHCS_CONNECTURL;
const topic_name = process.env.TOPIC_NAME;
const num_partitions = parseInt(process.env.NUM_PARTITIONS, 10);
const message = process.env.MESSAGE;

var client = new kafka.KafkaClient({kafkaHost: oehcs_connect_url});
var producer = new kafka.HighLevelProducer(client);

// wait until the producer is connected to the cluster before sending
producer.on('ready', function () {
  var i = 0;
  while (i >= 0) {
    var payloads = [{ topic: topic_name, messages: message, partition: i }];
    producer.send(payloads, function (err, data) {
      if (err) console.error(err);
      else console.log(data);
    });
    i = i + 1;
    // after one pass over all the partitions, pause briefly and start over
    if (i > num_partitions - 1) {
      i = 0;
      sleep(1);
    }
  }
});

producer.on('error', function (err) {
  console.error(err);
});

THE DOCKERFILE

FROM oraclelinux:7-slim
WORKDIR /app
ADD . /app
# install Node.js 10.x from the NodeSource repo
RUN curl --silent --location https://rpm.nodesource.com/setup_10.x | bash -
RUN yum -y install nodejs npm
# install the two dependencies the producer needs
RUN npm install kafka-node system-sleep
CMD ["node","producer-direct.js"]
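If you prefer to build and push your own image instead of using the prebuilt one, something like this should do (the repository name is yours to choose):

docker build -t yourrepo/oehcsnodeproducer-direct .
docker push yourrepo/oehcsnodeproducer-direct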

THE YAML FOR K8S DEPLOYMENT

apiVersion: apps/v1
kind: Deployment
metadata:
  name: oehcsnodeproducer-direct
  labels:
    app: oehcsnodeproducer-direct
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oehcsnodeproducer-direct
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: oehcsnodeproducer-direct
    spec:
      containers:
      - name: oehcsnodeproducer-direct
        image: javiermugueta/oehcsnodeproducer-direct
        env:
        - name: OEHCS_CONNECTURL
          value: "<ip1>:6667,<ip2>:6667,..."
        - name: TOPIC_NAME
          value: "R1"
        - name: NUM_PARTITIONS
          value: "10"
        - name: MESSAGE
          value: "{'put here what you want'}"

TEST IT AND SEE WHAT HAPPENS

Create the deployment and, after 10 minutes, take a look at the message production rate:
kubectl apply -f my.yaml
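While it runs, you can peek at the producer's console output (a quick sketch):

kubectl logs -f deploy/oehcsnodeproducer-direct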
More or less 400 messages/second…
Scale the deployment and take a look at the new production rates:
kubectl scale deployment oehcsnodeproducer-direct --replicas=2
Around 8000 messages/second!
Now add 9 partitions to the topic (a command for that is sketched below) and take a look at the new rates:
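Partitions can be added from the OEHCS console or, alternatively, with the standard Kafka tooling from one of the brokers; a sketch, with a placeholder ZooKeeper address:

/u01/oehpcs/confluent/bin/kafka-topics.sh --zookeeper <zk-host>:2181 --alter --topic R1 --partitions 10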
With 2 pods running and 10 partitions we are producing around 10K messages per second! As you can see, partitioning improves performance!
Let’s double the number of pods and see the new rates:
kubectl scale deployment oehcsnodeproducer-direct --replicas=4
And now 18K messages/second!
That’s all folks!
Enjoy 😉

Mirroring a Topic Between 2 Oracle Event Hub (Kafka) Clusters in 20 Minutes


Oracle Event Hub Cloud Service (OEHCS) is a managed Kafka PaaS cloud service. In a few minutes you can provision a full Kafka cluster ready for creating topics and sending and consuming messages.


In this post we will configure a feature called ‘mirroring’ that allows you to replicate the messages that are sent to a topic in a source cluster to another topic in a destination cluster.

FIRST

Create 2 OEHCS instances, with the names and sizes you want. See this post for help (coming soon).


Create a topic in the origin cluster, name it R1.


Create a topic in the destination cluster, name it R1R1.


SECOND

Grab the public IP of one of the brokers in the destination cluster. SSH into that server and follow what is stated here (except for point number 10).


Example configuration files:

mkdir -p /u01/oehpcs/confluent/etc/mirror-maker
cd /u01/oehpcs/confluent/etc/mirror-maker

vi sourceClusterConsumer.config

bootstrap.servers=130.61.36.87:6667,130.61.86.29:6667
group.id=replica-consumer
exclude.internal.topics=true
auto.offset.reset=earliest
partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor

vi targetClusterProducer.config

bootstrap.servers=130.61.46.183:6667,130.61.80.227:6667
acks=-1
max.in.flight.requests.per.connection=1
compression.type=none

Message for hackers: the IPs were real one day, but they don’t exist any more 😉

THIRD

Execute this command:

/u01/oehpcs/confluent/bin/kafka-mirror-maker.sh --consumer.config /u01/oehpcs/confluent/etc/mirror-maker/sourceClusterConsumer.config --producer.config /u01/oehpcs/confluent/etc/mirror-maker/targetClusterProducer.config --num.streams 2 --whitelist "R.*" --message.handler kafka.tools.OehcsTopicSuffixMirrorMakerHandler --message.handler.args R1

FOURTH

Produce messages to the R1 topic in the origin cluster any way you like (see this post for help, or use the console producer sketched below).

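Any Kafka client will do; for a quick test, the console producer shipped with the cluster works (the broker address is a placeholder):

/u01/oehpcs/confluent/bin/kafka-console-producer.sh --broker-list <source-broker>:6667 --topic R1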

Please note that a yellow band appears in the Throughput graph, indicating that bytes are going out of the topic. Who is extracting messages? The MirrorMaker consumer we just started.

FIFTH

Notice that the R1R1 topic in the destination cluster is receiving messages.

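You can also check from the command line with the standard console consumer (the broker address is a placeholder):

/u01/oehpcs/confluent/bin/kafka-console-consumer.sh --bootstrap-server <destination-broker>:6667 --topic R1R1 --from-beginning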

So far so good: we have replicated the topic in another cluster thousands of miles away…

Stop the mirroring process, add partitions to the R1R1 topic up to 20, and start the mirroring again with this new command:

/u01/oehpcs/confluent/bin/kafka-mirror-maker.sh --consumer.config /u01/oehpcs/confluent/etc/mirror-maker/sourceClusterConsumer.config --producer.config /u01/oehpcs/confluent/etc/mirror-maker/targetClusterProducer.config --num.streams 20 --whitelist "R.*" --message.handler kafka.tools.OehcsTopicSuffixMirrorMakerHandler --message.handler.args R1


As you can see, the mirroring now runs faster!

That’s all folks!

Enjoy 😉


Containerizing kubectl for working with OKE (Oracle Kubernetes Engine)


kubectl is one of the command line interfaces for managing k8s. In this post, we containerize kubectl for easy use across different environments.

Because kubectl access to OKE relies on the OCI CLI, we set up the kubectl tool by adding it to the ocloudshell container we created in this post.

STEP 1

Configure OCI CLI as mentioned in the referenced post

STEP 2

Download kubectl and put it in the ocloudshell directory
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl

STEP 3

Configure kubectl for OKE following the instructions you can find on the main page of each K8s cluster created in the OCI dashboard UI:

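Those instructions boil down to generating a kubeconfig with the OCI CLI, roughly like this (the cluster OCID is a placeholder):

oci ce cluster create-kubeconfig --cluster-id <cluster-ocid> --file ~/.kube/config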

STEP 4

Copy the .kube directory created in your home directory to the ocloudshell directory

STEP 5

Create Dockerfile

FROM store/oracle/serverjre:8
ENV LC_ALL=en_US.utf8 \
    LANG=en_US.utf8
ARG OPC_CLI_VERSION=18.1.2
ENV OPC_CLI_PKG=opc-cli-$OPC_CLI_VERSION.zip
WORKDIR /ocloudshell/
# enable the Oracle Linux developer repos and install the OCI CLI
RUN curl -o /etc/yum.repos.d/public-yum-ol7.repo http://yum.oracle.com/public-yum-ol7.repo \
 && yum-config-manager --enable ol7_developer_EPEL \
 && yum-config-manager --enable ol7_developer \
 && yum -y install unzip python-oci-cli \
 && rm -rf /var/cache/yum/*
WORKDIR /root
# OCI CLI configuration and private key (another reason not to publish this image)
ADD .oci/ .oci/
RUN chmod 400 .oci/config
RUN chmod 400 .oci/oci_api_key.pem
# kubeconfig generated in step 3, plus the kubectl binary downloaded in step 2
ADD .kube/ .kube/
ADD kubectl /usr/local/bin/kubectl
RUN chmod +x /usr/local/bin/kubectl
CMD ["/bin/bash"]

STEP 6

Create and push the container to a private repo*
docker build -t yourrepo/ocloudshell .

docker push yourrepo/ocloudshell
(*) Remember: don’t push it to a public repo or you will put your environment at serious risk!!

TEST IT

So far, so good. Let’s test the container by executing a command, for instance:

docker run -it javiermugueta/ocloudshell kubectl get po

That’s all folks!

Enjoy 🙂

Containerizing Oracle Cloud Infrastructure (OCI) Command Line Interface (CLI)


Oracle Cloud Infrastructure (OCI) command line interface (CLI) is one of several methods provided for managing Oracle’s generation 2 cloud infrastructure.

The CLI can be installed and configured on your local machine just by following the instructions in the documentation.

In this post, we explain an alternative method for using the CLI: running it from a container. Main advantages: portability and encapsulation of configurations, just in case you are managing more than one tenant.

STEP 1

Install and configure the OCI CLI. Please notice that a hidden folder called .oci gets created in your home directory:

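Installation and configuration are essentially two commands (a sketch; see the documentation for details):

bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"
oci setup config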

STEP 2

Create a directory for creating the container; let’s call it ocloudshell, for instance:

mkdir ocloudshell

STEP 3

Copy the .oci directory in your home directory to the ocloudshell dir

STEP 4

Edit the .oci/config file and change the absolute paths to be relative to the home directory. For instance, on my Mac it looks as follows:

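For reference, after the edit the file ends up looking roughly like this (the OCIDs, fingerprint and region are placeholders):

[DEFAULT]
user=ocid1.user.oc1..<your-user-ocid>
fingerprint=<your-key-fingerprint>
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<your-tenancy-ocid>
region=eu-frankfurt-1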

STEP 5

Download the OPC CLI and put it in the ocloudshell dir

STEP 6

Create the Dockerfile as follows:

FROM store/oracle/serverjre:8
ENV LC_ALL=en_US.utf8 \
    LANG=en_US.utf8
ARG OPC_CLI_VERSION=18.1.2
ENV OPC_CLI_PKG=opc-cli-$OPC_CLI_VERSION.zip
WORKDIR /ocloudshell/
# enable the Oracle Linux developer repos and install the OCI CLI (python-oci-cli)
RUN curl -o /etc/yum.repos.d/public-yum-ol7.repo http://yum.oracle.com/public-yum-ol7.repo \
 && yum-config-manager --enable ol7_developer_EPEL \
 && yum-config-manager --enable ol7_developer \
 && yum -y install unzip python-oci-cli \
 && rm -rf /var/cache/yum/*
WORKDIR /root
# bake the CLI configuration and private key into the image, with restrictive permissions
ADD .oci/ .oci/
RUN chmod 400 .oci/config
RUN chmod 400 .oci/oci_api_key.pem
CMD ["/bin/bash"]

STEP 7

Create the container and push it to a private repository!*
docker build -t myrepo/ocloudshell .

docker push myrepo/ocloudshell
(*) Don’t push the image to a public repo for obvious security reasons

TEST IT

So far, so good. Let’s test the container by executing a command; here is the command reference documentation. For instance, let’s execute a command for starting a VM:

docker run -it javiermugueta/ocloudshell oci compute instance action --instance-id ocid1.instance.oc1.eu-frankfurt-1.abtheljtbocj2w4qywieacalgsortabg4kep77lplqfwfmlup77725rvsjxa --action start

That’s all folks!

Enjoy 🙂